
202 RDS Jobs

JobPe aggregates results for easy access, but you apply directly on the source job portal.

4.0 - 6.0 years

6 - 8 Lacs

Hyderabad

Work from Office


What you will do
In this vital role you will be responsible for designing, building, and maintaining scalable, secure, and reliable AWS cloud infrastructure. This is a hands-on engineering role requiring deep expertise in Infrastructure as Code (IaC), automation, cloud networking, and security. The ideal candidate should have strong AWS knowledge and be capable of writing and maintaining Terraform, CloudFormation, and CI/CD pipelines to streamline cloud deployments. Please note, this is an onsite role based in Hyderabad.

Roles & Responsibilities:

AWS Infrastructure Design & Implementation
- Architect, implement, and manage highly available AWS cloud environments.
- Design VPCs, subnets, security groups, and IAM policies to enforce security best practices.
- Optimize AWS costs using reserved instances, savings plans, and auto-scaling.

Infrastructure as Code (IaC) & Automation
- Develop, maintain, and enhance Terraform and CloudFormation templates for cloud provisioning.
- Automate deployment, scaling, and monitoring using AWS-native tools and scripting.
- Implement and manage CI/CD pipelines for infrastructure and application deployments.

Cloud Security & Compliance
- Enforce best practices in IAM, encryption, and network security.
- Ensure compliance with SOC 2, ISO 27001, and NIST standards.
- Implement AWS Security Hub, GuardDuty, and WAF for threat detection and response.

Monitoring & Performance Optimization
- Set up AWS CloudWatch, Prometheus, Grafana, and logging solutions for proactive monitoring.
- Implement auto-scaling, load balancing, and caching strategies for performance optimization.
- Troubleshoot cloud infrastructure issues and conduct root cause analysis.

Collaboration & DevOps Practices
- Work closely with software engineers, SREs, and DevOps teams to support deployments.
- Maintain GitOps best practices for cloud infrastructure versioning.
- Support an on-call rotation for high-priority cloud incidents.
What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree and 4 to 6 years of experience in computer science, IT, or a related field with hands-on cloud experience; OR
- Bachelor's degree and 6 to 8 years of experience in computer science, IT, or a related field with hands-on cloud experience; OR
- Diploma and 10 to 12 years of experience in computer science, IT, or a related field with hands-on cloud experience.

Must-Have Skills:
- Deep hands-on experience with AWS (EC2, S3, RDS, Lambda, VPC, IAM, ECS/EKS, API Gateway, etc.).
- Expertise in Terraform and CloudFormation for AWS infrastructure automation.
- Strong knowledge of AWS networking (VPC, Direct Connect, Transit Gateway, VPN, Route 53).
- Experience with Linux administration, scripting (Python, Bash), and CI/CD tools (Jenkins, GitHub Actions, CodePipeline, etc.).
- Strong troubleshooting and debugging skills in cloud networking, storage, and security.

Preferred Qualifications (Good-to-Have Skills):
- Experience with Kubernetes (EKS) and service mesh architectures.
- Knowledge of AWS Lambda and event-driven architectures.
- Familiarity with AWS CDK, Ansible, or Packer for cloud automation.
- Exposure to multi-cloud environments (Azure, GCP).
- Familiarity with HPC, DGX Cloud.

Professional Certifications (preferred):
- AWS Certified Solutions Architect (Associate or Professional)
- AWS Certified DevOps Engineer (Professional)
- Terraform Associate Certification

Soft Skills:
- Strong analytical and problem-solving skills.
- Ability to work effectively with global, virtual teams.
- Effective communication and collaboration with cross-functional teams.
- Ability to work in a fast-paced, cloud-first environment.
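As a concrete illustration of the VPC and subnet design work this role calls for, here is a minimal sketch using only Python's standard `ipaddress` module. The CIDR block, prefix length, and availability-zone names are hypothetical choices for the example, not values from the listing:

```python
import ipaddress

def plan_subnets(vpc_cidr, new_prefix, azs):
    """Split a VPC CIDR into equal subnets and assign them round-robin
    across availability zones (a common two-tier-per-AZ layout sketch)."""
    vpc = ipaddress.ip_network(vpc_cidr)
    subnets = list(vpc.subnets(new_prefix=new_prefix))
    return [
        {"az": azs[i % len(azs)], "cidr": str(s)}
        for i, s in enumerate(subnets[: 2 * len(azs)])  # two subnets per AZ
    ]

# Hypothetical layout: a /16 VPC carved into /20 subnets across two AZs.
plan = plan_subnets("10.0.0.0/16", 20, ["ap-south-1a", "ap-south-1b"])
```

The same arithmetic is what a Terraform `cidrsubnet()` call would perform; doing it in code first makes the intended address plan easy to review before provisioning.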

Posted 6 days ago

Apply

9.0 - 14.0 years

5 - 12 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office


Role & responsibilities
Experience: only candidates with 9+ years will be considered.
Mandatory skills: AWS Cloud services with Python, AWS Lambda, DynamoDB, RDS, AppSync, RESTful APIs.
Desired skill: CI/CD pipelines in GitLab Cloud.
Domain: Utilities.
1. AWS Cloud services with Python and its frameworks, such as Django, on the backend.
2. AWS services such as Lambda, DynamoDB, RDS, and AppSync.
3. Experience working with RESTful APIs and/or GraphQL.
4. Good understanding of development best practices such as pair programming and TDD.
5. Experience working in an agile environment.
Interested candidates should share their resume at recruiter.wtr26@walkingtree.in.
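To illustrate the Lambda-plus-RESTful-API stack this role centers on, here is a minimal routing sketch for an API Gateway-style Lambda handler. The `METERS` dict is a hypothetical in-memory stand-in for a DynamoDB table so the example stays self-contained; a real handler would use `boto3` instead:

```python
import json

# Hypothetical stand-in for a DynamoDB table; in a real Lambda you would
# call boto3.resource("dynamodb").Table(...) here.
METERS = {"m-1": {"id": "m-1", "status": "active"}}

def lambda_handler(event, context):
    """Minimal API Gateway (REST) routing sketch for a Lambda backend."""
    method = event.get("httpMethod")
    meter_id = (event.get("pathParameters") or {}).get("id")
    if method == "GET" and meter_id in METERS:
        return {"statusCode": 200, "body": json.dumps(METERS[meter_id])}
    if method == "POST":
        item = json.loads(event.get("body") or "{}")
        METERS[item["id"]] = item
        return {"statusCode": 201, "body": json.dumps(item)}
    return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
```

Keeping the routing logic in plain functions like this also makes the handler easy to unit-test without deploying, which fits the TDD practice the listing asks for.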

Posted 1 week ago

Apply

5.0 - 9.0 years

11 - 12 Lacs

Hyderabad

Work from Office


We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5 to 9+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
- Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53.
- Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
- Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD.
- Automate build, test, and deployment processes for Java applications.
- Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
- Containerize Java apps using Docker; deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
- Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
- Manage access with IAM roles/policies; use AWS Secrets Manager / Parameter Store for managing credentials.
- Enforce security best practices, encryption, and audits.
- Automate backups for databases and services using AWS Backup, RDS snapshots, and S3 lifecycle rules.
- Implement Disaster Recovery (DR) strategies.
- Work closely with development teams to integrate DevOps practices.
- Document pipelines, architecture, and troubleshooting runbooks.
- Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
- Experience working on Linux-based infrastructure.
- Excellent understanding of Ruby, Python, Perl, and Java.
- Configuring and managing databases such as MySQL and MongoDB.
- Excellent troubleshooting skills.
- Selecting and deploying appropriate CI/CD tools.
- Working knowledge of various tools, open-source technologies, and cloud services.
- Awareness of critical concepts in DevOps and Agile principles.
- Managing stakeholders and external interfaces.
- Setting up tools and required infrastructure.
- Defining and setting development, testing, release, update, and support processes for DevOps operation.
- Technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: F2F for candidates residing in Hyderabad / Zoom for other states
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2 - 4 pm
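The backup-automation responsibility above (S3 lifecycle rules) can be sketched as a small helper that builds the configuration dict boto3's `put_bucket_lifecycle_configuration` expects. The prefix and retention periods are illustrative assumptions, not the employer's policy:

```python
def backup_lifecycle_rules(prefix, glacier_after_days, expire_after_days):
    """Build an S3 lifecycle configuration that tiers backups to Glacier
    after N days and expires them after M days."""
    return {
        "Rules": [
            {
                "ID": f"backup-tiering-{prefix.strip('/')}",
                "Filter": {"Prefix": prefix},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": glacier_after_days, "StorageClass": "GLACIER"}
                ],
                "Expiration": {"Days": expire_after_days},
            }
        ]
    }

# Applying it would look like (bucket name hypothetical):
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-backup-bucket",
#     LifecycleConfiguration=backup_lifecycle_rules("db-dumps/", 30, 365))
```

Generating the dict in code keeps the retention policy reviewable and unit-testable before it ever touches a bucket.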

Posted 1 week ago

Apply

6.0 - 10.0 years

11 - 12 Lacs

Hyderabad

Work from Office


We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 6 to 10+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
- Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53.
- Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
- Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD.
- Automate build, test, and deployment processes for Java applications.
- Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
- Containerize Java apps using Docker; deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
- Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
- Manage access with IAM roles/policies; use AWS Secrets Manager / Parameter Store for managing credentials.
- Enforce security best practices, encryption, and audits.
- Automate backups for databases and services using AWS Backup, RDS snapshots, and S3 lifecycle rules.
- Implement Disaster Recovery (DR) strategies.
- Work closely with development teams to integrate DevOps practices.
- Document pipelines, architecture, and troubleshooting runbooks.
- Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
- Experience working on Linux-based infrastructure.
- Excellent understanding of Ruby, Python, Perl, and Java.
- Configuring and managing databases such as MySQL and MongoDB.
- Excellent troubleshooting skills.
- Selecting and deploying appropriate CI/CD tools.
- Working knowledge of various tools, open-source technologies, and cloud services.
- Awareness of critical concepts in DevOps and Agile principles.
- Managing stakeholders and external interfaces.
- Setting up tools and required infrastructure.
- Defining and setting development, testing, release, update, and support processes for DevOps operation.
- Technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: F2F for candidates residing in Hyderabad / Zoom for other states
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2 - 4 pm

Posted 1 week ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Hyderabad

Remote


As a Lead Engineer, you will play a critical role in shaping the technical direction of our projects. You will be responsible for leading a team of developers undertaking Creditsafe's digital transformation to our cloud infrastructure on AWS. Your expertise in Data Engineering, Python, and AWS will be crucial in building and maintaining high-performance, scalable, and reliable systems.

Key Responsibilities:
- Technical Leadership: Lead and mentor a team of engineers, providing guidance and support to ensure high-quality code and efficient project delivery.
- Software Design and Development: Collaborate with cross-functional teams to design and develop data-centric applications, microservices, and APIs that meet project requirements.
- AWS Infrastructure: Design, configure, and manage cloud infrastructure on AWS, including services like EC2, S3, Lambda, and RDS.
- Performance Optimization: Identify and resolve performance bottlenecks; optimize code and AWS resources to ensure scalability and reliability.
- Code Review: Conduct code reviews to ensure code quality, consistency, and adherence to best practices.
- Security: Implement and maintain security best practices within the codebase and cloud infrastructure.
- Documentation: Create and maintain technical documentation to facilitate knowledge sharing and onboarding of team members.
- Collaboration: Collaborate with product managers, architects, and other stakeholders to deliver high-impact software solutions.
- Research and Innovation: Stay up to date with the latest Python, Data Engineering, and AWS technologies, and propose innovative solutions that can enhance our systems.
- Troubleshooting: Investigate and resolve technical issues and outages as they arise.

Qualifications:
- Bachelor's or higher degree in Computer Science, Software Engineering, or a related field.
- Proven experience as a Data Engineer with a strong focus on AWS services.
- Solid experience in leading technical teams and project management.
- Proficiency in Python, including deep knowledge of data engineering implementation patterns.
- Strong expertise in AWS services and infrastructure setup.
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes) is a plus.
- Excellent problem-solving skills and the ability to troubleshoot complex technical issues.
- Strong communication and teamwork skills.
- A passion for staying updated with the latest industry trends and technologies.
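One of the Python data engineering implementation patterns the qualifications refer to, chunked batch loading, can be sketched in a few lines. The 25-item figure in the comment matches DynamoDB's `batch_write_item` cap; everything else is generic:

```python
from itertools import islice

def batched(records, size):
    """Yield fixed-size batches from any iterable - a common pattern for
    bulk loads (e.g. DynamoDB batch_write_item caps batches at 25 items)."""
    it = iter(records)
    while batch := list(islice(it, size)):
        yield batch

# Seven records in batches of three -> three batches, last one partial.
batches = list(batched(range(7), 3))
```

Because it works on any iterable, the same helper batches rows streamed from RDS or files read from S3 without loading everything into memory.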

Posted 1 week ago

Apply

5.0 - 8.0 years

8 - 13 Lacs

Pune

Work from Office


We are staffing small, self-contained development teams with people who love solving problems and building high-quality products and services. We use a wide range of technologies and are building up a next generation microservices platform that can make our learning tools and content available to all our customers. If you want to make a difference in the lives of students and teachers and understand what it takes to deliver high-quality software, we would love to talk to you about this opportunity.

Technology Stack
You'll work with technologies such as Java, Spring Boot, Kafka, Aurora, Mesos, and Jenkins. This will be a hands-on coding role working as part of a cross-functional team alongside other developers, designers, and quality engineers, within an agile development environment. We're working on the development of our next generation learning platform and solutions utilizing the latest in server and web technologies.

Responsibilities:
- Build high-quality, clean, scalable, and reusable code by enforcing best practices around software engineering architecture and processes (code reviews, unit testing, etc.) on the team.
- Work with the product owners to understand detailed requirements and own your code from design, implementation, and test automation through delivery of a high-quality product to our users.
- Drive the design, prototyping, implementation, and scaling of cloud data platforms to tackle business needs.
- Identify ways to improve data reliability, efficiency, and quality.
- Plan and perform development tasks from design specifications, and provide accurate time estimates for development tasks.
- Construct and verify (unit test) software components to meet design specifications.
- Perform quality assurance functions by collaborating with cross-team members to identify and resolve software defects.
- Provide mentoring on software design, construction, development methodologies, and best practices.
- Participate in production support and on-call rotation for the services owned by the team.
- Mentor less experienced engineers in understanding the big picture of company objectives, constraints, inter-team dependencies, etc.
- Participate in creating standards, such as security patterns and logging patterns, and ensure team members adhere to them.
- Collaborate with project architects and cross-functional team members/vendors in different geographical locations, and assist team members in proving the validity of new software technologies.
- Promote agile processes among development and the business, including facilitation of scrums.
- Take ownership over the things you build; help shape the product and technical vision, direction, and how we iterate.
- Work closely with your product and design teammates for improved stability, reliability, and quality.
- Perform other duties as assigned to ensure the success of the team and the entire organization.
- Run numerous experiments in a fast-paced, analytical culture so you can quickly learn and adapt your work.
- Promote a positive engineering culture through teamwork, engagement, and empowerment.
- Function as the tech lead for various features and initiatives on the team.
- Build and maintain CI/CD pipelines for services owned by the team, following secure development practices.

Skills & Experience:
- 5 to 8 years' experience in a relevant software development role.
- Excellent object-oriented design and programming skills, including the application of design patterns and avoidance of anti-patterns.
- Strong cloud platform skills: AWS Lambda, Terraform, SNS, SQS, RDS, Kinesis, DynamoDB, etc.
- Experience building large-scale, enterprise applications with ReactJS/AngularJS.
- Proficiency with front-end technologies such as HTML, CSS, and JavaScript preferred.
- Experience working in a collaborative team of application developers with source code repositories.
- Deep knowledge of more than one programming language, such as Node.js/Java.
- Demonstrable knowledge of AWS and data platform experience: Lambda, DynamoDB, RDS, S3, Kinesis, Snowflake.
- Demonstrated ability to follow through with all tasks, promises, and commitments.
- Ability to communicate and work effectively within priorities.
- Ability to advocate ideas and to objectively participate in design critiques.
- Ability to work under tight timelines in a fast-paced environment.
- Advanced understanding of software design concepts.
- Understanding of software development methodologies and principles.
- Ability to solve large-scale, complex problems.
- Ability to architect, design, implement, and maintain large-scale systems.
- Strong technical leadership and mentorship ability.
- Working experience of modern agile software development methodologies (e.g., Kanban, Scrum, Test-Driven Development).
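Services built on SQS, Kinesis, and DynamoDB routinely hit throttling, so retry policy is a recurring design point for roles like this. Here is a hedged sketch of full-jitter exponential backoff (the base and cap values are illustrative assumptions):

```python
import random

def backoff_delays(attempts, base=0.1, cap=5.0, rng=random.random):
    """Full-jitter exponential backoff: each delay is a random fraction of
    min(cap, base * 2**attempt), which spreads out retries from many
    clients instead of synchronizing them."""
    return [rng() * min(cap, base * 2 ** i) for i in range(attempts)]

# Injecting rng makes the schedule deterministic for tests; rng=1.0
# yields the upper bound of each delay window.
delays = backoff_delays(5, rng=lambda: 1.0)
```

The injected `rng` parameter is the design choice worth noting: it keeps the randomness testable, the same way a clock or AWS client would be injected in production code.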

Posted 1 week ago

Apply

3.0 - 6.0 years

6 - 8 Lacs

Pune

Work from Office


Software Engineering at HMH is focused on building fantastic software to meet the challenges facing teachers and learners, enabling and supporting a wide range of next generation learning experiences. We design and build custom applications and services used by millions. We are creating teams full of innovative, eager software professionals to build the products that will transform our industry. We are staffing small, self-contained development teams with people who love solving problems and building high-quality products and services. We use a wide range of technologies and are building up a next generation microservices platform that can make our learning tools and content available to all our customers. If you want to make a difference in the lives of students and teachers and understand what it takes to deliver high-quality software, we would love to talk to you about this opportunity.

Technology Stack
You'll work with technologies such as Java, Spring Boot, Kafka, Aurora, Mesos, and Jenkins. This will be a hands-on coding role working as part of a cross-functional team alongside other developers, designers, and quality engineers, within an agile development environment. We're working on the development of our next generation learning platform and solutions utilizing the latest in server and web technologies.

Responsibilities:
- Build high-quality, clean, scalable, and reusable code by enforcing best practices around software engineering architecture and processes (code reviews, unit testing, etc.) on the team.
- Work with the product owners to understand detailed requirements and own your code from design, implementation, and test automation through delivery of a high-quality product to our users.
- Identify ways to improve data reliability, efficiency, and quality.
- Perform development tasks from design specifications.
- Construct and verify (unit test) software components to meet design specifications.
- Perform quality assurance functions by collaborating with cross-team members to identify and resolve software defects.
- Participate in production support and on-call rotation for the services owned by the team.
- Adhere to standards, such as security patterns, logging patterns, etc.
- Collaborate with cross-functional team members/vendors in different geographical locations to ensure successful delivery of product features.
- Take ownership over the things you build; help shape the product and technical vision, direction, and how we iterate.
- Work closely with your teammates for improved stability, reliability, and quality.
- Perform other duties as assigned to ensure the success of the team and the entire organization.
- Run numerous experiments in a fast-paced, analytical culture so you can quickly learn and adapt your work.
- Build and maintain CI/CD pipelines for services owned by the team, following secure development practices.

Skills & Experience:
- 3 to 6 years' experience in a relevant software development role.
- Excellent object-oriented design and programming skills, including the application of design patterns and avoidance of anti-patterns.
- Strong cloud platform skills: AWS Lambda, Terraform, SNS, SQS, RDS, Kinesis, DynamoDB, etc.
- Experience building large-scale, enterprise applications with ReactJS/AngularJS.
- Proficiency with front-end technologies such as HTML, CSS, and JavaScript preferred.
- Experience working in a collaborative team of application developers with source code repositories.
- Deep knowledge of more than one programming language, such as Node.js/Java.
- Demonstrable knowledge of AWS and data platform experience: Lambda, DynamoDB, RDS, S3, Kinesis, Snowflake.
- Demonstrated ability to follow through with all tasks, promises, and commitments.
- Ability to communicate and work effectively within priorities.
- Ability to work under tight timelines in a fast-paced environment.
- Understanding of software development methodologies and principles.
- Ability to solve large-scale, complex problems.
- Working experience of modern agile software development methodologies (e.g., Kanban, Scrum, Test-Driven Development).

Posted 1 week ago

Apply


5.0 - 10.0 years

8 - 13 Lacs

Pune

Work from Office


We are seeking a highly skilled Senior Infrastructure Engineer with expertise in Windows and Linux operating systems, Azure and AWS cloud platforms, and enterprise-grade infrastructure solutions. The ideal candidate will have a proven track record of designing, implementing, and managing complex infrastructure in a dynamic environment.

Key Responsibilities:
1. Systems and Infrastructure Management: Design, deploy, and maintain Windows- and Linux-based systems in both on-premises and cloud environments, ensuring the reliability, security, and performance of the systems. Administer core services such as Active Directory, DHCP, DNS, and Group Policy.
2. Cloud Infrastructure: Architect, implement, and manage Azure infrastructure components, including virtual machines, virtual networks, storage accounts, Azure AD (Entra ID), and enterprise app registrations. Manage AWS cloud resources, including EC2 instances, S3 buckets, RDS databases, and IAM roles.
3. Virtualization and Storage Solutions: Utilize VMware and Windows hypervisor technologies to design, deploy, and manage virtualized environments for performance and scalability. Design and maintain enterprise backup and storage solutions, including Rubrik and Pure Storage.
4. Automation and Orchestration: Automate infrastructure provisioning, configuration, and deployment using tools like Terraform, Ansible, PowerShell, and Azure Automation.
5. Monitoring and Performance Optimization: Monitor system performance using tools such as Datadog and LogicMonitor. Implement performance tuning measures to enhance infrastructure reliability and efficiency.
6. Security and Compliance: Implement security best practices, access controls, and compliance measures in line with SOC 2 and SOX standards.
7. Collaboration and Innovation: Work with cross-functional teams to design scalable, secure, and highly available infrastructure solutions. Share expertise and mentor team members through documentation and training sessions.
8. Disaster Recovery: Develop and maintain backup strategies and disaster recovery plans for critical systems and data.
9. Continuous Learning: Stay current with emerging technologies and industry best practices related to infrastructure engineering, cloud computing, and virtualization.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in infrastructure engineering with a focus on Windows and Linux operating systems and cloud platforms (Azure and AWS).
- Strong proficiency in Windows server administration, Linux system administration, and performance tuning.
- Hands-on experience with Azure services such as VMs, Azure Networking, Azure Storage, and enterprise app registrations.
- Familiarity with AWS services and tools, including EC2, S3, RDS, VPC, and CloudFormation.
- Expertise in VMware vSphere, ESXi, and vCenter.
- Experience with scripting and automation tools like PowerShell, Python, and Bash.
- Excellent communication and collaboration skills.
- Strong problem-solving abilities and adaptability in a fast-paced environment.

Certifications (Preferred):
- Microsoft Certified: Azure Administrator Associate (AZ-104) or equivalent.
- AWS Certified Solutions Architect - Associate or equivalent.
- VMware Certified Professional (VCP).
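The backup-strategy responsibility above is often expressed as a grandfather-father-son retention policy. The following is a hedged sketch; the 7-daily/4-weekly/12-monthly counts and the 30-day month approximation are assumptions for illustration, not the employer's actual policy:

```python
from datetime import date

def keep_backup(backup_day, today, daily=7, weekly=4, monthly=12):
    """Grandfather-father-son retention sketch: keep every backup from the
    last `daily` days, Sunday backups for `weekly` weeks, and
    first-of-month backups for roughly `monthly` months."""
    age = (today - backup_day).days
    if age < daily:
        return True
    if backup_day.weekday() == 6 and age < weekly * 7:   # Sunday backups
        return True
    if backup_day.day == 1 and age < monthly * 30:       # approx. months
        return True
    return False
```

A pruning job would apply this predicate to each snapshot's date and delete the ones it returns `False` for, which makes the retention rules themselves trivially unit-testable.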

Posted 1 week ago

Apply

3.0 - 8.0 years

6 - 10 Lacs

Bengaluru

Hybrid


Okta's Workforce Identity Cloud Security Engineering group is looking for an experienced and passionate Senior Site Reliability Engineer to join a team focused on designing and developing security solutions to harden our cloud infrastructure. We embrace innovation and pave the way to transform bright ideas into excellent security solutions that help run large-scale, critical infrastructure. We encourage you to prescribe defense-in-depth measures and industry security standards, and to enforce the principle of least privilege to help take our security posture to the next level. Our Infrastructure Security team has a niche skill set that balances security domain expertise with the ability to design, implement, and roll out infrastructure across multiple cloud environments without adding friction to product functionality or performance. We are responsible for the ever-growing need to improve our customers' safety and privacy by providing security services that are coupled with the core Okta product. This is a high-impact role in a security-centric, fast-paced organization that is poised for massive growth and success. You will act as a liaison between the Security org and the Engineering org to build technical leverage and influence the security roadmap. You will focus on engineering security aspects of the systems used across our services. Join us and be part of a company that is about to change the cloud computing landscape forever. Bring all the passion and dedication along, and there's no telling what you could accomplish!
You will work on:
- Building, running, and monitoring Okta's production infrastructure.
- Being an evangelist for security best practices, and leading initiatives/projects to strengthen our security posture for critical infrastructure.
- Responding to production incidents and determining how we can prevent them in the future.
- Triaging and troubleshooting complex production issues to ensure reliability and performance.
- Identifying and automating manual processes.
- Continuously evolving our monitoring tools and platform.
- Promoting and applying best practices for building scalable and reliable services across engineering.
- Developing and maintaining technical documentation, runbooks, and procedures.
- Supporting a 24x7 online environment as part of an on-call rotation.

You are an ideal candidate if you:
- Are always willing to go the extra mile: see a problem, fix the problem.
- Have experience automating, securing, and running large-scale production IAM and containerized services in AWS (EC2, ECS, KMS, Kinesis, RDS), GCP (GKE, GCE), or other cloud providers.
- Have knowledge of CI/CD principles, Linux fundamentals, OS hardening, networking concepts, and IP protocols.
- Have an understanding of and familiarity with configuration management tools like Chef and Terraform.
- Have experience in operational tooling languages such as Ruby, Python, Go, and shell, and with the use of source control.
- Have experience with industry-standard security tools like Nessus, Qualys, OSQuery, Splunk, etc.
- Have experience with Public Key Infrastructure (PKI) and secrets management.

Bonus points for:
- Experience conducting threat assessments and assessing vulnerabilities in a high-availability setting.
- Understanding of MySQL, including replication and clustering strategies, and familiarity with data stores such as DynamoDB, Redis, and Elasticsearch.

Minimum Required Knowledge, Skills, Abilities, and Qualities:
- 3+ years of experience architecting and running complex AWS or other cloud networking infrastructure resources.
- 3+ years of experience with Chef and Terraform.
- Unflappable troubleshooting skills.
- Strong Linux understanding and experience.
- Security background and knowledge.
- BS in computer science (or equivalent experience).

This role requires in-person onboarding and travel to our Bengaluru, IN office during the first week of employment.
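As a small example of the security-posture auditing this role involves, here is a sketch that flags security-group rules exposing risky ports to the world. The input dicts mirror the shape of EC2 `describe_security_groups` output, but the data and port list are assumptions for illustration:

```python
def open_ingress_findings(security_groups, risky_ports=frozenset({22, 3389})):
    """Flag ingress rules that expose risky ports (SSH/RDP by default)
    to 0.0.0.0/0. Input mirrors EC2 describe_security_groups output."""
    findings = []
    for sg in security_groups:
        for perm in sg.get("IpPermissions", []):
            world_open = any(r.get("CidrIp") == "0.0.0.0/0"
                             for r in perm.get("IpRanges", []))
            from_p, to_p = perm.get("FromPort"), perm.get("ToPort")
            if world_open and from_p is not None and any(
                    from_p <= p <= to_p for p in risky_ports):
                findings.append((sg["GroupId"], from_p, to_p))
    return findings
```

Because the function takes plain dicts, it can be tested offline and then fed live data from `boto3.client("ec2").describe_security_groups()` in a scheduled audit job.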

Posted 1 week ago


2.0 - 7.0 years

6 - 11 Lacs

Bengaluru

Hybrid


You will work on:
- Mentoring, managing, and leading a team of SREs with a broad range of expertise and experience.
- Being an evangelist and advocate for security best practices, leading initiatives and projects to strengthen our security posture for our most critical infrastructure.
- Responding to production incidents, driving us to remediation as quickly as possible, and determining how we can prevent them in the future.
- Triaging and troubleshooting complex production issues to ensure reliability and performance.
- Working closely with our stakeholders across the organization to ensure our new capabilities are aligned to our competing constraints of reliability, security, and delivery velocity.
- Partnering directly with recruiting and people ops to hire and retain the best talent in the world.
- Keeping a sharp eye on our metrics, including vulnerability scanning and security posture, cloud spend, RPO and RTO, and toil overhead, and ensuring our projects are driving our metrics in the right direction.
- Supporting a 24x7 online environment as part of an on-call rotation.

You are an ideal candidate if you:
- Are always willing to go the extra mile: see a problem, fix the problem.
- Are passionate about encouraging the development of engineering peers and leading by example.
- Have experience managing teams running large-scale production Java/Tomcat and containerized services in AWS (EC2, ECS, KMS, Kinesis, RDS) or other cloud providers.
- Have deep knowledge of CI/CD principles, Linux fundamentals, OS hardening, networking concepts, and IP protocols.

Minimum Required Knowledge, Skills, Abilities, and Qualities:
- 2+ years of experience managing SRE or SWE teams, ideally in a cloud-native environment.
- Strong leadership, communication, and project management skills.
- Strong security background and knowledge.
- BS in Computer Science (or equivalent experience).

Posted 1 week ago


9.0 - 10.0 years

11 - 12 Lacs

Hyderabad

Work from Office


We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 9 to 10+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
- Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53.
- Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
- Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD.
- Automate build, test, and deployment processes for Java applications.
- Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
- Containerize Java apps using Docker; deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
- Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
- Manage access with IAM roles/policies; use AWS Secrets Manager / Parameter Store for managing credentials.
- Enforce security best practices, encryption, and audits.
- Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules.
- Implement Disaster Recovery (DR) strategies.
- Work closely with development teams to integrate DevOps practices.
- Document pipelines, architecture, and troubleshooting runbooks.
- Monitor and optimize AWS resource usage.
- Use AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
- Experience working on Linux-based infrastructure.
- Excellent understanding of Ruby, Python, Perl, and Java.
- Configuring and managing databases such as MySQL and MongoDB.
- Excellent troubleshooting skills.
- Selecting and deploying appropriate CI/CD tools.
- Working knowledge of various tools, open-source technologies, and cloud services.
- Awareness of critical concepts in DevOps and Agile principles.
- Managing stakeholders and external interfaces.
- Setting up tools and required infrastructure.
- Defining and setting development, testing, release, update, and support processes for DevOps operation.
- Technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: F2F for candidates residing in Hyderabad / Zoom for other states
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2 - 4 PM
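The backup-automation item above typically combines RDS snapshots with S3 lifecycle rules. A hedged sketch of building one lifecycle rule as a plain dict, shaped like the payload boto3's `put_bucket_lifecycle_configuration` expects (the prefix, day counts, and rule ID below are illustrative, not from the posting):

```python
def backup_lifecycle_rule(prefix, glacier_after_days=30, expire_after_days=365):
    """Build an S3 lifecycle rule that tiers backup objects under `prefix`
    to Glacier after `glacier_after_days` and deletes them after
    `expire_after_days`. Values here are example retention settings."""
    return {
        "ID": f"backup-retention-{prefix.strip('/')}",
        "Filter": {"Prefix": prefix},          # rule applies only to this key prefix
        "Status": "Enabled",
        "Transitions": [
            {"Days": glacier_after_days, "StorageClass": "GLACIER"},
        ],
        "Expiration": {"Days": expire_after_days},
    }
```

In a real setup this dict would be wrapped as `{"Rules": [rule]}` and passed to the S3 API; keeping the builder pure makes the retention policy unit-testable without AWS credentials.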

Posted 1 week ago


5.0 - 8.0 years

8 - 12 Lacs

Hyderabad

Work from Office


S&P Dow Jones Indices is seeking a Python/Big Data developer to be a key player in the implementation and support of data platforms for S&P Dow Jones Indices. This role requires a seasoned technologist who contributes to application development and maintenance. The candidate should actively evaluate new products and technologies to build solutions that streamline business operations. The candidate must be delivery-focused with solid financial applications experience, and will assist in day-to-day support and operations functions, design, development, and unit testing.

Responsibilities and Impact:
- Lead the design and implementation of EMR Spark workloads using Python, including data access from relational databases and cloud storage technologies.
- Implement new, powerful functionalities using Python, PySpark, AWS, and Delta Lake.
- Independently come up with optimal designs for business use cases and implement them using big data technologies.
- Enhance existing functionalities in Oracle/Postgres procedures and functions.
- Performance-tune existing Spark jobs.
- Implement new functionalities in Python, Spark, and Hive.
- Collaborate with cross-functional teams to support data-driven initiatives.
- Mentor junior team members and promote best practices.
- Respond to technical queries from the operations and product management teams.

What We're Looking For:
Basic Required Qualifications:
- Bachelor's degree in Computer Science, Information Systems, or Engineering, or equivalent work experience.
- 5-8 years of IT experience in application support or development.
- Hands-on development experience writing effective and scalable Python programs.
- Deep understanding of OOP concepts and development models in Python.
- Knowledge of popular Python libraries/ORM libraries and frameworks.
- Exposure to unit testing frameworks like Pytest.
- Good understanding of Spark architecture, as the system involves data-intensive operations.
- Substantial work experience in Spark performance tuning.
- Experience/exposure with the Kafka messaging platform.
- Experience with build tools like Maven and PyBuilder.
- Exposure to AWS offerings such as EC2, RDS, EMR, Lambda, S3, and Redis.
- Hands-on experience with at least one relational database (Oracle, Sybase, SQL Server, PostgreSQL).
- Hands-on experience with SQL queries and writing stored procedures and functions.
- A strong willingness to learn new technologies.
- Excellent communication skills, with strong verbal and writing proficiencies.

Additional Preferred Qualifications:
- Proficiency in building data analytics solutions on AWS Cloud.
- Experience with microservice and serverless architecture implementation.
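"Effective and scalable Python" in a data-intensive role usually means streaming through data instead of materializing it. A small illustrative example (my own, not from the posting) of a chunked aggregation that is trivially Pytest-testable and keeps memory constant regardless of input size:

```python
def chunked_mean(values, chunk_size=10_000):
    """Compute the mean of an arbitrarily large iterable by folding it
    in fixed-size chunks, so memory use stays O(chunk_size) rather than
    O(n). Returns None for an empty input."""
    total, count = 0.0, 0
    chunk = []
    for v in values:
        chunk.append(v)
        if len(chunk) >= chunk_size:
            total += sum(chunk)
            count += len(chunk)
            chunk.clear()
    # Fold the final partial chunk.
    total += sum(chunk)
    count += len(chunk)
    return total / count if count else None
```

The same fold-by-chunk shape is what distributed engines like Spark apply per partition before combining partial aggregates.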

Posted 1 week ago


4.0 - 6.0 years

7 - 9 Lacs

Gurugram

Work from Office


The Team: This team partners with global Datafeed Application Specialists, Product Development, Product Management, and Sales teams to provide quality technical assistance to our external Enterprise Data Delivery Solution customers. This includes client-site environment sizing, product deployment, configuration, and performance tuning.

The Impact: You will be able to contribute to customers, the team, and the company in meaningful ways. You will be integrated into a group with high expectations and a focus on quality user experiences, and can expect to make a significant impact by demonstrating your technical knowledge and skills. Your position is essential as it is the link between the end user and our Channels Data Delivery products/solutions. Combining expertise in our product and content offerings with a deep understanding of our customer profiles and target markets, you will act as a consultant providing the best and most efficient ways for our customers to use our tools to achieve quality results, integrating our data into their internal mission-critical operational workflows and data environments.

What's in it for you?
- Build a career with a global company
- Partner with other product specialists and customers as a technical consultant
- Interact directly with clients and resolve challenging technical issues
- Learn the global financial markets and related content workflow needs
- Grow and improve your skills by working on enterprise-level products and new technologies
- Experience different challenges on a daily basis

Responsibilities:
- Be a technical product specialist for our Channels Data Delivery solutions and provide in-depth technical assistance to clients
- Deploy/support our Xpressfeed Loader application in our clients' enterprise-level environments
- Assist in solution sizing, deployments, configurations, and troubleshooting on the go
- Gain in-depth knowledge of products from a content and delivery standpoint, and assist with pre- and post-sales opportunities
- Bring new ideas for innovation and automation excellence into the technical support process, and build tools that will help replicate, troubleshoot, and resolve client issues
- Research and gain knowledge on the rapidly evolving product offerings
- Assess the technical dynamics within client environments that might impact product performance and recommend changes for improvement
- Be involved in product strategy, roadmaps, and user-acceptance testing
- Perform client-side solution deployments, implementations, upgrades, and migrations
- Be the voice of the customers for product enhancements

What We're Looking For:
Core Qualifications:
- Degree in Computer Science, Information Systems, or equivalent experience
- Excellent communication skills (verbal, written, and presentation)
- Demonstrated technical troubleshooting and problem-solving skills, resourcefulness, attention to detail, quality, and follow-through
- Technical product support or technical sales experience
- Must be confident and energized leading client-facing technical discussions
- Experience with software deployments & configuration
- Windows and Linux operating system knowledge
- SQL query skills
- Experience with cloud environments (AWS, MS Azure, and/or Google Cloud)
- Solid knowledge of AWS (EC2, RDS/Aurora) environments
- Sizing, standing up, and deploying software and databases in AWS a PLUS
- Self-starter with the ability to keep pace with rapid changes in evolving technologies

Highly Desired Knowledge, Skills & Experience:
- Finance-related experience or degree
- Capital markets (financial, credit, trading, benchmarks/indexes, industries, etc.)
- SQL Server, Oracle, PostgreSQL, and/or MySQL a PLUS
- Windows services and/or Linux daemon processes
- Windows system administration/security
- Linux administration/security
- Database administration
- FTP protocols (FTP, SFTP, FTPS, etc.) & features/commands
- Proxies (HTTP/FTP)
- Network & security protocols
- Open-source evolving databases
- Product strategy and product development lifecycles (agile/scrum)
- Data integration concepts & techniques
- Data modeling, big data & alternative/unstructured data

Posted 1 week ago


5.0 - 9.0 years

7 - 11 Lacs

Hyderabad

Work from Office


About the Role: Grade Level (for internal use): 11

The Role: Lead Software Engineering

The Team: Our team is responsible for the architecture, design, development, and maintenance of technology solutions to support the Sustainability business unit within Market Intelligence and other divisions. Our program is built on a foundation of inclusivity, enablement, adaptability, and respect, which fosters an environment of open communication and trust. We take pride in each team member's accountability and responsibility to move us forward in our strategic initiatives. Our work is collaborative; we work transparently with others within our business unit and across the entire organization.

The Impact: As a Lead, Cloud Engineering at S&P Global, you will be instrumental in streamlining the software development and deployment of our applications to meet the needs of our business. Your work ensures seamless integration and continuous delivery, enhancing the platform's operational capabilities to support our business units. You will collaborate with software engineers and data architects to automate processes, improve system reliability, and implement monitoring solutions. Your contributions will be vital in maintaining high availability, security, and performance standards, ultimately leading to the delivery of impactful, data-driven solutions.

What's in it for you:
- Career Development: Build a meaningful career with a leading global company at the forefront of technology.
- Dynamic Work Environment: Work in an environment that is dynamic and forward-thinking, directly contributing to innovative solutions.
- Skill Enhancement: Enhance your software development skills on an enterprise-level platform.
- Versatile Experience: Gain full-stack experience and exposure to cloud technologies.
- Leadership Opportunities: Mentor peers and influence the product's future as part of a skilled team.
Key Responsibilities:
- Design and develop scalable cloud applications using various cloud services.
- Collaborate with cross-functional teams to define, design, and deliver new features.
- Implement cloud security best practices and ensure compliance with industry standards.
- Monitor and optimize application performance and reliability in the cloud environment.
- Troubleshoot and resolve issues related to our applications and services.
- Stay updated with the latest cloud technologies and trends.
- Manage our cloud instances and their lifecycle to guarantee a high degree of reliability, security, scalability, and confidence at any given time.
- Design and implement CI/CD pipelines to automate software delivery and infrastructure changes.
- Collaborate with development and operations teams to improve collaboration and productivity.
- Manage and optimize cloud infrastructure and services.
- Implement configuration management tools and practices.
- Ensure security best practices are followed in the deployment process.

What We're Looking For:
- Bachelor's degree in Computer Science or a related field.
- Minimum of 10+ years of experience in a cloud engineering or related role.
- Proven experience in cloud development and deployment.
- Proven experience in agile and project management.
- Expertise with cloud services (AWS, Azure, Google Cloud).
- Experience in EMR, EKS, Glue, Terraform, and cloud security.
- Proficiency in programming languages such as Python, Java, Scala, and Spark.
- Strong implementation experience with AWS services (e.g., EC2, ECS, ELB, RDS, EFS, EBS, VPC, IAM, CloudFront, CloudWatch, Lambda, S3).
- Proficiency in scripting languages such as Bash, Python, or PowerShell.
- Experience with Azure CI/CD tools.
- Experience in SQL and MS SQL Server.
- Knowledge of containerization technologies like Docker and Kubernetes.
- Nice to have: knowledge of GitHub Actions, Redshift, and machine learning frameworks.
- Excellent problem-solving and communication skills.
- Ability to quickly, efficiently, and effectively define and prototype solutions with continual iteration within aggressive product deadlines.
- Strong communication and documentation skills for both technical and non-technical audiences.

Posted 1 week ago


8.0 - 10.0 years

11 - 12 Lacs

Hyderabad

Work from Office


We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 8 to 10+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
- Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53.
- Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
- Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD.
- Automate build, test, and deployment processes for Java applications.
- Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
- Containerize Java apps using Docker; deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
- Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
- Manage access with IAM roles/policies; use AWS Secrets Manager / Parameter Store for managing credentials.
- Enforce security best practices, encryption, and audits.
- Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules.
- Implement Disaster Recovery (DR) strategies.
- Work closely with development teams to integrate DevOps practices.
- Document pipelines, architecture, and troubleshooting runbooks.
- Monitor and optimize AWS resource usage.
- Use AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
- Experience working on Linux-based infrastructure.
- Excellent understanding of Ruby, Python, Perl, and Java.
- Configuring and managing databases such as MySQL and MongoDB.
- Excellent troubleshooting skills.
- Selecting and deploying appropriate CI/CD tools.
- Working knowledge of various tools, open-source technologies, and cloud services.
- Awareness of critical concepts in DevOps and Agile principles.
- Managing stakeholders and external interfaces.
- Setting up tools and required infrastructure.
- Defining and setting development, testing, release, update, and support processes for DevOps operation.
- Technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: F2F for candidates residing in Hyderabad / Zoom for other states
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2 - 4 PM

Posted 1 week ago


10.0 - 17.0 years

30 - 45 Lacs

Bengaluru

Work from Office


Lead the end-to-end software development lifecycle for custom application projects, ensuring high-quality delivery and alignment with business requirements. Design, develop, and implement system integrations to streamline business processes. Mentor and guide teams.

Required Candidate profile: Experience working with technologies in a large-scale, multiplatform systems environment. Team lead experience defining/delivering enterprise solutions and leading custom application development projects, with SDLC & Agile methods.

Posted 1 week ago


12.0 - 15.0 years

40 - 60 Lacs

Hyderabad

Work from Office


- Strong in JavaScript frameworks
- Strong in HLDs & LLDs
- Strong in system design and database schemas
- Excellent in coding conventions & quality standards
- Experience with third-party integrations (REST APIs, SOAP APIs, XML, JSON, SaaS applications, AWS with S3, EKS, RDS, EC2)

Required Candidate profile:
- Experience building applications with profilers, APM tools, and security scanning tools
- Experience in CI/CD tooling
- Competency in a frontend framework/library, i.e., React, Angular, or Node.js
- Experience developing production code in TypeScript & React

Posted 1 week ago


4.0 - 7.0 years

6 - 9 Lacs

Bengaluru

Work from Office


Role Overview:
- Develop and execute integration tests to ensure seamless interaction between the various components of our cybersecurity solutions.
- Develop and execute manual and automated security test plans for our cybersecurity solutions.
- Utilize strong AWS knowledge to manage and test applications in cloud environments, ensuring high availability and security.
- Apply deep networking skills to validate network configurations, security protocols, and data integrity.
- Develop and execute comprehensive API test plans to verify the functionality, reliability, and performance of our API endpoints.
- Design and implement robust automation frameworks to streamline testing processes and improve efficiency.
- Identify, document, and track software defects, ensuring timely resolution and quality improvements.
- Work closely with developers, product managers, and other QA team members to understand requirements, design test plans, and deliver high-quality products.
- Stay updated with the latest industry trends, tools, and best practices to continuously improve testing methodologies.

Requirements:
- Minimum of 4+ years of experience in QA engineering, with a focus on integration testing and automation.
- Proven experience working with AWS services, including EC2, S3, RDS, and Lambda.
- Good understanding of network protocols, firewalls, VPNs, and security configurations.
- Hands-on experience with API testing tools such as Postman, SoapUI, or similar.
- Good knowledge of integration testing principles and methodologies.
- Proficiency in automation tools and frameworks such as Selenium, JUnit, TestNG, or similar.
- Familiarity with DevOps practices and CI/CD pipelines.
- Knowledge of containerization and orchestration tools like Docker and Kubernetes.
- Good programming skills in languages such as Java, Python, or similar.
- Excellent analytical and problem-solving skills.
- Strong verbal and written communication skills, with the ability to clearly articulate technical concepts.
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Advanced certifications in AWS or networking are a plus.
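API test plans like the ones described above usually include contract checks on response payloads. A minimal, framework-agnostic sketch (the field names in the example schema are hypothetical) of a validator that tools like Pytest could assert against:

```python
def validate_response(payload, schema):
    """Check a decoded API response dict against a field -> type contract.
    Returns a list of human-readable violations; an empty list means
    the payload satisfies the contract."""
    errors = []
    for field, expected_type in schema.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}, "
                          f"got {type(payload[field]).__name__}")
    return errors
```

Returning a list of violations rather than raising on the first failure lets one test report every contract break in a response at once.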

Posted 1 week ago


10.0 - 15.0 years

35 - 40 Lacs

Chennai

Work from Office


We seek an experienced and dynamic DevOps Architect to lead technical initiatives and manage a high-performing DevOps team in an IT services environment. This role combines technical expertise in AWS and DevOps practices with strong leadership and team management skills.

Primary Skills:
- AWS Services: Expertise in AWS services such as EC2, S3, RDS, Lambda, and Kubernetes (EKS).
- DevOps Practices: Strong experience in implementing CI/CD pipelines, infrastructure automation, and tools like Jenkins, Terraform, and AWS CodePipeline.
- Cloud Architecture: Experience in designing secure, scalable, and cost-effective cloud infrastructure.
- Team Leadership: Proven ability to manage and mentor a team of DevOps engineers, track progress, and ensure timely delivery.
- Monitoring & Troubleshooting: Expertise in AWS CloudWatch, CloudTrail, and performance optimization techniques.

Roles and Responsibilities:
Cloud Infrastructure Design and Implementation:
- Architect and oversee the deployment of secure, scalable, and cost-effective AWS cloud solutions using services like EC2, S3, RDS, Lambda, and Kubernetes (EKS).
- Ensure AWS cloud architectures align with industry best practices, security standards, and client requirements.
DevOps Strategy and Execution:
- Work with the client team to define and implement DevOps strategies, including CI/CD pipeline creation, automated testing, and infrastructure automation using tools like Jenkins, Terraform, and AWS CodePipeline.
- Optimize existing DevOps workflows to improve efficiency, reliability, and scalability.
- Understand the client-side technical environment and projects, and propose solutions and recommendations.
Team Management and Leadership:
- Manage and mentor a team of DevOps engineers, ensuring skill development and alignment with project goals.
- Assign tasks, track progress, and ensure timely delivery of client deliverables.
- Foster a collaborative, innovative, and results-driven team culture.
Client Engagement and Stakeholder Collaboration:
- Act as a key point of contact for clients, understanding their needs and translating business requirements into technical solutions.
- Collaborate with cross-functional teams, including development, QA, and operations, to ensure seamless project execution.
Monitoring, Troubleshooting, and Performance Optimization:
- Implement monitoring, logging, and alerting systems using AWS CloudWatch, CloudTrail, and other tools.
- Proactively identify and resolve performance bottlenecks and system issues to ensure high availability and reliability.
Process Improvement and Innovation:
- Continuously evaluate and recommend new tools, technologies, and practices to enhance team performance and service quality.
- Drive the adoption of emerging trends such as DevSecOps and Infrastructure as Code (IaC).
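Alerting systems of the kind described above commonly require several consecutive breaching datapoints before firing, to avoid paging on a single spike. A simplified sketch of that evaluation logic (real CloudWatch alarms support richer M-out-of-N and missing-data semantics; this models only the consecutive-breach case):

```python
def alarm_state(datapoints, threshold, periods_to_alarm):
    """Return "ALARM" once `periods_to_alarm` consecutive datapoints
    exceed `threshold`; otherwise "OK". A deliberately simplified model
    of threshold-alarm evaluation."""
    streak = 0
    for value in datapoints:
        # A breaching datapoint extends the streak; any healthy one resets it.
        streak = streak + 1 if value > threshold else 0
        if streak >= periods_to_alarm:
            return "ALARM"
    return "OK"
```

Requiring a streak trades detection latency for fewer false pages, which is the usual tuning conversation between SRE and application teams.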

Posted 1 week ago


10.0 - 15.0 years

35 - 40 Lacs

Pune

Work from Office


The Impact of a Lead Software Engineer - Data at Coupa: The Lead Software Engineer - Data is a pivotal role at Coupa, responsible for leading the architecture, design, and optimization of the data infrastructure that powers our business. This individual will collaborate with cross-functional teams, including Data Scientists, Product Managers, and Software Engineers, to build and maintain scalable, high-performance data solutions. The Lead Software Engineer - Data will drive the development of robust data architectures capable of handling large and complex datasets, while ensuring data integrity, security, and governance. Additionally, this role will provide technical leadership, mentoring engineers and defining best practices to ensure the efficiency and scalability of our data systems.

Suitable candidates will have a strong background in data engineering, with experience in data modeling, ETL development, and data pipeline optimization. They will also have deep expertise in programming languages such as Python, Java, or Scala, along with hands-on experience in cloud-based data storage and processing technologies such as AWS, Azure, or GCP. The impact of a skilled Lead Software Engineer - Data at Coupa will be significant, ensuring that our platform is powered by scalable, reliable, and high-quality data solutions. This role will enable the company to deliver innovative, data-driven solutions to our customers and partners. Their work will contribute to the overall success and growth of Coupa, solidifying its position as a leader in cloud-based spend management solutions.

What You'll Do:
- Lead and drive the development and optimization of scalable data architectures and pipelines.
- Design and implement best-in-class ETL/ELT solutions for real-time and batch data processing.
- Optimize Spark clusters for performance, reliability, and cost efficiency, implementing monitoring solutions to identify bottlenecks.
- Architect and maintain cloud-based data infrastructure leveraging AWS, Azure, or GCP services.
- Ensure data security and governance, enforcing compliance with industry standards and regulations.
- Develop and promote best practices for data modeling, processing, and analytics.
- Mentor and guide a team of data engineers, fostering a culture of innovation and technical excellence.
- Collaborate with stakeholders, including Product, Engineering, and Data Science teams, to support data-driven decision-making.
- Automate and streamline data ingestion, transformation, and analytics processes to enhance efficiency.
- Develop real-time and batch data processing solutions, integrating structured and unstructured data sources.

What you will bring to Coupa:
- Advanced working SQL knowledge and experience with relational databases, query authoring (SQL), and working familiarity with a variety of databases.
- Expertise in processing large workloads and complex code on Spark clusters.
- Expertise in setting up monitoring for Spark clusters and driving optimization based on insights and findings.
- Experience designing and implementing scalable data warehouse solutions to support analytical and reporting needs.
- Experience with API development and design with REST or GraphQL.
- Experience building and optimizing big data pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Ability to build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- Working knowledge of message queuing, stream processing, and highly scalable big data stores.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
We are looking for a candidate with 10+ years of experience in Data Engineering, with at least 3+ years in a Technical Lead role, who holds a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience using the following software/tools:
- Object-oriented/object function scripting languages: Python, Java, C++, .NET, etc. Expertise in Python is a must.
- Big data tools: Spark, Kafka, etc.
- Relational SQL and NoSQL databases, including Postgres and Cassandra.
- Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- AWS cloud services: EC2, EMR, RDS, Redshift.
- Working knowledge of stream-processing systems: Storm, Spark Streaming, etc.
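The workflow-management tools named above (Azkaban, Luigi, Airflow) all reduce to executing tasks in dependency order. A toy sketch of that core idea using the standard library's `graphlib` (Python 3.9+); the task names and callables are illustrative, not any specific scheduler's API:

```python
from graphlib import TopologicalSorter

def run_pipeline(tasks, deps):
    """Execute named task callables in dependency order, a minimal model
    of what DAG schedulers like Airflow or Luigi do. `tasks` maps names
    to callables that receive the dict of upstream results; `deps` maps
    each task to the set of tasks it depends on."""
    results = {}
    # static_order() yields each task only after all of its dependencies.
    for name in TopologicalSorter(deps).static_order():
        results[name] = tasks[name](results)
    return results
```

A usage sketch of a tiny extract/transform/load chain:

```python
tasks = {
    "extract": lambda r: [1, 2, 3],
    "transform": lambda r: [x * 10 for x in r["extract"]],
    "load": lambda r: sum(r["transform"]),
}
deps = {"transform": {"extract"}, "load": {"transform"}}
```

Real schedulers add retries, backfills, and parallel execution of independent branches on top of this ordering.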

Posted 1 week ago


7.0 - 10.0 years

11 - 12 Lacs

Hyderabad

Work from Office


We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 7 to 10+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
- Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53.
- Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
- Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD.
- Automate build, test, and deployment processes for Java applications.
- Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
- Containerize Java apps using Docker; deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
- Set up monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
- Manage access with IAM roles/policies; use AWS Secrets Manager / Parameter Store for managing credentials.
- Enforce security best practices, encryption, and audits.
- Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules.
- Implement Disaster Recovery (DR) strategies.
- Work closely with development teams to integrate DevOps practices.
- Document pipelines, architecture, and troubleshooting runbooks.
- Monitor and optimize AWS resource usage.
Use AWS Cost Explorer , Budgets , and Savings Plans . Must-Have Skills: Experience working on Linux-based infrastructure. Excellent understanding of Ruby, Python, Perl, and Java . Configuration and managing databases such as MySQL, Mongo. Excellent troubleshooting. Selecting and deploying appropriate CI/CD tools Working knowledge of various tools, open-source technologies, and cloud services. Awareness of critical concepts in DevOps and Agile principles. Managing stakeholders and external interfaces. Setting up tools and required infrastructure. Defining and setting development, testing, release, update, and support processes for DevOps operation. Have the technical skills to review, verify, and validate the software code developed in the project. Interview Mode : F2F for who are residing in Hyderabad / Zoom for other states Location : 43/A, MLA Colony,Road no 12, Banjara Hills, 500034 Time : 2 - 4pm
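The posting above asks for automating backups with S3 lifecycle rules. As a minimal illustrative sketch (the bucket prefix, rule ID, and retention periods are hypothetical, not from the posting), the following builds the lifecycle configuration document that S3's lifecycle API accepts, tiering backups to cheaper storage classes before expiry; it uses only the standard library, so no AWS credentials are needed:

```python
import json

def lifecycle_config(ia_days: int = 30, glacier_days: int = 90,
                     expire_days: int = 365) -> dict:
    """Build an S3 lifecycle configuration that moves objects under
    backups/ to STANDARD_IA, then GLACIER, then expires them.
    All day counts are illustrative defaults, not a site standard."""
    return {
        "Rules": [
            {
                "ID": "backup-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Transitions": [
                    {"Days": ia_days, "StorageClass": "STANDARD_IA"},
                    {"Days": glacier_days, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": expire_days},
            }
        ]
    }

config = lifecycle_config()
print(json.dumps(config, indent=2))
```

A dict in this shape could then be passed to an S3 client's lifecycle-configuration call; keeping it as plain data also makes the policy easy to unit-test and version-control.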

Posted 1 week ago


8.0 - 13.0 years

20 - 30 Lacs

Chennai, Bengaluru

Work from Office


Who we are: Acqueon's conversational engagement software lets customer-centric brands orchestrate campaigns and proactively engage with consumers using voice, messaging, and email channels. Acqueon leverages a rich data platform, statistical and predictive models, and intelligent workflows to let enterprises maximize the potential of every customer conversation. Acqueon is trusted by 200 clients across industries to increase sales, drive proactive service, improve collections, and develop loyalty. At our core, Acqueon is a customer-centric company with a burning desire (backed by a suite of awesome, AI-powered technology) to help businesses provide friction-free, delightful, and referral-worthy customer experiences.

As a DBA at Acqueon you will bring:
5-8 years of SQL Server administration experience
5-8 years of experience with backups, restores, and recovery models
5-8 years of experience with High Availability (HA) and Disaster Recovery (DR) options for SQL Server
5-8 years of experience with Windows Server, including Active Directory
Knowledge/experience of AWS RDS (an advantage)
Strong troubleshooting skills
Knowledge of capacity planning
Knowledge/experience of AWS CloudWatch to set up and monitor alerts for AWS RDS

Responsibilities:
Own all operational aspects of database administration, including meeting standards/best practices, technical integration, and addressing incidents on a daily basis
Assist in migration/upgrade of databases to AWS, including administering Amazon RDS for SQL Server
Work cooperatively with other engineering resources to maintain and revise technical documentation
Manage and perform standard maintenance, monitoring, and tuning of numerous SQL production database instances
Install, configure, tune, and troubleshoot SQL Server 2016 and 2019
Set up backup and recovery policies and jobs
Use appropriate tuning methodology to resolve database performance issues
Support SQL Server 2012/2014/2016/2019 installations and upgrades
Stay flexible, adaptable, and able to manage multiple tasks in a dynamic environment
Communicate complex technical concepts clearly to peers and management
Work closely with various infrastructure and support teams across the globe
Apply DDL/DML changes to high-concurrency databases while preserving uptime and efficiency
Demonstrate knowledge of Windows Server environments
Strong knowledge of database performance tuning and troubleshooting
In-depth knowledge of database internals and data structures
Excellent verbal and written communication skills
Flexibility to work in a 24x7 rotational shift

This is an excellent opportunity for those seeking to continue to build upon their existing skills. The right individual will be self-motivated and a creative problem solver. You should possess the ability to seek out the correct information efficiently through individual efforts and with the team. By joining the Acqueon team, you can enjoy the benefits of working for one of the industry's fastest growing and most respected technology companies. If you, or someone you know, would be a great fit for us, we would love to hear from you today! Use the form to apply today or submit your resume.
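The role above centers on setting up backup and recovery policies and jobs for SQL Server. As a small illustrative sketch (the database name, disk path, and WITH options are hypothetical conventions, not Acqueon's), the following assembles the T-SQL a scheduled full + transaction-log backup job would typically execute:

```python
def backup_statements(db: str, path: str = r"D:\Backups") -> dict:
    """Generate T-SQL for a full database backup and a log backup.
    COMPRESSION and CHECKSUM are common safety/space options; INIT
    overwrites the previous full backup file in place."""
    full = (f"BACKUP DATABASE [{db}] TO DISK = N'{path}\\{db}_full.bak' "
            f"WITH COMPRESSION, CHECKSUM, INIT;")
    log = (f"BACKUP LOG [{db}] TO DISK = N'{path}\\{db}_log.trn' "
           f"WITH COMPRESSION, CHECKSUM;")
    return {"full": full, "log": log}

stmts = backup_statements("Orders")
print(stmts["full"])
print(stmts["log"])
```

Log backups only apply under the FULL or BULK_LOGGED recovery model, which is why the posting pairs "backups, restores and recovery models" as one skill.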

Posted 1 week ago


6.0 - 9.0 years

11 - 12 Lacs

Hyderabad

Work from Office


We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. 6 to 9+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53.
Automate infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD.
Automate build, test, and deployment processes for Java applications.
Use Ansible, Chef, or AWS Systems Manager to manage configurations across environments.
Containerize Java apps using Docker; deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
Set up monitoring and logging with Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
Manage access with IAM roles/policies; use AWS Secrets Manager / Parameter Store for managing credentials.
Enforce security best practices, encryption, and audits.
Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules; implement Disaster Recovery (DR) strategies.
Work closely with development teams to integrate DevOps practices.
Document pipelines, architecture, and troubleshooting runbooks.
Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
Experience working on Linux-based infrastructure.
Excellent understanding of Ruby, Python, Perl, and Java.
Configuring and managing databases such as MySQL and MongoDB.
Excellent troubleshooting skills.
Selecting and deploying appropriate CI/CD tools.
Working knowledge of various tools, open-source technologies, and cloud services.
Awareness of critical concepts in DevOps and Agile principles.
Managing stakeholders and external interfaces.
Setting up tools and required infrastructure.
Defining and setting development, testing, release, update, and support processes for DevOps operations.
Technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: F2F for candidates residing in Hyderabad; Zoom for other states.
Location: 43/A, MLA Colony, Road No 12, Banjara Hills, 500034
Time: 2 - 4 pm
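This posting also calls for managing access with IAM roles/policies and enforcing least privilege. As a minimal illustrative sketch (the bucket name and the choice of a read-only S3 policy are hypothetical examples, not from the posting), the following builds a scoped IAM policy document as plain data:

```python
import json

def readonly_s3_policy(bucket: str) -> dict:
    """Least-privilege IAM policy granting read-only access to a
    single S3 bucket: ListBucket on the bucket ARN, GetObject on
    the objects inside it."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            }
        ],
    }

policy = readonly_s3_policy("app-artifacts")
print(json.dumps(policy, indent=2))
```

Note that bucket-level actions (ListBucket) and object-level actions (GetObject) need different Resource ARNs, which is a common source of access-denied errors when writing these policies by hand.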

Posted 2 weeks ago


3.0 - 6.0 years

5 - 7 Lacs

Bengaluru

Work from Office


Job Title: Cloud Data Warehouse Administrator (DBA) - AWS Redshift
Company: Titan Company Limited
Location: Corporate Office, Bengaluru
Experience: 3+ years
Education: BE / MCA / MSc-IT (from reputed institutions)

Job Description
Titan Company Limited is looking for a Cloud Data Warehouse Administrator (DBA) to join our growing Digital team in Bengaluru. The ideal candidate will have strong expertise in AWS-based data warehouse solutions with hands-on experience in Redshift (mandatory), RDS, S3, and DynamoDB, along with an eye for performance, scalability, and cost optimization.

Key Responsibilities
Administer and manage AWS data environments: Redshift, RDS, DynamoDB, S3
Monitor system performance and troubleshoot data-related issues
Ensure availability, backup, disaster recovery, and security of databases
Design and implement cost-optimized, high-availability solutions
Maintain operational documentation and SOPs for all DBA tasks
Collaborate with internal and external teams for issue resolution and enhancements
Maintain data-level security (row/column level, encryption, masking)
Analyze performance and implement improvements proactively

Required Skills and Experience
4+ years of experience in a DBA role; 2+ years on AWS cloud (Redshift, RDS, Aurora)
Experience managing cloud database architectures end-to-end
Expertise in database performance tuning, replication, and DR strategies
Familiarity with Agile working environments and cross-functional collaboration
Excellent communication and documentation skills
Preferred: AWS/DBA certifications

About Titan Company Limited
Titan Company Limited, a part of the Tata Group, is one of India's most admired lifestyle companies. With a strong portfolio in watches, eyewear, jewelry, and accessories, Titan is committed to innovation, quality, and cutting-edge technology through its Digital initiatives.

Interested candidates: kindly share your details at amruthaj@titan.co.in
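Several postings on this page mention cost optimization via reserved capacity and Savings Plans. As a minimal illustrative sketch (all rates below are made-up numbers, not real AWS pricing), the arithmetic behind a reservation decision is just a break-even calculation: how long must the instance run before the upfront fee is recovered by the lower hourly rate?

```python
def breakeven_months(on_demand_hourly: float, reserved_hourly: float,
                     upfront: float, hours_per_month: float = 730.0) -> float:
    """Months of continuous use before a reservation's upfront fee
    is paid back by its hourly discount (approx. 730 hours/month)."""
    hourly_saving = on_demand_hourly - reserved_hourly
    if hourly_saving <= 0:
        raise ValueError("Reservation never pays off at these rates")
    return upfront / (hourly_saving * hours_per_month)

# Illustrative rates only: $1.00/hr on demand vs $0.60/hr reserved
# with a $2,920 upfront fee.
months = breakeven_months(on_demand_hourly=1.00, reserved_hourly=0.60,
                          upfront=2920.0)
print(round(months, 1))  # 10.0
```

If the workload will run continuously for longer than the break-even period (here, ten months), the reservation saves money; for spiky or short-lived workloads, on-demand or auto-scaling is usually cheaper.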

Posted 2 weeks ago
