Home
Jobs
Companies
Resume

202 RDS Jobs - Page 5

Set up a Job Alert
JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

2 - 6 years

4 - 8 Lacs

Pune

Work from Office

Naukri logo

The core infrastructure team is responsible for this infrastructure, spread across 10 production deployments around the globe, running 24/7 with four nines of uptime. Our infrastructure is managed using Terraform (for IaC) and GitLab CI, and monitored with Prometheus and Datadog.

We're looking for you if:
- You are a strong infrastructure engineer with a specialty in networking and site reliability.
- You have strong networking fundamentals (DNS, subnets, VPN, VPCs, security groups, NATs, Transit Gateway, etc.).
- You have extensive and deep experience (~4 years) with IaaS cloud providers. AWS is ideal, but GCP/Azure would be fine too.
- You have experience running cloud orchestration technologies like Kubernetes and/or Cloud Foundry, and designing highly resilient architectures for these.
- You have strong knowledge of Unix/Linux fundamentals.
- You have experience with infrastructure-as-code tools. Ideally Terraform or OpenTofu, but CloudFormation or Pulumi are fine too.
- You have experience designing cross cloud/on-prem connectivity and observability.
- You have a DevOps mindset: you build it, you run it.
- You care about code quality and know how to lead by example: from a clean Git history to well-thought-out unit and integration tests.

Even better (but not essential!) if you have experience with:
- Monitoring tools that we use, such as Datadog and Prometheus.
- CI/CD tooling such as GitLab CI.
- Programming, ideally in Golang or Python.
- Using your technical expertise to mentor, train, and lead other engineers.

You'll help drive digital innovation by:
- Continually improving our security and operational excellence.
- Working directly with customers to set up connectivity between the Mendix Cloud platform and customers' backend infrastructure.
- Rapidly scaling our infrastructure to match our rapidly growing customer base.
- Continuously improving the observability of our platform, so that we can fix problems before they occur.
- Improving our automation and surrounding tooling to further streamline deployments and platform upgrades.
- Improving the way we use AWS resources, and defining cost optimization strategies.

Here are many of the tools we make use of:
- Amazon Web Services (EC2, Fargate, RDS, S3, ELB, VPC, CloudWatch, Lambda, IAM, and more!)
- PaaS: (open source) Kubernetes, Docker, Open Service Broker API
- Eventing: AWS MSK and Confluent WarpStream BYOK
- Monitoring: Prometheus, InfluxDB, Grafana, Datadog
- CI/CD: GitLab CI, ArgoCD
- Automation: Terraform, Helm
- Programming languages: mostly Golang and Python, with a sprinkling of Ruby and Lua
- Scripting: Bash, Python
- Version control: Git + GitLab
- Database: PostgreSQL
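The "four nines of uptime" target mentioned above has a concrete downtime budget behind it. A minimal sketch of that arithmetic (the function name is illustrative, not from the posting):

```python
# Downtime budget implied by an availability target ("four nines" = 99.99%).
def downtime_budget_minutes(availability: float, period_days: int = 365) -> float:
    """Minutes of allowed downtime over the period for a given availability."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1.0 - availability)

yearly = downtime_budget_minutes(0.9999)       # ~52.6 minutes per year
monthly = downtime_budget_minutes(0.9999, 30)  # ~4.3 minutes per 30 days
```

At four nines, the whole global fleet gets under an hour of cumulative downtime per year, which is why the posting stresses resilient architectures and observability.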

Posted 2 months ago

Apply

4 - 6 years

6 - 8 Lacs

Chennai

Work from Office


Responsibilities:
- Building software to help DevOps: oversee the building and implementation of tools and services that help Engineering do better with agile development and delivery, and drive deeper reliability into our production systems.
- Work on the production system that caters to the needs of a large customer base, on a rotational shift basis (24x5).
- Add automation and context to alerts, leading to better real-time collaborative response from technical responders; additionally, update runbooks, tools, and documentation to help prepare Engineering for future incidents.
- Monitor processes across the entire lifecycle for adherence, and update or create processes to drive improvement and minimize waste.
- Encourage and build automated processes wherever possible.
- Understand customer requirements and project KPIs.
- Set up tools and the required infrastructure.
- Identify and deploy cybersecurity measures by continuously performing vulnerability assessment and risk management.
- Incident management and root cause analysis.
- Coordination and communication within the team and with customers.
- Select and deploy appropriate CI/CD tools.
- Strive for continuous improvement and build a continuous integration, continuous development, and continuous deployment pipeline.
- Mentor and guide team members.
- Monitor and measure customer experience and KPIs.
- Manage periodic reporting on progress to management and other stakeholders.

Candidate Profile:
- 4+ years of experience in a DevOps position.
- Hands-on public cloud experience: AWS, Google Cloud, and/or Azure (multi-cloud preferred). Industry certification preferred (e.g. AWS Certified DevOps Engineer / Solutions Architect).
- Strong experience building and deploying services on Kubernetes; ideally CKA certified.
- Strong working knowledge of CI/CD pipelines in a globally distributed environment.
- Experience automating testing and deployment pipelines using DevOps tools such as Jenkins and GitLab.
- Working knowledge of Terraform and Ansible for performing DevOps on cloud platforms.
- Knowledge of corporate governance of cloud service usage and security measures.
- Knowledge of monitoring and observability tools: Datadog, Grafana, Prometheus, Geneos, Jaeger, OpenTelemetry, Zipkin, Splunk.
- Experience configuring and securing databases such as RDS, PostgreSQL, Cassandra, Redis.
- Experience with streaming technologies such as Kafka.
- Experience with networking and security concepts such as DNS, HTTP, HTTPS, SSL/TLS, as well as practical knowledge of reverse proxy, load balancer, and firewall configurations.
- Production support experience: able to be on call, troubleshoot application errors, and fix issues.
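The incident duties above (categorize by severity, escalate to L2/L3) can be sketched as a small routing function. This is a hypothetical illustration, not this employer's actual triage logic; the severity labels and tier names are assumptions:

```python
# Hypothetical incident-triage sketch: categorize by severity, decide escalation tier.
SEVERITY_ORDER = {"sev1": 1, "sev2": 2, "sev3": 3, "sev4": 4}

def escalation_target(severity: str, customer_facing: bool) -> str:
    """Route an incident: highest severities (or customer-facing issues) escalate."""
    rank = SEVERITY_ORDER.get(severity.lower(), 4)  # unknown labels default to lowest
    if rank == 1:
        return "L3"  # page senior responders immediately
    if rank == 2 or customer_facing:
        return "L2"
    return "L1"      # first responder handles via runbook
```

In practice this kind of policy lives in the alerting/paging tool's configuration rather than application code, but the decision structure is the same.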

Posted 2 months ago

Apply

2 - 3 years

5 - 10 Lacs

Pune

Work from Office


** Must Have: Good knowledge of the services below:
I. AWS core services: EC2, VPC, S3, RDS
II. Cloud security: IAM
III. Networking: VPC configuration and management
IV. Monitoring: CloudWatch for monitoring and alerting
- Linux: patch management, user management, file system management, backup and restore.
- Able to follow SOPs and complete allocated tasks.
- Knowledge of database administration (e.g., MySQL) and web servers (e.g., Apache, Nginx).
** Need to Have:
- Proficiency in scripting languages (e.g., Shell, Python) for automation and configuration management.
- Excellent communication skills and the ability to work effectively in a team environment.
** Preferred:
- DevOps tools: Terraform, Docker, CI/CD, Kubernetes.
- Create and maintain technical documentation, including system configurations, procedures, and troubleshooting guides.
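The "scripting for automation" requirement above is the kind of task a few lines of stdlib Python cover. A minimal sketch of a file-system health check (the path and threshold are illustrative, not from the posting):

```python
import shutil

def filesystem_usage_pct(path: str = "/") -> float:
    """Percentage of the filesystem at `path` currently in use."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total

def needs_cleanup(path: str = "/", threshold_pct: float = 85.0) -> bool:
    """True when usage crosses the (illustrative) alerting threshold."""
    return filesystem_usage_pct(path) >= threshold_pct
```

A script like this would typically run from cron and feed an alerting channel; on AWS, the same signal usually comes from a CloudWatch agent metric instead.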

Posted 2 months ago

Apply

4 - 6 years

6 - 8 Lacs

Bengaluru

Work from Office


We are looking for a Site Reliability Engineer! The SRE L1 Commander is responsible for ensuring the stability, availability, and performance of critical systems and services. As the first line of defense in incident management and monitoring, the role requires real-time response, proactive problem solving, and strong coordination skills to address production issues efficiently.

You'll make a difference by:
- Monitoring and alerting: proactively monitoring system health, performance, and uptime using tools like Datadog and Prometheus.
- Serving as the primary responder for incidents, troubleshooting and resolving issues quickly to ensure minimal impact on end users.
- Accurately categorizing incidents, prioritizing them based on severity, and escalating to L2/L3 teams when necessary.
- Ensuring systems meet Service Level Objectives (SLOs) and maintain uptime as per SLAs.
- Collaborating with DevOps and L2 teams to automate manual processes for incident response and operational tasks.
- Performing root cause analysis (RCA) of incidents using log aggregators and observability tools to identify patterns and recurring issues.
- Following predefined runbooks/playbooks to resolve known issues and documenting fixes for new problems.

You'd describe yourself as:
- An experienced professional with 4 to 6 years of relevant experience in SRE, DevOps, or production support with monitoring tools (e.g., Prometheus, Datadog).
- Having working knowledge of Linux/Unix operating systems, basic scripting skills (Python, GitLab actions), and cloud platforms (AWS, Azure, or GCP).
- Familiar with container orchestration (Kubernetes, Docker, Helm charts) and CI/CD pipelines.
- Having exposure to ArgoCD for implementing GitOps workflows and automated deployments for containerized applications.
- Experienced in monitoring (Datadog), infrastructure (AWS EC2, Lambda, ECS/EKS, RDS), networking (VPC, Route 53, ELB), and storage (S3, EFS, Glacier).
- Having strong troubleshooting and analytical skills to resolve production incidents effectively.
- Having a basic understanding of networking concepts (DNS, load balancers, firewalls).
- Having good communication and interpersonal skills for incident communication and escalation.
- Holding preferred certifications: AWS Certified SysOps Administrator Associate, AWS Certified Solutions Architect Associate, or AWS Certified DevOps Engineer Professional.
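"Ensuring systems meet SLOs" above is usually tracked as an error budget: the fraction of allowed failures a service may still spend in the window. A minimal sketch, with illustrative names and a request-based SLO as the assumption:

```python
def error_budget_remaining(slo: float, total_requests: int, failed: int) -> float:
    """Fraction of the window's error budget still unspent (negative = SLO blown)."""
    allowed_failures = (1.0 - slo) * total_requests
    if allowed_failures <= 0:
        return 1.0 if failed == 0 else float("-inf")
    return 1.0 - failed / allowed_failures

# A 99.9% SLO over 1M requests allows ~1000 failures; 250 failures leaves 75% of budget.
remaining = error_budget_remaining(0.999, 1_000_000, 250)
```

Tools like Datadog and Prometheus can compute this directly from request/error counters, which is what makes the escalation decisions in the role data-driven rather than ad hoc.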

Posted 2 months ago

Apply

8 - 13 years

5 - 15 Lacs

Chennai, Bengaluru, Hyderabad

Work from Office


Hi All, greetings from Radiant Systems! We have an excellent opportunity with one of our clients.

Job description: Python developer + AWS developer (contract-to-hire position)
1. AWS cloud services with Python and its frameworks, such as Django, on the backend
2. Cloud: AWS services such as Lambda, DynamoDB, RDS, AppSync
3. Experience working with RESTful APIs and/or GraphQL
4. Good understanding of development best practices such as pair programming and TDD
5. Work in an agile environment

Mandatory skills: AWS cloud services with Python, AWS Lambda, DynamoDB, RDS, AppSync, RESTful APIs
Desired skills: CI/CD pipelines in GitLab Cloud
Location: PAN India
Interested candidates, kindly send your CV to ktsushma@radiants.com

Posted 2 months ago

Apply

8 - 13 years

15 - 22 Lacs

Bengaluru

Hybrid


1. AWS cloud services with Python and its frameworks, such as Django, on the backend
2. Cloud: AWS services such as Lambda, DynamoDB, RDS, AppSync
3. Experience working with RESTful APIs and/or GraphQL
4. Good understanding of development best practices such as pair programming and TDD
5. Work in an agile environment

Posted 2 months ago

Apply

5 - 8 years

15 - 20 Lacs

Gurgaon

Remote


We are seeking a highly skilled AWS Solution Architect to join our team. The ideal candidate will have deep expertise in AWS architecture, cloud infrastructure design, and solution implementation. In this role, you will be responsible for designing, implementing, and optimizing AWS-based solutions to support business requirements while ensuring scalability, security, and cost-effectiveness.

Key Responsibilities:
- Design and implement scalable, secure, and high-performing AWS architectures based on business and technical requirements.
- Develop detailed solution architectures, including diagrams, documentation, and technical specifications.
- Ensure best practices in security, cost optimization, and performance while deploying AWS services.
- Lead migrations, modernizations, and cloud-native transformations for applications and workloads.
- Optimize cloud infrastructure for high availability, fault tolerance, and disaster recovery.
- Implement monitoring, logging, and automation strategies to enhance operational efficiency.
- Work closely with engineering, DevOps, and IT teams to ensure seamless integration and deployment of AWS solutions.
- Provide technical guidance and mentorship to teams on AWS best practices and emerging technologies.
- Troubleshoot and resolve AWS-related technical challenges, ensuring smooth operations.
- Implement AWS security best practices, including IAM, encryption, and network security.
- Ensure compliance with industry regulations and cloud governance policies.
- Stay updated on new AWS services, industry trends, and best practices.
- Evaluate and recommend emerging cloud technologies to enhance business processes.

Qualifications & Skills:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience as an AWS Solution Architect or in a similar cloud architecture role.
- Expertise in AWS services, including EC2, S3, RDS, Lambda, VPC, IAM, CloudFormation, and more.
- Experience with microservices architecture, containerization (Docker, Kubernetes), and serverless computing.
- Strong knowledge of networking, security, and infrastructure automation.
- Hands-on experience with infrastructure-as-code (IaC) tools like Terraform or CloudFormation.
- Excellent problem-solving skills and the ability to design resilient architectures.
- AWS Certified Solutions Architect Associate or Professional (preferred).
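One building block of the "resilient architectures" and fault tolerance mentioned above is retrying transient failures with exponential backoff and jitter. A minimal sketch of the delay schedule (parameter values are illustrative):

```python
import random

def backoff_delays(max_retries: int = 5, base: float = 0.5, cap: float = 30.0):
    """Exponential backoff with full jitter: delay_i ~ Uniform(0, min(cap, base * 2**i))."""
    for attempt in range(max_retries):
        yield random.uniform(0, min(cap, base * (2 ** attempt)))
```

The jitter spreads retries from many clients over time, avoiding the synchronized "thundering herd" that fixed delays cause; AWS SDKs apply a variant of this strategy by default.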

Posted 2 months ago

Apply

2 - 3 years

4 - 5 Lacs

Gurgaon, Noida

Work from Office


About the Role: Grade Level (for internal use): 10

The Role: Cloud DevOps Engineer
The Impact: This role is crucial to the business, as it directly contributes to the development and maintenance of cloud-based DevOps solutions on the AWS platform.

What's in it for you:
- Drive innovation: Join a dynamic and forward-thinking organization at the forefront of the automotive industry. Contribute to shaping our cloud infrastructure and drive innovation in cloud-based solutions on the AWS platform.
- Technical growth: Gain valuable experience and enhance your skills by working with a team of talented cloud engineers. Take on challenging projects and collaborate with cross-functional teams to define and implement cloud infrastructure strategies.
- Impactful solutions: Contribute to the development of solutions that directly impact the scalability, reliability, and security of our cloud infrastructure. Play a key role in delivering high-quality products and services to our clients.

We are seeking a highly skilled and driven Cloud DevOps Engineer to join our team. The candidate should have experience developing and deploying native cloud-based solutions, and possess a passion for container-based technologies, immutable infrastructure, and continuous delivery practices in deploying global software.

Responsibilities:
- Deploy scalable, highly available, secure, and fault-tolerant systems on AWS for the development and test lifecycle of AWS cloud-native solutions.
- Configure and manage the AWS environment for use with web applications.
- Engage with development teams to document and implement best-practice (low-maintenance) cloud-native solutions for new products.
- Focus on building Dockerized application components and integrating with AWS ECS.
- Contribute to application design and architecture, especially as it relates to AWS services.
- Manage AWS security groups.
- Collaborate closely with the technical architects by providing input into the overall solution architecture.
- Implement DevOps technologies and processes, i.e., containerization, CI/CD, infrastructure as code, metrics, monitoring, etc.
- Apply experience of networks, security, load balancers, DNS, and other infrastructure components to cloud (AWS) environments.
- Bring a passion for solving challenging issues.
- Promote cooperation and commitment within a team to achieve common goals.

What you will need:
- Understanding of networking, infrastructure, and applications from a DevOps perspective.
- Infrastructure as code (IaC) using Terraform and CloudFormation.
- Deep knowledge of AWS, especially services like ECS/Fargate, ECR, S3/CloudFront, Load Balancing, Lambda, VPC, Route 53, RDS, CloudWatch, EC2, and AWS Security Center.
- Experience managing AWS security groups.
- Experience building scalable infrastructure in AWS.
- Experience with one or more AWS SDKs and/or the CLI.
- Experience in automation, CI/CD pipelines, and DevOps principles.
- Experience with Docker containers.
- Experience with operational tools and the ability to apply best practices for infrastructure and software deployment.
- Software design fundamentals in data structures, algorithm design, and performance analysis.
- Experience working in an agile development environment.
- Strong written and verbal communication and presentation skills.

Education and Experience:
- Bachelor's degree in Computer Science, Information Systems, Information Technology, or a similar major, or a certified development program.
- 2-3 years of experience managing AWS application environments and deployments.
- 5+ years of experience working in a development organization.

Posted 2 months ago

Apply

3 - 5 years

5 - 7 Lacs

Pune

Work from Office


We are looking for a DevOps Engineer. How do you craft the future of Smart Buildings? We're looking for the makers of tomorrow, the hardworking individuals ready to help Siemens transform entire industries, cities, and even countries. Get to know us from the inside, and develop your skills on the job.

You'll make a difference by:
- Designing, deploying, and managing AWS cloud infrastructure, including compute, storage, networking, and security services.
- Implementing and maintaining CI/CD pipelines using tools like GitLab CI, Jenkins, or similar technologies to automate build, test, and deployment processes.
- Collaborating with development teams to streamline development workflows and improve release cycles.
- Monitoring and troubleshooting infrastructure and application issues, ensuring high availability and performance.
- Implementing infrastructure as code (IaC) using tools like Terraform or CloudFormation to automate provisioning and configuration management.
- Maintaining version control systems and Git repositories for codebase management and collaboration.
- Implementing and enforcing security best practices and compliance standards in cloud environments.
- Continuously evaluating and embracing new technologies, tools, and best practices to improve efficiency and reliability.

There are a lot of learning opportunities for our new team member. An openness to learn more about data analytics (including AI) offerings is part of your motivation.

Your defining qualities:
- A university degree in Computer Science or a comparable education; we are flexible if a high quality of code is ensured.
- Proven experience (3-5 years) with common DevOps practices such as CI/CD pipelines (GitLab), containers and orchestration (Docker, ECS, EKS, Helm), and infrastructure as code (Terraform).
- Working knowledge of TypeScript, JavaScript, and Node.js.
- Good exposure to the AWS cloud.
- Thriving in working independently, i.e., able to break down high-level objectives into concrete key results and implement them.
- Able to work with AWS from day one; familiarity with AWS services beyond EC2 (e.g., Fargate, RDS, IAM, Lambda) is something we expect from applicants.
- Good knowledge of configuring logging and monitoring infrastructure with ELK, Prometheus, CloudWatch, and Grafana.
- When it comes to methodologies, knowledge of agile software development processes is highly valued.
- The right demeanor, allowing you to navigate a complex global organization and get things done. We need a person with an absolute willingness to support the team and a proactive, stress-resistant personality.
- Business fluency in English.

Posted 2 months ago

Apply

8 - 13 years

25 - 40 Lacs

Pune, Delhi NCR

Hybrid


Role: Lead Data Engineer
Experience: 8-12 years

Must-Have:
- 8+ years of relevant experience in data engineering and delivery.
- 8+ years of relevant work experience in big data concepts; worked on cloud implementations.
- Experience with Snowflake, SQL, and AWS (Glue, EMR, S3, Aurora, RDS, AWS architecture).
- Good experience with AWS cloud and microservices: AWS Glue, S3, Python, and PySpark.
- Good aptitude, strong problem-solving abilities, analytical skills, and the ability to take ownership as appropriate.
- Able to code, debug, performance-tune, and deploy apps to the production environment.
- Experience working in agile methodology.
- Candidates with experience in requirement gathering, analysis, gap analysis, team leading, ownership, and client communication are preferred.
- Ability to learn, and help the team learn, new technologies quickly.
- Excellent communication and coordination skills.

Good to have:
- Experience with DevOps tools (Jenkins, Git, etc.) and practices, continuous integration, and delivery (CI/CD) pipelines.
- Spark, Python, SQL (exposure to Snowflake), big data concepts, AWS Glue.
- Worked on cloud implementations (migration, development, etc.).

Role & Responsibilities:
- Be accountable for delivery of the project within the defined timelines and with good quality.
- Work with clients and offshore leads to understand requirements, come up with high-level designs, and complete development and unit testing activities.
- Keep all stakeholders updated about task and project status/risks/issues, if there are any.
- Work closely with management wherever and whenever required to ensure smooth execution and delivery of the project.
- Guide the team technically and give the team direction on how to plan, design, implement, and deliver projects.

Education: BE/B.Tech from a reputed institute.

Posted 2 months ago

Apply

5 - 7 years

8 - 10 Lacs

Noida

Work from Office


What you need:
- BS in an engineering or science discipline, or equivalent experience.
- 5+ years of software/data engineering experience using Java, Scala, and/or Python, with at least 3 years' experience in a data- and BI-focused role.
- Experience in data integration (ETL/ELT) development using multiple languages (e.g., Python, PySpark, SparkSQL) and data transformation (e.g., dbt).
- Experience building data pipelines supporting a variety of integration and information delivery methods, as well as data modelling techniques and analytics.
- Knowledge of and experience with various relational databases, with demonstrable proficiency in SQL and data analysis requiring complex queries and optimization.
- Experience with AWS-based data services technologies (e.g., Glue, RDS, Athena, etc.) and the Snowflake CDW, as well as BI tools (e.g., Power BI).
- Willingness to experiment with and learn new approaches and technology applications.
- Knowledge of software engineering and agile development best practices.
- Excellent written and verbal communication skills.
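The ELT pattern named above (load raw rows, then transform with SQL inside the store) can be shown end to end with the stdlib's sqlite3 as a stand-in warehouse; the table and column names are illustrative:

```python
import sqlite3

# In-memory sketch of an ELT-style transform: load raw rows, derive an aggregate in SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("east", 120.0), ("east", 80.0), ("west", 50.0)],
)
rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM orders GROUP BY region ORDER BY region"
).fetchall()
# rows == [("east", 200.0), ("west", 50.0)]
```

In a production pipeline the same GROUP BY transform would run in Snowflake or Athena over data landed by Glue; only the engine changes, not the SQL shape.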

Posted 2 months ago

Apply

5 - 10 years

10 - 15 Lacs

Noida

Work from Office


Key Responsibilities:
- Work with a fantastic group of people in a supportive environment where training, learning, and growth are embraced.
- Design, develop, and maintain high-quality software applications in C++ and .NET.
- Implement and manage AWS cloud technologies, focusing on commonly used services such as EC2, S3, Lambda, and RDS.
- Enhance application security measures and implement best practices to safeguard sensitive data.
- Collaborate with cross-functional teams to gather requirements, design solutions, and deliver high-quality software on time.
- Take ownership of projects from concept to delivery, ensuring adherence to project timelines and objectives.
- Translate financial requirements into technical solutions, demonstrating a strong understanding of financial terms and processes.
- Work as part of an agile team to identify and deliver solutions to prioritized requirements.
- Troubleshoot and debug applications to enhance performance and usability.
- Support and enhance existing solutions, ensuring they meet business requirements.

Required Qualifications & Experience:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of hands-on experience in C++ and .NET software development.
- Proficiency in SQL Server, PostgreSQL, and Git.
- Familiarity with Tbricks solutions by Broadridge.
- Cloud technologies: AWS (S3, Lambda, SNS, SQS, RDS); hands-on experience with cloud platforms, particularly AWS.
- Strong understanding of agile methodologies and experience working in agile environments.
- Excellent problem-solving skills and the ability to work independently or as part of a team.
- Exceptional communication skills, with the ability to articulate technical concepts to non-technical stakeholders.
- Experience with a UI framework like Angular is a plus.

Posted 2 months ago

Apply

5 - 10 years

8 - 13 Lacs

Gurgaon, Hyderabad

Work from Office


Responsibilities:
- Design and implement cloud solutions using AWS and Azure.
- Develop and maintain infrastructure as code (IaC) with Terraform.
- Create and manage CI/CD pipelines using GitHub Actions and Azure DevOps.
- Automate deployment processes and the provisioning of compute instances and storage.
- Orchestrate container deployments with Kubernetes.
- Develop automation scripts in Python, PowerShell, and Bash.
- Monitor and optimize cloud resources for performance and cost efficiency using tools like Datadog and Splunk.
- Configure security groups, IAM policies, and roles in AWS/Azure.
- Troubleshoot production issues and ensure system reliability.
- Collaborate with development teams to integrate DevOps and MLOps practices.
- Create comprehensive documentation and provide technical guidance.
- Continuously evaluate and integrate new AWS services and technologies.
- Cloud engineering certifications (AWS, Terraform) are a plus.
- Excellent communication and problem-solving skills.

Minimum Qualifications:
- Bachelor's degree in Computer Science or equivalent experience.
- Minimum of five years in cloud engineering, DevOps, or site reliability engineering (SRE).
- Hands-on experience with AWS and Azure cloud services, including IAM, compute, storage, ELB, RDS, VPC, TGW, Route 53, ACM, serverless computing, containerization, CloudWatch, CloudTrail, SQS, and SNS.
- Experience with configuration management tools like Ansible, Chef, or Puppet.
- Proficiency in infrastructure as code (IaC) using Terraform.
- Strong background in CI/CD pipelines using GitHub Actions and Azure DevOps.
- Knowledge of MLOps or LLMOps practices.
- Proficient in scripting languages: Python, PowerShell, Bash.
- Ability to work collaboratively in a fast-paced environment.

Preferred Qualifications:
- Advanced degree in a technical field.
- Extensive experience with ReactJS and modern web technologies.
- Proven leadership in agile and project management.
- Advanced knowledge of CI/CD and industry best practices in software development.

Posted 2 months ago

Apply

15 - 20 years

30 - 40 Lacs

Hyderabad

Hybrid


Role: AWS DevOps Architect Lead
Exp.: 15+ years

Job Description: Overall 15+ years of experience, with 5 years of hands-on experience in deployment automation using IaC, configuration management, orchestration, containerization, and running a complete CI/CD pipeline on both cloud and on-prem. Thorough understanding and hands-on skills in the below:
- Infrastructure as code: Terraform, AWS CloudFormation, Puppet
- Source control: GitLab, GitHub
- CI/CD: Jenkins, GitLab CI/CD
- Containerization/orchestration: Kubernetes, AWS ECS, AWS EKS, Docker
- CDN: Akamai, AWS CloudFront
- Monitoring: AWS CloudWatch, New Relic
- Security: AWS CodeGuru, GuardDuty, Security Hub, Snyk, Veracode, Rapid7
- Programming/scripting: Python, shell scripting

- Good understanding of networking, security rules, firewalls, WAF, API gateways, and auto-scaling principles.
- Hands-on experience using AWS (VPC, subnets, ALB/NLB, RDS, ECS, SQS, Cognito, Lambda, Memcached) is required.
- Understanding of programming concepts and best practices is required.
- Experience dealing with production incidents in a multi-tier application environment is required.
- Experience managing production workloads with site reliability engineering best practices.
- Good understanding of various deployment strategies (rolling updates, blue/green, canary).
- Strong exposure to DevSecOps testing methods (SAST, DAST, SCA) is preferred.
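Of the deployment strategies named above, a canary release shifts traffic to the new version in increasing stages, stopping if error rates rise. A minimal sketch of such a ramp schedule (the percentages are illustrative defaults, not a standard):

```python
def canary_stages(start_pct: int = 5, factor: int = 2, full: int = 100) -> list[int]:
    """Traffic percentages for a staged canary rollout, doubling until full cutover."""
    pct, stages = start_pct, []
    while pct < full:
        stages.append(pct)
        pct *= factor
    stages.append(full)  # final stage: all traffic on the new version
    return stages

# canary_stages() -> [5, 10, 20, 40, 80, 100]
```

In practice the promotion between stages is gated on health checks or SLO metrics rather than a timer, and a failed gate triggers rollback to the previous version.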

Posted 2 months ago

Apply

5 - 7 years

20 - 25 Lacs

Bengaluru

Work from Office


Our stack: Python, SQL, Airflow, PostgreSQL, MySQL, REST, Aptible, Docker, Tonic.ai, Terraform, Spark, Kafka, Fivetran, Databricks, AWS (S3, Lambda, Kinesis, RDS, Glue). Our workflow is trunk-based CI/CD, and our security/compliance posture is at the highest standards of healthcare, including HIPAA, HITRUST, SOC 2, and CCPA.

WHAT YOU'LL ACCOMPLISH
- Create compliance strategies (HIPAA, GDPR, CCPA), tooling, processes, and coaching that enable service and application teams to take full ownership of their data in a growing organization.
- Architect and build data pipelines and data aggregation systems to deliver quality real-time and batch analytical reports.
- Participate in hiring and mentoring of team members.
- Assist and coach teams to optimize poorly performing data pipelines.
- Work with the SRE team to establish best practices around database monitoring, alerting, and availability.

WHAT WE'RE LOOKING FOR
- 5+ years of software engineering experience, 6+ in data engineering.
- Mastery of SQL and Python.
- Solid background in processing and storing large-scale data using distributed systems, as well as mastery of database design and data warehousing.
- Good understanding of a broad spectrum of data stores like PostgreSQL, MySQL, MongoDB, Redis, Snowflake, and Redshift.
- Experience building data pipelines using Spark, Kafka, and Airflow.
- Knowledge of software engineering best-practice development (e.g., linting, testing).
- Ability to collaborate and problem-solve across teams.
- Excellent communication skills, both written and verbal.
- Bachelor's degree in C.S. or equivalent experience.

BONUS POINTS
- Experience building data infrastructure on AWS.
- Prior DBA experience.
- Prior experience with healthcare data (PHI/PII/HIPAA).
- Prior experience working with Kubernetes.
- Extensive NoSQL knowledge.

Posted 2 months ago

Apply

5 - 10 years

10 - 20 Lacs

Bengaluru

Work from Office


Our client is seeking a highly skilled DevOps Consultant to join our team and drive the development, deployment, and maintenance of our cloud-native infrastructure. The ideal candidate will have hands-on experience with AWS, container orchestration, real-time data streaming, and robust monitoring/logging systems. You will play a critical role in ensuring scalability, performance, and reliability across our platform.

Key Responsibilities:
- Design, implement, and manage CI/CD pipelines and DevOps best practices.
- Architect and maintain scalable infrastructure on AWS using services such as EC2, S3, RDS, VPC, and DMS.
- Containerize applications using Docker and orchestrate them with Kubernetes.
- Manage Kafka clusters and implement event-driven or real-time data streaming solutions.
- Set up, configure, and maintain monitoring and observability tools, including Prometheus, Grafana, and the ELK Stack (Elasticsearch, Logstash, Kibana).
- Automate infrastructure deployment using infrastructure-as-code tools (e.g., Terraform, CloudFormation).
- Collaborate with development and operations teams to improve system reliability and deployment speed.

Required Skills & Experience:
- Strong experience with AWS services, especially RDS, DMS, EC2, IAM, VPC, etc.
- Proficient in Docker and Kubernetes for container orchestration.
- Hands-on experience with Apache Kafka in a production environment.
- Expertise in setting up and managing monitoring/logging stacks: Prometheus, Grafana, and ELK.
- Solid scripting skills (e.g., Bash, Python, or similar).
- Familiarity with GitOps and CI/CD tools like Jenkins, GitLab CI, ArgoCD, etc.
- Experience working in agile environments and with cross-functional teams.

Preferred Qualifications:
- AWS certification (Solutions Architect / DevOps Engineer).
- Experience with security best practices in cloud environments.
- Familiarity with cost optimization and high-availability design patterns.
- Prior experience with large-scale system migrations or hybrid cloud environments.
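The event-driven streaming pattern above boils down to producers appending events to a topic and consumers processing them in order. A stdlib sketch with an in-process queue standing in for a Kafka topic (illustrative only; a real deployment uses a Kafka client library against a broker):

```python
from queue import Queue

topic = Queue()  # stand-in for a Kafka topic partition

# Producer side: append events to the stream.
for event in ({"type": "order", "id": 1}, {"type": "order", "id": 2}):
    topic.put(event)
topic.put(None)  # sentinel marking end of stream, for this sketch only

# Consumer side: process events in arrival order.
processed = []
while (event := topic.get()) is not None:
    processed.append(event["id"])  # an idempotent handler would go here
```

What Kafka adds over this sketch is durability, partitioned parallelism, and consumer offsets for replay; the handler logic, including the need for idempotency, stays the same.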

Posted 2 months ago

Apply

8 - 12 years

30 - 35 Lacs

Bengaluru

Work from Office


FICO's Platform Capabilities Foundational Services team is seeking an experienced AWS cloud engineer with a strong background in AWS engineering & support, DevOps, and infrastructure services to join our team. Experience with Message streaming technologies such as Kafka or Pulsar is a strong plus. The ideal candidate will have a deep understanding of AWS & it's portfolio of services, shared core infrastructure, and automation, along with a proven track record of success. In this role, you will work closely with teams across many disciplines designing and supporting high performance shared infrastructure & services in AWS that support critical product requirements. You will work closely with FICO architects, developers, and engineers to ensure our AWS cloud-based systems and services meet FICO standards & industry accepted best practices to maintain security, performance, and stability. Director, Cloud Engineering What Youll Contribute Design, build, and maintain core shared infrastructure and enterprise-scale AWS account lifecycle utilizing a number of available AWS services such as Route53, EC2, RDS, DynamoDB, AWS Organizations, and more. Design, build, and maintain distributed systems in AWS using streaming and messaging technologies such as Kafka and Pulsar. Work closely with developers and other engineers to ensure systems & services meet performance, availability, and scalability requirements. Design and implement best practices for security, monitoring, and maintenance of systems. Troubleshoot and resolve issues related to systems and services owned by the team. Document architecture, implementation, and operation procedures. Stay up to date with the latest developments in relevant technologies. What Were Seeking Bachelor's degree in a computer science field or equivalent work experience. Experienced in AWS engineering (EC2, RDS, EKS, ECS, S3, EBS, IAM, DynamoDB, Cloudwatch, Cloudtrail, Organizations). 
- Experience with Pulsar, Kafka, or other similar message-streaming technologies is desired.
- Experience designing and implementing distributed shared infrastructure using a mixture of traditional on-premises, SaaS, and AWS cloud services.
- Experience with Kubernetes, Docker, Helm, and other container technologies.
- Strong understanding of networking, security, automation, and performance optimization in AWS.
- AWS certification preferred.
- Experience managing systems at scale using infrastructure-as-code (IaC) tools such as Terraform or CloudFormation.
- Experience with monitoring and logging technologies such as CloudWatch, AppDynamics, Prometheus (Grafana), Splunk, CloudTrail, etc.
- Excellent communication and collaboration skills.
- Ability to work in a fast-paced, team-oriented environment.
- Strong Linux and Windows systems engineering experience.
- Jenkins, CI/CD, and scripting experience preferred.
- DevOps and Agile project experience desirable.
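Key-based partitioning is the core idea behind the message-streaming systems named above: Kafka and Pulsar route all messages sharing a key to the same partition, preserving per-key ordering. A minimal sketch of that routing, using CRC32 in place of Kafka's actual murmur2 hash (function and key names here are illustrative, not a real client API):

```python
import zlib

def choose_partition(key: bytes, num_partitions: int) -> int:
    """Route a message key to a partition: the same key always lands on
    the same partition, so per-key ordering is preserved."""
    # Kafka's default partitioner uses murmur2; CRC32 stands in here
    # purely for illustration.
    return zlib.crc32(key) % num_partitions

# All events for one order land on one partition, keeping them ordered.
p1 = choose_partition(b"order-42", 6)
p2 = choose_partition(b"order-42", 6)
```

Because routing depends only on the key's hash modulo the partition count, adding partitions later reshuffles keys, which is why partition counts are usually chosen up front.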

Posted 2 months ago

Apply

7 - 12 years

9 - 14 Lacs

Bengaluru

Work from Office


The Role

Within our Database Administration team at Kyndryl, you'll be a master of managing and administering the backbone of our technological infrastructure. You'll be the architect of the system, shaping the base definition, structure, and documentation to ensure the long-term success of our business operations. Your expertise will be crucial in configuring, installing, and maintaining database management systems, ensuring that our systems are always running at peak performance. You'll also be responsible for managing user access, implementing the highest standards of security to protect our valuable data from unauthorized access. In addition, you'll be a disaster recovery guru, developing strong backup and recovery plans to ensure that our system is always protected in the event of a failure. Your technical acumen will be put to use as you support end users and application developers in solving complex problems related to our database systems. As a key player on the team, you'll implement policies and procedures to safeguard our data from external threats. You will also conduct capacity planning and growth projections based on usage, ensuring that our system is always scalable to meet our business needs. You'll be a strategic partner, working closely with various teams to coordinate systematic database project plans that align with our organizational goals. Your contributions will not go unnoticed: you'll have the opportunity to propose and implement enhancements that will improve the performance and reliability of the system, enabling us to deliver world-class services to our customers.

Who You Are

You're good at what you do and possess the required experience to prove it. However, equally as important, you have a growth mindset: keen to drive your own personal and professional development. You are customer-focused: someone who prioritizes customer success in their work. And finally, you're open and borderless: naturally inclusive in how you work with others.
Required Technical and Professional Experience:
- Minimum of 7+ years of experience in database administration.
- Configure databases in accordance with the logical data model.
- Create database users with the required privileges.
- Assist in debugging application performance issues.
- Perform SQL query tuning to improve performance.
- Assist in debugging application-related issues.
- Execute DDL for database objects with scripts provided by application teams.
- Strong understanding of MySQL database concepts, including query optimization, indexing, and database design.
- Expertise in managing Amazon Aurora MySQL clusters, including configuration options and scaling mechanisms.
- Proficiency in AWS services like IAM, CloudWatch, and RDS.
- Experience with database scripting languages like SQL and PL/SQL for automation and data manipulation.
- Familiarity with database security best practices, including data encryption and access control.
- Ability to monitor and analyse database performance metrics to identify bottlenecks and optimize queries.
- Strong problem-solving and troubleshooting skills.
- Provide response-time SLA attainment analysis for all reported incidents.
- Check the top sessions and SQLs using the most CPU and the highest I/O.
- Periodically check for tables without primary keys or indexes, or with too many indexes.
- Check which tables need partitioning.
- Check database/schema sizes periodically to track the rate of data growth.
- Manage database incidents using the Air Canada incident management process.

Preferred Technical and Professional Experience:
- Create and maintain standard operating procedures and documentation as appropriate.
- Perform data archival and purging as per the capacity planning report, with assistance from application owners.
- Provide Change Success reports; use the appropriate change management process and tools.
- Provide RCA resolution and submission within agreed timelines.
- Work with physical DBAs on software upgrades and fixes by following the appropriate change management process.
- Work closely with application and physical DBA teams to assist where applicable.
- Create and maintain logical database standards and policies.
- Monitor application sessions for transaction volumes, response times, concurrency levels, etc.
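Tracking schema sizes periodically to project data growth, as described above, reduces to simple arithmetic over the measurements. A minimal sketch, where the sample values and function names are illustrative:

```python
def growth_rate_gb_per_day(samples):
    """Average daily growth from periodic (day, size_gb) measurements."""
    first_day, first_size = samples[0]
    last_day, last_size = samples[-1]
    return (last_size - first_size) / (last_day - first_day)

def days_until_full(samples, capacity_gb):
    """Rough projection of days until the allocated storage is exhausted,
    assuming growth stays linear."""
    rate = growth_rate_gb_per_day(samples)
    _, current = samples[-1]
    return (capacity_gb - current) / rate

# Weekly size checks: 100 GB on day 0, 114 GB on day 7 => 2 GB/day growth.
samples = [(0, 100.0), (7, 114.0)]
rate = growth_rate_gb_per_day(samples)          # 2.0 GB/day
remaining = days_until_full(samples, 200.0)     # (200 - 114) / 2 = 43 days
```

In practice the sizes would come from `information_schema` queries on a schedule; the projection is what feeds capacity planning and archival/purging decisions.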

Posted 2 months ago

Apply

3 - 7 years

12 - 17 Lacs

Bengaluru

Work from Office


We are seeking a skilled full-stack developer with expertise in ReactJS and Java to lead our backend team. You will develop and maintain scalable web applications using ReactJS for the frontend and Java (Spring Boot) for the backend.

Posted 2 months ago

Apply

8 - 11 years

15 - 27 Lacs

Chennai, Bengaluru, Mumbai (All Areas)

Hybrid


Role & responsibilities

1. Scripting and Automation: Develop and maintain scripts using Python, Bash, Perl, Ruby, and Groovy for automation tasks. Implement and manage infrastructure as code using tools like Terraform and AWS CloudFormation.
2. Cloud Infrastructure Management: Design, deploy, and manage AWS services such as EC2, S3, RDS, Lambda, and CloudFormation. Ensure the scalability, performance, and reliability of cloud infrastructure.
3. Configuration Management: Utilize server configuration management tools like Salt, Puppet, Ansible, and Chef to automate system configurations. Maintain and troubleshoot Linux-based environments.
4. CI/CD Pipeline Development: Build and manage service delivery pipelines using CI/CD tools such as Jenkins, GitLab, or CodePipeline. Ensure seamless integration and deployment of applications.
5. System Security and Best Practices: Implement best practices for system security and ensure compliance with security standards. Monitor and manage system performance using tools like AWS CloudWatch and Datadog.
6. Networking and Virtualization: Configure and manage virtualized environments and containerization technologies. Maintain solid networking knowledge, including the OSI network layers and TCP/IP protocols.
7. Collaboration and Communication: Work closely with product engineers to develop tools and services that enhance productivity. Communicate effectively with team members and stakeholders to ensure alignment on project goals.
8. Continuous Improvement: Stay updated with the latest cloud technologies and best practices. Continuously improve infrastructure and automation processes to enhance efficiency and reliability.

Preferred candidate profile

Mandatory Skills: AWS Infrastructure Engineer/Cloud Engineer; Python or Bash shell scripts, Perl, Ruby, Groovy

Perks and benefits
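Automation scripts that call cloud APIs, like those described above, typically retry transient failures with capped exponential backoff. A minimal sketch of the delay schedule (the function and parameter names are illustrative, not any particular SDK's API):

```python
def backoff_delays(base: float, cap: float, attempts: int) -> list:
    """Delay before each retry attempt: base * 2^i seconds, capped at `cap`.
    Real clients (e.g. the AWS SDKs) also add random jitter to avoid
    thundering herds; jitter is omitted here to keep the schedule
    deterministic."""
    return [min(base * (2 ** i), cap) for i in range(attempts)]

# Six attempts with a 1-second base and a 30-second ceiling.
schedule = backoff_delays(1.0, 30.0, 6)  # [1, 2, 4, 8, 16, 30]
```

The cap matters: without it, the sixth delay would already be 32 seconds, and long scripts would stall for minutes on a flapping endpoint.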

Posted 2 months ago

Apply

3 - 8 years

15 - 25 Lacs

Pune, Delhi NCR, Bengaluru

Hybrid


Key Responsibilities:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues

Required Qualifications:
1. Bachelor's degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focused on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration

Technical Skills:
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data Warehousing and Analytics
- ETL/ELT processes
- Data Lake architectures
- Version control: Git
- Agile methodologies
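A recurring ETL step in pipelines like these is deduplicating change records, keeping only the latest version of each key before loading into a lake table. A minimal pure-Python sketch of that logic (the field names are illustrative; at scale this would typically be a PySpark window function running on Glue or EMR):

```python
def latest_per_key(records):
    """Keep only the most recent record for each id, ordered by updated_at.
    Mirrors the dedupe step of an upsert into a Delta Lake table."""
    latest = {}
    for rec in records:
        key = rec["id"]
        # Later timestamp wins; ISO-8601 date strings compare correctly.
        if key not in latest or rec["updated_at"] > latest[key]["updated_at"]:
            latest[key] = rec
    return sorted(latest.values(), key=lambda r: r["id"])

rows = [
    {"id": 1, "updated_at": "2024-01-01", "status": "new"},
    {"id": 1, "updated_at": "2024-01-03", "status": "shipped"},
    {"id": 2, "updated_at": "2024-01-02", "status": "new"},
]
deduped = latest_per_key(rows)  # id 1 keeps its 2024-01-03 version
```

The same shape of logic underlies data-quality checks: if the deduped count is far below the raw count, the upstream source is likely re-emitting full snapshots rather than deltas.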

Posted 2 months ago

Apply

3 - 5 years

6 - 8 Lacs

Hyderabad

Work from Office


Position Scope:
- May lead small project teams or project phases of larger scope
- Works independently with minimal guidance and direction
- Impacts a range of customer, operational, project, or service activities within own team and related work teams
- Contributes to the development of concepts, methods, and techniques
- Moderate impact on the functional/business unit

Functional Knowledge:
- Requires in-depth knowledge of principles and concepts within own function/specialty and basic knowledge of other related areas
- Applies broader knowledge of industry standards/practices to assignments

Problem Solving & Critical Thinking:
- Solves a variety of moderately complex or unusual problems within own area
- Applies independent judgement to develop creative and practical solutions based on the analysis of multiple factors
- Anticipates and identifies problems and issues

Leadership:
- Guided by area goals and objectives
- May provide technical direction to others around the completion of short-term work goals

Collaboration:
- Trains and guides others in the work area on technical skills
- Networks with senior colleagues in own area of expertise

Education & Experience:
- Bachelor's degree in Computer Science, Engineering, or a related field with at least 3 years of experience, or a Master's degree; OR, in lieu of a Bachelor's degree, at least 5 years of experience
- Understanding of Agile software development methodologies
- Deep knowledge of at least one programming language, along with the ability to execute complex programming tasks
- Ability to document, track, and monitor a problem/issue to a timely resolution
- Knowledge of operating systems
- Collaborative problem-solving ability and self-motivation
- Strong verbal and written communication skills, along with prioritization of duties

Posted 2 months ago

Apply

5 - 10 years

20 - 25 Lacs

Kolkata

Work from Office


- Backend Development: Design, develop, and optimize backend services using Java Spring Boot and microservices architecture.
- API Development & Integration: Build and maintain RESTful and GraphQL APIs for seamless communication between services.
- Frontend Understanding: Have a strong grasp of frontend technologies to collaborate effectively with frontend teams for both web and mobile applications.
- Database Management: Work with SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, DynamoDB) databases to optimize data storage and retrieval.
- DevOps & Cloud: Deploy, monitor, and manage applications in AWS (EC2, S3, Lambda, RDS, API Gateway, etc.).
- Scalability & Performance: Implement solutions to optimize the performance, scalability, and security of applications.
- Code Reviews & Best Practices: Conduct code reviews; write clean, reusable, and efficient code following best practices.
- Collaboration: Work closely with cross-functional teams, including frontend developers, DevOps engineers, and product managers, to develop end-to-end solutions.
- Automation & CI/CD: Implement CI/CD pipelines and automated testing strategies for efficient development workflows.
- Security & Compliance: Ensure data security, authentication, and authorization mechanisms are in place.
- Experience with serverless computing (AWS Lambda, Firebase Functions, etc.).
- Familiarity with event-driven architecture using Kafka, RabbitMQ, or similar tools.
- Exposure to machine learning & AI frameworks (TensorFlow, PyTorch) is a plus.
- Experience with mobile app development (React Native, Flutter, or native development).
- Contributions to open-source projects or active participation in developer communities.
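Rate limiting is a standard scalability and security measure for APIs like the ones described above. A minimal token-bucket sketch with an injected clock so the behaviour is deterministic (the class and parameter names are illustrative; production services usually delegate this to an API gateway or a shared store such as Redis):

```python
class TokenBucket:
    """Allow `rate` requests per second with bursts of up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # bucket starts full
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill tokens for the time elapsed since the last check,
        # never exceeding the bucket's capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=2.0)
# Two immediate requests pass (the burst), the third is rejected,
# and one second later a token has been refilled.
results = [bucket.allow(0.0), bucket.allow(0.0),
           bucket.allow(0.0), bucket.allow(1.0)]
```

Injecting `now` instead of reading the wall clock inside the class is what makes the limiter unit-testable, which fits the code-review and automated-testing practices listed above.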

Posted 2 months ago

Apply

3 - 6 years

10 - 15 Lacs

Pune

Work from Office


Role & responsibilities

Requirements:
- 3+ years of hands-on experience with AWS services including EMR, Glue, Athena, Lambda, SQS, OpenSearch, CloudWatch, VPC, IAM, AWS Managed Airflow, security groups, S3, RDS, and DynamoDB.
- Proficiency in Linux and experience with management tools like Apache Airflow and Terraform.
- Familiarity with CI/CD tools, particularly GitLab.

Responsibilities:
- Design, deploy, and maintain scalable and secure cloud and on-premises infrastructure.
- Monitor and optimize the performance and reliability of systems and applications.
- Implement and manage continuous integration and continuous deployment (CI/CD) pipelines.
- Collaborate with development teams to integrate new applications and services into existing infrastructure.
- Conduct regular security assessments and audits to ensure compliance with industry standards.
- Provide support and troubleshooting assistance for infrastructure-related issues.
- Create and maintain detailed documentation for infrastructure configurations and processes.

Posted 2 months ago

Apply

6 - 10 years

65 - 70 Lacs

Chennai, Pune, Kolkata

Work from Office


For a leading MNC in fintech, we are seeking a seasoned DevOps Engineer to own parts of our cloud infrastructure and DevOps operations. In this role, you will lead by example and design, deploy, and optimize our AWS-based infrastructure, ensuring seamless orchestration of workloads across Kubernetes and serverless environments like AWS Lambda. You will play a pivotal role in automating processes, enhancing system reliability, and driving the adoption of DevOps best practices. Collaborating closely with our Engineering, Product, and Data teams, you'll contribute to scaling our infrastructure and supporting our rapid growth. This position offers a unique opportunity to refine your technical expertise in a dynamic and fast-paced environment.

Location: Remote (candidates from across India can apply) - Delhi/NCR, Bangalore/Bengaluru, Hyderabad/Secunderabad, Chennai, Pune, Kolkata, Ahmedabad, Mumbai

Responsibilities
- Own and drive the architecture, design, and scaling of various parts of our cloud infrastructure on AWS, ensuring security, resilience, and cost efficiency
- Optimize Kubernetes clusters, including advanced scheduling, networking, and security enhancements, to support mission-critical workloads
- Architect and improve CI/CD pipelines, incorporating automation, canary deployments, and rollback strategies for seamless releases
- Design and implement monitoring, logging, and observability solutions to ensure proactive issue detection and system performance tuning at scale
- Establish and enforce security best practices, including IAM governance, secret management, and compliance frameworks
- Be the go-to expert for multiple infrastructure components, providing technical leadership and driving improvements across interconnected systems
- Lead large-scale projects spanning multiple quarters, defining roadmaps, tracking progress, and ensuring timely execution with minimal supervision
- Drive collaboration with cross-functional teams, including ML, Data, and Product, to align infrastructure solutions with business and engineering goals
- Mentor and support junior and mid-level engineers, fostering a culture of continuous learning, technical excellence, and best practices
- Set and refine DevOps standards, driving automation, scalability, and system reliability across the organization

Qualifications
- A minimum of 7 years of experience in DevOps, SRE, or a similar role, with expertise in designing and managing large-scale cloud infrastructure
- Experience working on software product development, with proficiency in a mainstream stack
- Deep hands-on experience with AWS services such as EC2, S3, RDS, Lambda, ECS, EKS, and VPC networking
- Advanced proficiency in Terraform for infrastructure as code, including best practices for scaling and managing cloud resources
- Strong expertise in Kubernetes, including cluster provisioning, networking, security hardening, and performance optimization
- Proficiency in scripting and automation using Python, Bash, or Go, with experience integrating APIs and optimizing workflows
- Experience designing and maintaining CI/CD pipelines using tools like CircleCI, Jenkins, GitLab CI/CD, or ArgoCD
- Strong knowledge of monitoring, logging, and observability tools such as DataDog, Prometheus, Grafana, and AWS CloudWatch
- Deep understanding of cloud security, IAM governance, role-based access control (RBAC), and compliance frameworks like SOC2 or ISO 27001
- Proven ability to lead and mentor junior engineers while fostering a collaborative and high-performance team culture
- Excellent communication skills, with the ability to work effectively across cultures, functions, and time zones in a globally distributed team
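Canary deployments with rollback strategies, as mentioned above, ultimately reduce to comparing the canary's error rate against the stable baseline at each traffic step. A minimal sketch of that promote-or-rollback decision (the tolerance value and names are illustrative; real pipelines delegate this analysis to tools like Argo Rollouts or Flagger):

```python
def canary_decision(canary_errors, canary_total,
                    stable_errors, stable_total, tolerance=1.5):
    """Promote the canary if its error rate is within `tolerance` times
    the stable release's error rate; otherwise roll back."""
    canary_rate = canary_errors / canary_total
    stable_rate = stable_errors / stable_total
    return "promote" if canary_rate <= stable_rate * tolerance else "rollback"

# Stable: 10 errors in 10,000 requests (0.1%).
# Canary: 4 errors in 1,000 requests (0.4%) - beyond 1.5x the baseline.
decision = canary_decision(4, 1000, 10, 10000)
```

Comparing rates rather than absolute counts matters because the canary receives only a small slice of traffic; production systems additionally require a minimum request volume before trusting the comparison.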

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies