Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
13 - 16 years
15 - 18 Lacs
Hyderabad
Work from Office
Responsibilities: This role provides counsel and advice to management on significant infrastructure matters, often requiring coordination between organizations. Serves as the Sr. Advisor responsible for managing database structure for Big Data information technology solutions. Leads the analysis and implementation of engineering infrastructure solutions for projects and/or work requests for complex business solutions. Other key responsibilities include:
- Provide support for Big Data services such as Databricks, Snowflake, etc.
- Implement and maintain the Databricks platform on AWS & Azure.
- Experience in managing Databricks account administration, workspace administration, cluster policies, and Unity Catalog.
- Implement and maintain the Snowflake platform on AWS & Azure.
- Experience in monitoring, performance tuning, and cost optimization.
- Strong experience in AWS technologies: S3, EC2, EKS, ECS, Lambda, Route 53, EMR, CloudWatch, CloudTrail & KMS.
- Implement, maintain, and optimize CI/CD & Infrastructure as Code pipelines, with experience in Jenkins, GitHub & Terraform.
- Proficient with languages such as SQL, Python, PySpark, and Go.
- Experience in IT service management: Incident Management, Problem Management, Change Management, ITIL.
- Experience working with vendor support to resolve technical issues.
Qualifications Required Skills:
- Proven experience with the Databricks and Snowflake platforms on AWS & Azure.
- Solid grasp of S3, EC2, EKS, ECS, Lambda, Route 53, EMR, CloudWatch, CloudTrail & KMS.
- Strong troubleshooting skills to identify and resolve issues efficiently.
- Excellent teamwork and communication skills, enabling effective collaboration with cross-functional teams.
- Prior experience in IT Service Management: Incident Management, Problem Management, Change Management, ITIL.
- Strong vendor management and performance tuning skills required.
Required Experience & Education: 13 to 16 years of experience; Bachelor's degree or better preferred.
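The "cluster policies" responsibility above refers to Databricks' JSON policy documents, which constrain what clusters users may create. A minimal, illustrative sketch of that idea in plain Python (the policy attributes and limits here are invented for illustration; real policies are managed through the Databricks UI or API):

```python
# A hypothetical Databricks-style cluster policy: each key constrains one
# cluster attribute; "fixed" pins a value, "range" bounds it. Values invented.
cost_control_policy = {
    "spark_version": {"type": "fixed", "value": "13.3.x-scala2.12"},
    "autotermination_minutes": {"type": "range", "minValue": 10, "maxValue": 60},
    "num_workers": {"type": "range", "minValue": 1, "maxValue": 8},
}

def violates(policy: dict, cluster_spec: dict) -> list:
    """Return the attribute names in cluster_spec that break the policy."""
    bad = []
    for attr, rule in policy.items():
        if attr not in cluster_spec:
            continue
        val = cluster_spec[attr]
        if rule["type"] == "fixed" and val != rule["value"]:
            bad.append(attr)
        elif rule["type"] == "range" and not (rule["minValue"] <= val <= rule["maxValue"]):
            bad.append(attr)
    return bad

spec = {"spark_version": "13.3.x-scala2.12", "num_workers": 40}
print(violates(cost_control_policy, spec))  # ['num_workers'] - exceeds range
```

This is the cost-optimization lever the posting alludes to: a policy admin bounds worker counts and forces auto-termination so ad-hoc clusters cannot run up spend.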
Posted 2 months ago
3 - 5 years
5 - 8 Lacs
Hyderabad
Work from Office
Position Summary: Evernorth, a leading Health Services company, is looking for exceptional data engineers/developers in our Data and Analytics organization. In this role, you will actively participate with your development team on initiatives that support Evernorth's strategic goals, as well as with subject matter experts to understand the business logic you will be engineering. As a software engineer, you will help develop an integrated architectural strategy to support next-generation reporting and analytical capabilities on an enterprise-wide scale. You will work in an agile environment, delivering user-oriented products which will be available both internally and externally to our customers, clients, and providers. Candidates will be provided the opportunity to work on a range of technologies and data manipulation concepts. Specifically, this may include developing healthcare data structures and data transformation logic to allow for analytics and reporting for customer journeys, personalization opportunities, proactive actions, text mining, action prediction, fraud detection, text/sentiment classification, collaborative filtering/recommendation, and/or signal detection. This position will involve taking these skills and applying them to some of the most exciting and massive health data opportunities that exist here at Evernorth. The ideal candidate will work in a team environment that demands technical excellence, whose members are expected to hold each other accountable for the overall success of the end product. The focus for this team is on the delivery of innovative solutions to complex problems, but also with a mind to drive simplicity in refining and supporting the solution by others.
Job Description & Responsibilities:
- Be accountable for delivery of business functionality.
- Work on the AWS cloud to migrate/re-engineer data and applications from on-premise to the cloud.
- Responsible for engineering solutions conformant to enterprise standards, architecture, and technologies.
- Provide technical expertise through a hands-on approach, developing solutions that automate testing between systems.
- Perform peer code reviews, merge requests, and production releases.
- Implement design/functionality using Agile principles.
- Proven track record of quality software development and an ability to innovate outside of traditional architecture/software patterns when needed.
- A desire to collaborate in a high-performing team environment, and an ability to influence and be influenced by others.
- Have a quality mindset: not just code quality, but also ensuring ongoing data quality by monitoring data to identify problems before they have business impact.
- Be entrepreneurial and business-minded; ask smart questions, take risks, and champion new ideas.
- Take ownership and accountability.
Experience Required: 3 to 5 years of experience in application program development.
Experience Desired:
- Knowledge and/or experience with healthcare information domains.
- Documented experience in a business intelligence or analytic development role on a variety of large-scale projects.
- Documented experience working with databases larger than 5 TB and excellent data analysis skills.
- Experience with TDD/BDD.
- Experience working with Spark and real-time analytic frameworks.
Education and Training Required: Bachelor's degree in Engineering or Computer Science.
Primary Skills: Python, Databricks, Teradata, SQL, UNIX, ETL, data structures, Looker, Tableau, Git, Jenkins, RESTful & GraphQL APIs; AWS services such as Glue, EMR, Lambda, Step Functions, CloudTrail, CloudWatch, SNS, SQS, S3, VPC, EC2, RDS, IAM.
Additional Skills:
- Ability to rapidly prototype and storyboard/wireframe development as part of application design.
- Write referenceable and modular code.
- Willingness to continuously learn and share learnings with others.
- Ability to communicate design processes, ideas, and solutions clearly and effectively to teams and clients.
- Ability to manipulate and transform large datasets efficiently.
- Excellent troubleshooting skills to root-cause complex issues.
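The "data transformation logic" described above, stripped to its essence, is normalizing raw records into analytics-ready shape. A dependency-free sketch (the field names and cleaning rules are invented for illustration, not Evernorth's schema):

```python
# Toy transformation: normalize raw claim-like records for analytics.
# All field names and rules are illustrative only.
raw_claims = [
    {"member_id": " M001 ", "amount": "125.50", "state": "tx"},
    {"member_id": "M002", "amount": "80", "state": "CO"},
]

def transform(record: dict) -> dict:
    """Trim identifiers, convert money to integer cents, uppercase codes."""
    return {
        "member_id": record["member_id"].strip(),
        "amount_cents": int(round(float(record["amount"]) * 100)),
        "state": record["state"].upper(),
    }

cleaned = [transform(r) for r in raw_claims]
print(cleaned[0])  # {'member_id': 'M001', 'amount_cents': 12550, 'state': 'TX'}
```

In practice the same per-record function would run inside a PySpark `map` or DataFrame expression over millions of rows; the logic stays identical, only the execution engine changes.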
Posted 2 months ago
10 - 15 years
32 - 37 Lacs
Hyderabad
Work from Office
Position Overview: Join the team that manages the end-to-end technology and life-cycle management of applications hosted both on premise and in the AWS cloud. We are looking for an experienced Senior Software Engineer with more than 10 years of experience to join the Data and Analytics Engineering (D&AE) operational effectiveness team. The Senior Advisor will be responsible for the development of reports and data feeds on a SAS-based platform, the migration of reports to a cloud-based platform, and the overall team deliverables. The ideal candidate will have a strong technical background and experience in creating and managing complex data queries and pipelines.
Responsibilities:
- Strong knowledge of SAS with programming experience.
- Develop and manage reports and data feeds involving complex queries.
- Perform root cause analysis on data pipeline failures and data discrepancies.
- Work with business partners to resolve identified defects in a timely manner.
- Understand the requirements and use cases provided.
- Automate application health checks and manual workarounds using Python.
- Enable self-healing capabilities to reduce human intervention and user impact, and to increase availability.
- Partner with other teams and stakeholders on any clarifications needed for timely deliverables.
- Ability to provide alternative solutions.
- Experienced with agile methodology.
- Mentor junior team members on industry best practices, adoption, and maturity.
- Help team members and users with technical challenges.
- Responsible for the team deliverables.
Qualifications Required Skills:
- Experience managing large datasets.
- Experience with data pipeline scheduling.
- Experience with AWS services: IAM, EC2, S3, CloudWatch, Step Functions, Lambda, MWAA, SQS, SNS, Glue, Athena, etc.
- Experience with continuous integration and continuous delivery (CI/CD) tools such as GitHub, Jenkins, etc.
- Experience with the MS Office suite and VBA, along with good presentation skills.
- Experience with monitoring and logging tools such as Dynatrace or Splunk.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills.
Required Experience & Education:
- Bachelor's degree in a related field.
- 10+ years of experience in a developer role.
- 6+ years of hands-on development experience with Python.
- 6+ years of experience with SAS programming.
- 4+ years with database management systems such as Oracle, SQL Server, Teradata, MongoDB, or PostgreSQL.
- 3+ years of experience in AWS.
Desired Experience: AWS certification preferred.
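The "automate application health checks" and "self-healing" responsibilities above can be sketched as a check-remediate-recheck loop. A minimal, testable sketch; the checks and remediations here are placeholders, not real integrations:

```python
# Sketch of a self-healing health-check pass: run each check; on failure,
# invoke its remediation and re-check. Checks and fixes are placeholders.
def check_disk():   return True           # pretend disk usage is healthy
def check_feed():   return check_feed.ok  # pretend a data feed may be stale
check_feed.ok = False
def restart_feed(): check_feed.ok = True  # remediation "restarts" the feed

CHECKS = [
    ("disk_usage", check_disk, None),          # no automatic fix available
    ("report_feed", check_feed, restart_feed), # self-healing fix attached
]

def run_health_checks():
    results = {}
    for name, check, remediate in CHECKS:
        healthy = check()
        if not healthy and remediate is not None:
            remediate()        # self-heal, then verify before alerting
            healthy = check()
        results[name] = "OK" if healthy else "ALERT"
    return results

print(run_health_checks())  # {'disk_usage': 'OK', 'report_feed': 'OK'}
```

In a real deployment the checks would query the SAS platform or pipeline scheduler and an "ALERT" result would page someone; the structure, remediate first and escalate only on a second failure, is what reduces human intervention.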
Posted 2 months ago
10 - 16 years
5 - 15 Lacs
Gurgaon
Remote
Requirements:
- 10 years of experience as a Senior SQL Database Administrator
- AWS Cloud SQL development
- AWS cloud services (RDS, EC2, S3), Azure
- CI/CD pipelines: code deployment, debugging, report generation
- Database replication
Posted 2 months ago
5 - 10 years
13 - 23 Lacs
Bengaluru
Work from Office
Hi, Greetings from Sun Technology Integrators!!
This is regarding a job opening with Sun Technology Integrators, Bangalore. Please find below the job description for your reference. Kindly let me know your interest and share your updated CV to nandinis@suntechnologies.com with the below details ASAP:
- C.CTC
- E.CTC
- Notice Period
- Current location
- Are you serving notice period / immediate?
- Exp in Snowflake
- Exp in Matillion
Shift timings: 2:00 PM-11:00 PM (free cab facility - drop) + food.
Please let me know if any of your friends are looking for a job change, and kindly share the references. Only serving/immediate candidates can apply.
Interview Process: 2 rounds (virtual) + final round (F2F).
Please Note: WFO - Work From Office (no hybrid or work from home).
Mandatory skills: Snowflake, SQL, ETL, Data Ingestion, Data Modeling, Data Warehouse, Python, Matillion, AWS S3, EC2.
Preferred skills: SSIR, SSIS, Informatica, Shell Scripting.
Venue Details: Sun Technology Integrators Pvt Ltd, No. 496, 4th Block, 1st Stage, HBR Layout (a stop ahead of Nagawara towards K. R. Puram), Bangalore 560043.
Company URL: www.suntechnologies.com
Thanks and Regards,
Nandini S | Sr. Technical Recruiter
Sun Technology Integrators Pvt. Ltd.
nandinis@suntechnologies.com
www.suntechnologies.com
Posted 2 months ago
5 - 10 years
6 - 10 Lacs
Kolkata
Work from Office
Seeking an AWS-certified professional with expertise in cloud platforms, serverless architecture, monitoring, and highly available systems to manage, optimize, and secure AWS infrastructure while leading and mentoring teams. Key Skills: - AWS Services: IAM, EC2, VPC, ELB/ALB, Auto Scaling, Lambda - AWS Managed Products: EKS, ECS, ECR, Route 53, SES, ElastiCache, RDS, Redshift - Cloud Platforms: Expertise in AWS infrastructure and services - Serverless Development Architecture - Operating Systems: Linux - Monitoring and Alerting: Implementing and improving monitoring stacks - Security: SSH, cloud connectivity, and security protocols - System Reliability: High availability, production systems, and configuration management - Automation and Scripting: Installing and enhancing scripts - Team Leadership: Mentoring and guiding teams on new technologies - Certifications: AWS Certified Solutions Architect, Developer, DevOps Engineer, SysOps Administrator
Posted 2 months ago
3 - 5 years
40 - 45 Lacs
Bhubaneshwar, Kochi, Kolkata
Work from Office
We are seeking experienced Data Engineers with over 3 years of experience to join our team at Intuit, through Cognizant. The selected candidates will be responsible for developing and maintaining scalable data pipelines, managing data warehousing solutions, and working with advanced cloud environments. The role requires strong technical proficiency and the ability to work onsite in Bangalore.
Key Responsibilities:
- Design, build, and maintain data pipelines to ingest, process, and analyze large datasets using PySpark.
- Work on Data Warehouse and Data Lake solutions to manage structured and unstructured data.
- Develop and optimize complex SQL queries for data extraction and reporting.
- Leverage AWS cloud services such as S3, EC2, EMR, Athena, and Redshift for data storage, processing, and analytics.
- Collaborate with cross-functional teams to ensure the successful delivery of data solutions that meet business needs.
- Monitor data pipelines and troubleshoot any issues related to data integrity or system performance.
Required Skills:
- 3 years of experience in data engineering or related fields.
- In-depth knowledge of Data Warehouses and Data Lakes.
- Proven experience in building data pipelines using PySpark.
- Strong expertise in SQL for data manipulation and extraction.
- Familiarity with AWS cloud services, including S3, EC2, EMR, Athena, Redshift, and other cloud computing platforms.
Preferred Skills:
- Python programming experience is a plus.
- Experience working in Agile environments with tools like JIRA and GitHub.
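The "complex SQL queries for data extraction and reporting" duty might look like the following self-contained sketch, using Python's built-in sqlite3 stand-in rather than Athena or Redshift; the schema and data are invented:

```python
import sqlite3

# Invented mini-warehouse: one fact table of events. The aggregation below
# mirrors the kind of extraction/reporting query the role describes.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events(user_id TEXT, amount REAL, region TEXT);
    INSERT INTO events VALUES
        ('u1', 10.0, 'south'), ('u1', 5.0, 'south'),
        ('u2', 7.5,  'north'), ('u3', 2.5,  'south');
""")

query = """
    SELECT region,
           COUNT(DISTINCT user_id) AS users,
           SUM(amount)             AS revenue
    FROM events
    GROUP BY region
    ORDER BY revenue DESC
"""
for row in conn.execute(query):
    print(row)
# ('south', 2, 17.5)
# ('north', 1, 7.5)
```

The same SQL text would run largely unchanged against Athena or Redshift; what the posting calls "optimization" is mostly about partitioning, distribution keys, and avoiding full scans on warehouse-scale tables.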
Posted 2 months ago
8 - 13 years
25 - 40 Lacs
Chennai, Bengaluru, Coimbatore
Hybrid
Job Title: Lead Cloud Engineers
Experience: 6+ Years
Location: Chennai/Coimbatore/Bangalore
Job Overview - Requirements:
1. Cloud (Mandatory): Proven technical experience with AWS, scripting, and automation. Hands-on knowledge of services and implementations such as Landing Zone, Control Tower, Transit Gateway, CloudFront, IAM, VPC, EC2, S3, Lambda, Load Balancers, Auto Scaling, etc. Experience in scripting languages such as Python, Bash, Ruby, Groovy, Java, JavaScript.
2. Automation (Mandatory): Hands-on experience with Infrastructure as Code (IaC) automation and configuration management tools such as Terraform, CloudFormation, Azure ARM, Bicep, Ansible, Chef, or Puppet.
3. CI/CD (Mandatory): Hands-on experience in setting up or developing CI/CD pipelines using tools such as (but not limited to) Jenkins, AWS CodeCommit, CodeBuild, CodePipeline, CodeDeploy, GitLab CI, Azure DevOps.
4. Containers & Orchestration (Good to have): Hands-on experience provisioning and managing containers and orchestration solutions such as Docker & Docker Swarm, Kubernetes (private/public cloud platforms), OpenShift, Helm charts.
Certification Expectations:
1. Cloud (Mandatory, any of): AWS Certified SysOps Administrator - Associate; AWS Certified Solutions Architect - Associate; AWS Certified Developer - Associate; any AWS Professional/Specialty certification(s).
2. Automation (Optional, any of): Red Hat Certified Specialist in Ansible Automation; HashiCorp Terraform Certified Associate.
3. CI/CD (Optional): Certified Jenkins Engineer.
4. Containers & Orchestration (Optional, any of): CKA (Certified Kubernetes Administrator); Red Hat Certified Specialist in OpenShift Administration.
Responsibilities:
- Lead architecture and design discussions with architects and clients.
- Understanding of technology best practices and AWS frameworks such as the Well-Architected Framework.
- Implement solutions with an emphasis on cloud security, cost optimization, and automation.
- Independently handle customer engagements and new deals.
- Ability to manage teams and drive results.
- Ability to initiate proactive meetings with leads and extended teams to highlight any gaps, delays, or other challenges.
- Subject matter expert in technology; ability to train/mentor the team in functional and technical skills.
- Ability to decide on and provide adequate help with the career progression of people.
- Handle assets development.
- Support the application team: work with application development teams to design, implement, and where necessary automate infrastructure on cloud platforms.
- Continuous improvement: certain engagements will require you to support and maintain existing cloud environments with an emphasis on continuously innovating through automation and enhancing stability/availability through monitoring and improving the security posture.
Posted 2 months ago
5 - 8 years
3 - 7 Lacs
Bengaluru, Hyderabad
Work from Office
Key Responsibilities:
- Design, implement, and maintain cloud-based infrastructure on AWS.
- Manage and monitor AWS services, including EC2, S3, Lambda, RDS, CloudFormation, VPC, etc.
- Develop automation scripts for deployment, monitoring, and scaling using AWS services.
- Collaborate with DevOps teams to automate build, test, and deployment pipelines.
- Ensure the security and compliance of cloud environments using AWS security best practices.
- Optimize cloud resource usage to reduce costs while maintaining high performance.
- Troubleshoot issues related to cloud infrastructure and services.
- Participate in capacity planning and disaster recovery strategies.
- Monitor application performance and make necessary adjustments to ensure optimal performance.
- Stay current with new AWS features and tools and evaluate their applicability for the organization.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience as an AWS Engineer or in a similar cloud infrastructure role.
- In-depth knowledge of AWS services, including EC2, S3, RDS, Lambda, VPC, CloudWatch, etc.
- Proficiency in scripting languages such as Python, Shell, or Bash.
- Experience with infrastructure-as-code tools like Terraform or AWS CloudFormation.
- Strong understanding of networking concepts, cloud security, and best practices.
- Familiarity with containerization technologies (e.g., Docker, Kubernetes) is a plus.
- Excellent problem-solving, analytical, and troubleshooting skills.
- Strong communication skills, both written and verbal.
- AWS certifications (AWS Certified Solutions Architect, AWS Certified DevOps Engineer, etc.) are preferred.
Preferred Skills:
- Experience with serverless architectures and services.
- Knowledge of CI/CD pipelines and DevOps methodologies.
- Experience with monitoring and logging tools like CloudWatch, Datadog, or Prometheus.
- Knowledge of AWS FinOps.
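The monitoring-and-scaling automation described above often reduces to evaluating recent metric datapoints against a threshold before acting, which is how CloudWatch alarms behave. A toy, locally runnable sketch (the metric values and 80% threshold are invented):

```python
# Toy alarm evaluator in the spirit of a CloudWatch-style check: compare the
# mean of the most recent datapoints to a threshold. Numbers are invented.
def alarm_state(datapoints, threshold, periods=3):
    """Return 'ALARM' if the mean of the last `periods` values breaches threshold."""
    recent = datapoints[-periods:]
    avg = sum(recent) / len(recent)
    return "ALARM" if avg > threshold else "OK"

cpu_utilization = [35, 42, 88, 91, 95]              # percent, oldest first
print(alarm_state(cpu_utilization, threshold=80))    # ALARM: mean(88,91,95) > 80
```

Requiring several consecutive breaching periods, rather than reacting to a single datapoint, is what keeps an auto-scaling or paging action from firing on momentary spikes.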
Posted 2 months ago
5 - 7 years
3 - 7 Lacs
Bengaluru, Hyderabad
Work from Office
Key Responsibilities:
- Design, implement, and maintain cloud-based infrastructure on AWS.
- Manage and monitor AWS services, including EC2, S3, Lambda, RDS, CloudFormation, VPC, etc.
- Develop automation scripts for deployment, monitoring, and scaling using AWS services.
- Collaborate with DevOps teams to automate build, test, and deployment pipelines.
- Ensure the security and compliance of cloud environments using AWS security best practices.
- Optimize cloud resource usage to reduce costs while maintaining high performance.
- Troubleshoot issues related to cloud infrastructure and services.
- Participate in capacity planning and disaster recovery strategies.
- Monitor application performance and make necessary adjustments to ensure optimal performance.
- Stay current with new AWS features and tools and evaluate their applicability for the organization.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience as an AWS Engineer or in a similar cloud infrastructure role.
- In-depth knowledge of AWS services, including EC2, S3, RDS, Lambda, VPC, CloudWatch, etc.
- Proficiency in scripting languages such as Python, Shell, or Bash.
- Experience with infrastructure-as-code tools like Terraform or AWS CloudFormation.
- Strong understanding of networking concepts, cloud security, and best practices.
- Familiarity with containerization technologies (e.g., Docker, Kubernetes) is a plus.
- Excellent problem-solving, analytical, and troubleshooting skills.
- Strong communication skills, both written and verbal.
- AWS certifications (AWS Certified Solutions Architect, AWS Certified DevOps Engineer, etc.) are preferred.
Preferred Skills:
- Experience with serverless architectures and services.
- Knowledge of CI/CD pipelines and DevOps methodologies.
- Experience with monitoring and logging tools like CloudWatch, Datadog, or Prometheus.
- Knowledge of AWS FinOps.
Posted 2 months ago
8 - 10 years
25 - 30 Lacs
Hyderabad
Work from Office
The primary responsibility of an Application Development Advisor in Cigna Specialty Technology (comprising Dental, Vision, Supplemental Health, and Stop Loss) is to deliver and support working software, where our team leverages technologies including JavaScript, Angular, HTML, Python, Java, .NET C#, Oracle PL/SQL, Oracle Apex, and other technologies. Software includes smaller-scale business tools, medium-scale business applications serving the needs of departmental units, larger-scale platforms that administer Cigna's Specialty product solutions offered to our customers and clients, and system-to-system integration services between software units to support the flow of information across systems for end-to-end processing. Application Development Advisors collaborate within agile scrum teams, delivering software using SDLC and Agile best practices, industry standards, and Cigna guidelines to meet the needs of our businesses and to ensure quality, effectiveness, and scalability for growth. Our team members work closely with business teams and other technology units within the Cigna enterprise to deliver end-to-end solutions, requiring skills to understand the businesses where Cigna Specialty Technology operates; to refine the requirements for solutions needed and to uncover dependencies; to work closely with Systems Architects, Product Owners, and Production Support teams in designing, coding, and testing; and ultimately to deliver working software that integrates within the broader Cigna ecosystem and enables our Cigna Specialty businesses. We are looking to our application development advisors to bring a DevOps mindset, characterized by an automation-first and continuous-improvement orientation. Candidates should have expertise in software engineering as well as the SDLC using Agile methodologies.
Experience creating web-based applications, either directly or with low-code tools, as well as designing and delivering integration services (developed through an understanding of the interrelationships between systems and built using library calls, REST APIs, database queries, etc.), is most desired in order to achieve desired outcomes. In the end, the software we produce is all about creating business value for our customers and clients within the Cigna Specialty product space.
Job Description & Responsibilities:
- Collaborate, learn, and deliver software as part of an Agile scrum team.
- Learn the Cigna Supplemental Health businesses to work directly with Cigna Specialty business stakeholders in understanding needs and requirements.
- Demonstrated skill in using coding standards and reusable code, and being an active participant in code reviews.
- Strong understanding of development and testing techniques and toolsets.
- Design, configuration, and implementation of middleware products and application design/development within the supported technologies and products.
- Support applications through proactive monitoring and troubleshooting, and manage the design of supported applications, assuring performance, availability, security, and capacity.
- Incorporate automated testing, CI/CD, and an Agile/Lean mindset, both in collaboration with colleagues and in the software delivered.
Experience Required: Typically 8+ years of experience in IT, specifically within application development or integration services development. 5+ years of experience within the following areas is required:
- Web services experience using Python-based frameworks.
- Database experience, including data modelling and authoring stored procedures, leveraging Oracle, MS SQL Server, or PostgreSQL.
- Experience using Python.
Experience Desired: Experience within the following technologies is desired:
- Python frameworks.
- Oracle PL/SQL stored procedures and web services.
- Web development with UI development using HTML and JavaScript, leveraging either Angular or React.
- Exposure to serverless, event-driven frameworks: EC2, EKS, ECS, Lambda, Step Functions, SQS, SNS, Jenkins pipelines, API Gateway.
- AWS-specific tools with Python development required, with a DevOps background.
- Experience with Agile development (Scrum methodology) is good to have.
Education and Training Required:
Primary Skills: Advanced concepts with relevant, hands-on experience in many of the following areas are generally preferred:
- Build serverless, message-driven, and event-driven interfaces using available cloud solutions.
- Help design, create, and manage continuous delivery pipelines for your team's code and deliverables using Jenkins/GitHub/Airflow/Terraform.
- Message- and event-driven architecture; Enterprise Integration Patterns.
- Interface with data behind the scenes by creating APIs with Python REST frameworks.
- Database development & tuning; performance (threading, indexing, clustering, caching); transaction management.
- Document-centric data architecture (XML DB/NoSQL/JSON).
- UI development (HTML5, Angular, Bootstrap).
Additional Skills:
- Constantly look at and evaluate new technologies to see if they can bring value to the organization.
- Leverage existing open-source frameworks and third-party components/libraries to develop robust enterprise solutions. A mentality of integrating and reusing existing capabilities vs. building from scratch is highly desired.
- Analytical Skills: Candidate must be able to recognize the needs of customers and create simple solutions that answer those needs. Ability to document analysis and scenarios is critical.
- Communication: Candidate must be able to clearly communicate their ideas to peers, stakeholders, and management.
- Creativity: Creativity is needed to help invent new ways of approaching problems and developing innovative applications, as well as bringing experience from other industries.
- Customer Service: If dealing directly with clients and customers, the candidate would need good customer service skills and a consultant mentality to answer questions and fix issues.
- Teamwork: Candidate must work well with others as part of a distributed agile (SAFe) team of developers, analysts, QA, and more.
- Industry Experience: Prior work experience within Insurance, Health Insurance, or Financial Services preferred.
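The "creating APIs with Python REST frameworks" skill above can be illustrated without any third-party framework: every Python web framework ultimately speaks WSGI. A minimal sketch of a REST-style JSON endpoint, exercised directly; the routes and payloads are invented, not Cigna's:

```python
import json

# Minimal WSGI sketch of a REST-style endpoint. Plan names/premiums invented.
PLANS = {"dental-basic": {"premium": 19.0}, "vision-plus": {"premium": 9.5}}

def app(environ, start_response):
    """Look up a plan by URL path and return it as JSON."""
    path = environ.get("PATH_INFO", "/")
    plan = PLANS.get(path.strip("/"))
    if plan is None:
        start_response("404 Not Found", [("Content-Type", "application/json")])
        return [b'{"error": "unknown plan"}']
    start_response("200 OK", [("Content-Type", "application/json")])
    return [json.dumps(plan).encode()]

# Exercise the app directly, the way a WSGI server would call it:
captured = {}
def start_response(status, headers):
    captured["status"] = status

body = b"".join(app({"PATH_INFO": "/dental-basic"}, start_response))
print(captured["status"], body)  # 200 OK b'{"premium": 19.0}'
```

A framework such as Flask or FastAPI replaces the manual routing and `start_response` plumbing with decorators, but the request-in, JSON-out contract is the same.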
Posted 2 months ago
5 - 8 years
5 - 8 Lacs
Bengaluru
Work from Office
Key Responsibilities:
- Solid experience in AWS IaaS deployment: Pipelines, IAM, VPCs, Security Groups, VPN, microservices, CloudTrail, etc.
- Knowledge of Amazon Web Services such as EC2, S3, SQS, Route 53, Amplify, DynamoDB, Neptune.
- Experience in developing or administering the security of AWS cloud environments.
- Experience in cross-account deployment of resources using Pipelines, CodeCommit, CodeBuild.
- Practical knowledge of several security practices in the SDLC and supporting IT security tools.
- Improve existing monitoring to provide end-to-end observability of our platform.
- Scale our platform and processes to continue serving our growing customer base.
- Define and implement disaster recovery processes.
- Automation scripting skills: Python or equivalent.
- Build and support the Site Reliability function and participate in building tools to report system KPIs.
- Deliver tasks based on project objectives; technically support projects through to completion.
- Must be able to work independently or with a team, under minimum supervision.
- Articulate verbal and written communication.
- Eagerness to share knowledge across engineering teams.
- Has worked in a fast-paced, dynamic environment.
Qualifications:
- Bachelor's or Master's degree in Computer Science, a related field, or equivalent work experience.
- Minimum of 4+ years of experience.
- Prior experience working in an SRE/DevOps/Cloud Engineering role on a cross-functional agile team.
- Experience working with industry standards or programs such as SOC 2, ISO, HITRUST is a plus.
- AWS Certification, CISSP, or Security+ is a plus.
- Ability to improve automation through the CI/CD pipeline through analysis of the current process using tools.
- Experience developing deployment strategies for SaaS applications.
Additional Information: At Privaini Software India Private Limited, we value diversity and always treat all employees and job applicants based on merit, qualifications, competence, and talent.
We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
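The "tools to report system KPIs" responsibility in this SRE role often starts with something as simple as computing availability from up/down samples and comparing it to a target. A toy sketch (the sample data and the 99.9% target are illustrative):

```python
# Toy KPI report: compute availability from minute-by-minute up(1)/down(0)
# samples over one day. Data and the 99.9% target are invented.
samples = [1] * 1437 + [0] * 3   # 1440 minutes, 3 minutes of downtime

def availability(samples):
    """Percentage of samples in which the system was up."""
    return 100.0 * sum(samples) / len(samples)

pct = availability(samples)
print(f"availability: {pct:.3f}%  target met: {pct >= 99.9}")
```

Three minutes of downtime in a day already breaches a 99.9% daily target (which allows roughly 1.4 minutes), the kind of arithmetic that turns a KPI dashboard into an actionable error budget.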
Posted 2 months ago
5 - 8 years
15 - 25 Lacs
Pune, Indore
Hybrid
Hi, we are looking for a senior resource in AWS Cloud Solutions. You may also share your resume at pallavinis.londhe@infobeans.com
JD in brief:
- Bachelor's degree in Computer Science or a related field, with 5+ years of AWS administration experience.
- Deep knowledge of AWS services, best practices, and optimization strategies.
- Strong understanding of underlying web and network technologies (HTTP, SSL, DNS, Application Load Balancer, etc.).
- Experience with continuous integration and deployment tools (e.g., Jenkins, Maven, GitLab).
- Proficiency in storage management and user provisioning.
- Working knowledge of infrastructure-as-code platforms such as CloudFormation or Terraform.
- Proven experience enforcing security best practices, managing IAM roles, configuring firewalls, and ensuring compliance with SOC 2 and other industry standards.
- Ability to work independently and collaboratively in a team environment.
- Excellent communication and documentation skills.
- Experience working in Agile/Scrum teams.
- Experience defining and implementing AWS network resources, including VPCs, subnets, and route tables.
- Familiarity with high-availability AWS design patterns, including application load balancers, API Gateways, and auto-scaling groups.
- Experience supporting network infrastructure tools such as VPNs, network threat management, vulnerability assessment, and firewalls.
- Experience applying optimization and cost-saving techniques for AWS storage services, including EBS, S3, EFS, and Glacier.
- Experience working with a purely serverless architecture with AWS Lambda.
- Up-to-date knowledge of information security management frameworks and experience implementing technical controls aligned with security policies.
- Understanding of information technology security best practices to ensure safe handling and storage of sensitive data in the cloud, and data backup/recovery.
- Experience working with relational databases in RDS-managed environments.
- Experience working with NoSQL database systems like Amazon DynamoDB.
- Experience with blue/green, canary, and rolling deployment strategies.
- Certifications: AWS Certified DevOps Engineer.
- Ability to create scripts in Python.
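The blue/green, canary, and rolling strategies listed above differ in how traffic shifts to the new version. The canary case can be sketched as a stepwise rollout with a rollback guard; all numbers below (traffic steps, the 1% error budget) are invented for illustration:

```python
# Sketch of a canary rollout decision: shift traffic in steps, roll back if
# the canary's observed error rate exceeds a budget. Numbers are invented.
STEPS = [5, 25, 50, 100]        # percent of traffic sent to the new version

def run_canary(error_rates):
    """error_rates[i] = canary error % observed at step i. Returns history."""
    history = []
    for step, err in zip(STEPS, error_rates):
        if err > 1.0:                        # 1% error budget (illustrative)
            history.append((step, "ROLLBACK"))
            return history                   # stop: revert to old version
        history.append((step, "PROMOTE"))    # healthy: widen the canary
    return history

print(run_canary([0.2, 0.4, 2.5]))
# [(5, 'PROMOTE'), (25, 'PROMOTE'), (50, 'ROLLBACK')]
```

Blue/green is the degenerate case of a single 0-to-100 step with the old environment kept warm for instant rollback; rolling replaces instances in batches instead of shifting a traffic percentage.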
Posted 2 months ago
5 - 10 years
8 - 13 Lacs
Ahmedabad, Hyderabad
Work from Office
Key Responsibilities:
- GitHub CI/CD: Lead the development and maintenance of GitHub CI/CD pipelines. Design, implement, and maintain CI/CD pipelines using GitHub Actions, focusing on reusable workflows for efficiency.
- Cloud-Native Solutions: Design and implement cloud-native solutions using AWS and Azure services, integrating GitHub with various cloud components for seamless deployments.
- Problem Solving: Demonstrate strong problem-solving and troubleshooting skills to address complex technical issues and optimize system performance.
- DevOps & Agile Methodologies: Apply advanced DevOps principles and Agile practices, including Infrastructure as Code (IaC) and GitOps, to streamline and enhance development workflows.
- Infrastructure Management: Oversee the management of Linux-based infrastructure and understand networking concepts, including microservices communication and service mesh implementations.
- Containerization & Orchestration: Leverage Docker and Kubernetes for containerization and orchestration, with experience in service discovery, auto-scaling, and network policies.
- Automation & Scripting: Automate infrastructure management using advanced scripting and IaC tools such as Terraform, Ansible, Helm charts, and Python.
- AWS and Azure Services Expertise: Utilize a broad range of AWS and Azure services, including IAM, EC2, S3, Glacier, VPC, Route53, EBS, EKS, ECS, RDS, Azure Virtual Machines, Azure Blob Storage, Azure Kubernetes Service (AKS), and Azure SQL Database, with a focus on integrating new cloud innovations.
- Incident Management: Manage incidents related to GitHub pipelines and deployments, perform root cause analysis, and resolve issues to ensure high availability and reliability.
- Development Processes: Define and optimize development, test, release, update, and support processes for GitHub CI/CD operations, incorporating continuous improvement practices.
- Architecture & Development Participation: Contribute to architecture design and software development activities, ensuring alignment with industry best practices and GitHub capabilities.
- Strategic Initiatives: Collaborate with the leadership team on process improvements, operational efficiency, and strategic technology initiatives related to GitHub and cloud services.

Required Skills & Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Experience: 5-10 years of hands-on experience with GitHub CI/CD, including implementing, configuring, and maintaining pipelines, along with substantial experience in AWS and Azure cloud services.
- Certifications: GitHub Actions Certified Specialist, AWS Certified Solutions Architect - Professional, and/or Azure Solutions Architect Expert certifications are highly desirable.
- Communication Skills: Excellent communication skills with the ability to mentor team members, collaborate effectively, and drive process improvements.
- Technical Expertise: Strong experience with GitHub, modern DevOps tools, cloud technologies (AWS and Azure), and infrastructure automation, including proficiency in containerization, orchestration, and CI/CD practices.
- Analytical Skills: Demonstrated problem-solving and analytical skills, with a proactive approach to learning and implementing new technologies.
- Agile/SCRUM Knowledge: Working knowledge of Agile/SCRUM methodologies and traditional SDLC project execution.
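The "reusable workflows" responsibility above can be sketched as a pair of GitHub Actions files: one workflow exposed via `workflow_call`, and one caller that pins its inputs. All file names, job names, and the pytest build step are illustrative assumptions, not a prescribed setup.

```yaml
# .github/workflows/reusable-build.yml -- reusable workflow invoked via workflow_call
name: reusable-build
on:
  workflow_call:
    inputs:
      python-version:
        type: string
        default: "3.12"
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ inputs.python-version }}
      - run: pip install -r requirements.txt && pytest
---
# .github/workflows/ci.yml -- caller workflow reusing the pipeline above
name: ci
on: [push]
jobs:
  build:
    uses: ./.github/workflows/reusable-build.yml
    with:
      python-version: "3.11"
```

Centralizing the build in one `workflow_call` file means version bumps and security fixes land in a single place rather than in every repository's pipeline.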
Posted 2 months ago
7 - 8 years
12 - 17 Lacs
Hyderabad, Noida, Kolkata
Work from Office
At AppSquadz (https://www.appsquadz.com/), we are looking for a Senior DevOps Engineer to help us build functional systems that improve the customer experience. The DevOps engineer will be responsible for deploying product updates, identifying production issues, and implementing integrations that meet our customers' needs. The ideal candidate will have a solid background in software engineering, be familiar with CI/CD tools such as Azure DevOps, and work with developers and engineers to ensure that software development follows established processes and works as intended. The DevOps engineer will also be responsible for building and implementing new infrastructure in public cloud services.

Objectives of this role:
- Strong understanding of AWS infrastructure services, including AWS Lambda, Backup, RDS, EC2, VPC, S3, Load Balancers, CloudTrail, and CloudWatch.
- Hands-on experience with infrastructure as code (IaC) tools such as Terraform and CloudFormation.
- Expertise in containerization technologies like Docker and Kubernetes.
- Experience with monitoring and logging tools such as Prometheus and the ELK Stack.
- Strong knowledge of networking concepts and security best practices.
- Solid understanding of firewalls, including Fortinet and Palo Alto.
- Excellent scripting skills in Bash, Python, or similar languages.
- Strong problem-solving and troubleshooting abilities.

Required skills and qualifications:
- Experience as a DevOps Engineer or in a similar software engineering role.
- Proficiency with at least one source code version control solution (GitHub, Bitbucket).
- Proficiency in build and release management, with experience in at least one CI/CD tool (Azure DevOps or Jenkins, preferably Azure DevOps), including setup, templating, and configuration.
- Experience managing Docker orchestration and containerization using Kubernetes.
- Experience using DevOps tools for artifact repositories, vaults, and code scanning.
- Problem-solving attitude.
- Collaborative team spirit.

Experience: 7 to 8 years
Posted 2 months ago
7 - 11 years
30 - 35 Lacs
Bengaluru
Work from Office
1. The resource should have knowledge of Data Warehouses and Data Lakes
2. Should be aware of building data pipelines using PySpark
3. Should be strong in SQL skills
4. Should have exposure to the AWS environment and services like S3, EC2, EMR, Athena, Redshift, etc.
5. Good to have: programming skills in Python
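The SQL requirement above is the kind of warehouse-style aggregation engines like Athena or Redshift run daily. As a self-contained sketch, the same query shape works against Python's stdlib sqlite3 (the table and data are invented for the example):

```python
import sqlite3

# In-memory stand-in for a warehouse table; Athena/Redshift would run the same SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("south", 120.0), ("south", 80.0), ("north", 50.0)],
)

# Typical pipeline aggregation: total per region, largest first.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total"
    " FROM sales GROUP BY region ORDER BY total DESC"
).fetchall()
print(rows)  # [('south', 200.0), ('north', 50.0)]
```

In a PySpark pipeline the identical statement could be submitted through `spark.sql(...)` against a registered table.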
Posted 2 months ago
5 - 8 years
20 - 27 Lacs
Chennai, Mumbai, Bengaluru
Work from Office
We are looking for a Python Developer. The candidate should have at least 1 year of experience with the Django framework, at least 1 year of experience with AWS or Azure, and should be good in PySpark.
- Django: should be proficient in coding and must have worked on at least 1 project; Flask, FastAPI, DRF (Django REST framework), HTML, CSS, JavaScript - React.
- AWS: Lambda and EC2 instances.
- Docker: should be able to create Dockerfiles.
- PostgreSQL or any other RDBMS is required.
- Middleware might require Pandas or NumPy/SciPy libraries - a good-to-have skill; it can be learned.
- Work experience with geolocation data types preferred.
- Minimum Jinja2 templating experience is expected - Jinja or Mako, one of them is fine, to show that the candidate has worked with templating systems.
- Excellent written and verbal communication skills are required.
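The templating requirement above boils down to substituting values into a markup skeleton. This sketch uses the stdlib `string.Template` as a dependency-free stand-in (note: real Jinja2 syntax differs, using `{{ name }}` plus loops, filters, and inheritance; the page content here is invented):

```python
from string import Template

# A tiny page skeleton with two placeholders, in the spirit of a Jinja2 template.
page = Template("<h1>Hello, $name!</h1><p>You have $count alerts.</p>")
html = page.substitute(name="Asha", count=3)
print(html)  # <h1>Hello, Asha!</h1><p>You have 3 alerts.</p>
```

Django's own template language and Jinja2/Mako follow the same render-a-context-into-markup model, just with richer syntax.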
Posted 2 months ago
2 - 7 years
12 - 22 Lacs
Bengaluru
Work from Office
Location: Bangalore

At Practo, we are on a mission to simplify healthcare and ensure that every individual has access to quality care. As a leading digital healthcare platform, we connect millions of patients with healthcare providers, making healthcare services more accessible and efficient. Join our dynamic team and contribute to transforming the future of healthcare.

Job Overview: Practo is looking for a skilled Site Reliability Engineer (SRE) to join our team. The SRE will play a critical role in maintaining the reliability, performance, and scalability of our services. This role involves working with cloud platforms such as AWS, Azure, and Oracle, managing Ubuntu-based systems, and ensuring seamless operation of our infrastructure. The ideal candidate will have a strong background in system administration, cloud technologies, and modern DevOps practices.

Key Responsibilities:
- Infrastructure Management: Design, implement, and manage scalable, resilient, and secure infrastructure on cloud providers such as AWS, Azure, and Oracle. Oversee the administration of Ubuntu servers, ensuring optimal performance and uptime.
- Automation and Monitoring: Implement monitoring and alerting systems to proactively identify and resolve issues before they impact users. Automate repetitive tasks to improve system reliability and operational efficiency.
- Containerization and Orchestration: Deploy and manage containerized applications using Docker. Utilize Kubernetes for container orchestration, ensuring efficient and reliable application deployment and scaling.
- Performance Optimization: Analyze system performance metrics and optimize infrastructure to meet performance targets. Troubleshoot and resolve issues related to server performance, network latency, and other system bottlenecks.
- Collaboration and Support: Work closely with development teams to ensure new applications and features are designed with reliability and scalability in mind. Provide guidance and mentorship to junior engineers on best practices for system reliability and cloud management. Participate in on-call rotations to provide 24/7 support for critical issues.
- Security and Compliance: Implement security best practices across all infrastructure components, including firewalls, VPNs, and access controls. Ensure compliance with industry standards and internal policies for data protection and privacy.

Technical Skills:
- Proven experience with cloud providers: AWS, Azure, and Oracle.
- Strong proficiency in managing and troubleshooting Ubuntu operating systems.
- Hands-on experience with Nginx, Kubernetes, and Docker.
- Familiarity with scripting languages (e.g., Bash, Python) for automation tasks.
- Experience with CI/CD pipelines and tools like Jenkins, GitLab CI, or equivalent.
- Knowledge of networking fundamentals and security best practices.

Professional Experience:
- 2+ years of experience in a Site Reliability Engineer or similar role.
- Excellent problem-solving skills and attention to detail.
- Strong communication skills, with the ability to collaborate effectively with cross-functional teams.
- Self-motivated with the ability to work independently and as part of a team.
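The proactive monitoring-and-alerting responsibility above often comes down to a percentile check over a window of latency samples. A minimal sketch, assuming the nearest-rank p95 definition and an invented 300 ms threshold:

```python
import math

def p95(samples):
    """Nearest-rank 95th percentile of a non-empty sample list."""
    ordered = sorted(samples)
    rank = math.ceil(0.95 * len(ordered))
    return ordered[rank - 1]

def should_alert(latencies_ms, threshold_ms=300):
    """Fire when the p95 latency breaches the threshold."""
    return p95(latencies_ms) > threshold_ms

# Ten request latencies (ms) from a sliding window; one slow outlier.
window = [120, 180, 210, 95, 400, 150, 170, 160, 140, 130]
print(p95(window), should_alert(window))  # 400 True
```

Alerting on a tail percentile rather than the mean is the standard way to catch the slow requests an average would hide.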
Posted 2 months ago
3 - 5 years
5 - 7 Lacs
Pune
Work from Office
SENIOR CLOUD SITE RELIABILITY ENGINEER

ZS's CCoE (Cloud Center of Excellence) Team builds, maintains, and helps architect the systems enabling ZS client-facing software solutions. We define and implement best practices to ensure performant, resilient, and secure cloud solutions. The CCoE team at ZS comprises analytical problem solvers from diverse backgrounds who share a passion for quality delivery, whether our customer is a client or another ZS employee. The team has a presence in ZS's Evanston, Illinois and Pune, India offices.

What You'll Do

As a Senior Cloud Site Reliability Engineer, you will work with a team of operations engineers and software developers to analyze, maintain, and nurture our cloud solutions/products to support the company's ever-growing clientele. As a technical expert, you will work closely with various teams to ensure the stability of the environment by analyzing the current state, designing appropriate solutions, and working with the team to implement them.
- Coordinate emergency responses, perform root cause analysis, and identify and implement solutions to prevent recurrence
- Work with the team to identify ways to increase MTBF and lower MTTR for the environment
- Review each entire application stack and execute initiatives to reduce failures, defects, and issues with overall performance
- Identify and work with the team to implement more efficient system procedures
- Maintain environment monitoring systems to provide the best visibility into the state of the deployed products/solutions
- Perform root cause analysis on incoming infrastructure alerts and work with teams to resolve them
- Maintain performance analysis tools, identify any adverse changes to performance, and work with the teams to resolve them
- Research industry trends and technologies, and promote adoption of best-in-class tools and technologies
- Take the initiative to advance the quality, performance, or scalability of our cloud solutions by influencing the architecture or design of our products
- Design, develop, and execute automated tests to validate solutions and environments
- Troubleshoot issues across the entire stack: infrastructure, software, application, and network

What You'll Bring
- 3+ years of experience working as a Site Reliability Engineer or in an equivalent position
- 2+ years of experience with AWS cloud technologies; at least one AWS certification (Solutions Architect / DevOps Engineer) is required
- 1+ years of experience functioning as a senior member of an infrastructure/software team
- Hands-on experience with AWS services like EC2, RDS, EMR, CloudFront, ELB, API Gateway, CodeBuild, AWS Config, Systems Manager, Service Catalog, Lambda, etc.
- Full-stack IT experience with *nix, Windows, network/firewall concepts, source control (Bitbucket), and build/dependency management and continuous integration systems (TeamCity, Jenkins)
- Expertise in at least one scripting language, Python preferred
- Must have a firm understanding of application reliability, performance tuning, and scalability
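The MTBF/MTTR goals mentioned above reduce to simple arithmetic over incident records. This sketch assumes outages are recorded as (start, end) hour pairs within a reporting window and treats every incident as a full outage:

```python
def mtbf_mttr(incidents, period_hours):
    """Compute MTBF and MTTR from outage intervals in one reporting window.

    incidents: list of (start_hour, end_hour) outage intervals.
    MTTR = mean outage duration; MTBF = mean operating time between failures.
    """
    downtime = sum(end - start for start, end in incidents)
    n = len(incidents)
    mttr = downtime / n
    mtbf = (period_hours - downtime) / n
    return mtbf, mttr

# One 30-day window (720 h) with two outages: 2 h and 1 h.
mtbf, mttr = mtbf_mttr([(100, 102), (500, 501)], period_hours=720)
print(f"MTBF={mtbf} h, MTTR={mttr} h")  # MTBF=358.5 h, MTTR=1.5 h
```

Raising MTBF means fewer incidents per window; lowering MTTR means each incident's interval shrinks, and both fall straight out of this calculation.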
Posted 2 months ago
10 - 12 years
32 - 37 Lacs
Bengaluru
Work from Office
YOUR IMPACT: The Senior Site Reliability Engineer (SRE) will be responsible for ensuring the availability, reliability, and scalability of cloud infrastructure and services. This role focuses on automation, performance optimization, incident response, and CI/CD pipeline management to support highly available and resilient applications. The ideal candidate will bring deep expertise in AWS, Kubernetes, GitLab CI/CD, and Infrastructure as Code (IaC).

WHAT THE ROLE OFFERS:

Cloud Infrastructure & Reliability Engineering
- Architect, deploy, and maintain highly available and scalable cloud environments in AWS.
- Design and manage Kubernetes clusters (EKS) and containerized applications with Docker.
- Implement auto-scaling, load balancing, and fault tolerance for cloud services.
- Develop and optimize Infrastructure as Code (IaC) using Terraform, Tofu, or Ansible.

CI/CD & Automation
- Design, implement, and maintain CI/CD pipelines using GitLab CI/CD and ArgoCD.
- Automate deployment workflows, infrastructure provisioning, and release management.
- Ensure secure, compliant, and automated software delivery across multiple environments.

Monitoring, Incident Response & Performance Optimization
- Implement observability and monitoring using tools like CloudWatch, Prometheus, Grafana, ELK, or Datadog.
- Analyze system performance, detect anomalies, and optimize cloud resource utilization.
- Drive incident response and root cause analysis, ensuring fast recovery (low MTTR) and minimal downtime.
- Establish Service Level Objectives (SLOs) and error budgets to maintain system health.

Security & Compliance
- Implement security best practices, including IAM policies, encryption, network security, and vulnerability scanning.
- Automate patch management and security updates for cloud infrastructure.
- Ensure compliance with industry standards and regulations (SOC 2, ISO 27001, HIPAA, etc.).

Collaboration & Leadership
- Work closely with DevOps, security, and development teams to drive reliability best practices.
- Lead blameless postmortems and continuously improve operational processes.
- Provide mentorship and training to junior engineers on SRE principles and cloud best practices.
- Participate in on-call rotations, ensuring 24/7 reliability of production services.

WHAT YOU NEED TO SUCCEED:
- Bachelor's degree in Computer Science, Engineering, or equivalent experience.
- 10-12 years of experience in Site Reliability Engineering (SRE), DevOps, or Cloud Engineering.
- Expertise in AWS Cloud: hands-on experience with EC2, VPC, RDS, S3, IAM, Lambda, and EKS.
- Strong Kubernetes knowledge: hands-on experience with EKS, Helm charts, and cluster management.
- CI/CD experience: proficiency in GitLab CI/CD and ArgoCD for automated software deployments.
- Infrastructure as Code (IaC): experience with Terraform and Tofu.
- Monitoring & Logging: familiarity with CloudWatch, Prometheus, Grafana, ELK, or Datadog.
- Scripting & Automation: proficiency in Python, shell scripting, or Golang.
- Incident Management & Reliability Practices: experience with SLOs, SLIs, error budgets, and chaos engineering.
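The SLO and error-budget practice named above is just bookkeeping on allowed unavailability. A minimal sketch, assuming an availability SLO over a 30-day window:

```python
def error_budget(slo, window_minutes=30 * 24 * 60):
    """Allowed downtime (minutes) implied by an availability SLO over a window."""
    return (1 - slo) * window_minutes

def budget_remaining(slo, downtime_minutes, window_minutes=30 * 24 * 60):
    """Fraction of the error budget still unspent after observed downtime."""
    budget = error_budget(slo, window_minutes)
    return (budget - downtime_minutes) / budget

# A 99.9% SLO allows 43.2 minutes of downtime per 30 days.
print(round(error_budget(0.999), 2))                        # 43.2
print(round(budget_remaining(0.999, downtime_minutes=10.8), 2))  # 0.75
```

When `budget_remaining` nears zero, the usual SRE policy is to pause risky releases until reliability work replenishes the budget.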
Posted 2 months ago
7 - 12 years
9 - 14 Lacs
Hyderabad
Work from Office
As an SRE, you will work with AWS, Kubernetes, Jenkins, and GitLab CI/CD to drive automation, monitoring, incident response, and performance improvements. You will contribute to both operational excellence and strategic initiatives in cloud reliability and security.

WHAT THE ROLE OFFERS:

Cloud Infrastructure & Reliability Engineering
- Ensure 99.99% uptime and reliability of TDR services through proactive monitoring and optimizations.
- Architect, deploy, and manage AWS cloud environments including EC2, S3, RDS, EKS, IAM, Lambda, and CloudFormation.
- Manage and optimize Kubernetes (EKS) clusters and containerized applications using Docker and Helm.
- Improve Infrastructure as Code (IaC) using Terraform, Ansible, or CloudFormation to automate cloud deployments.

CI/CD & Automation
- Develop and maintain CI/CD pipelines in Jenkins, GitLab CI/CD, or ArgoCD for seamless software delivery.
- Automate infrastructure provisioning, deployments, and operational workflows.
- Ensure zero-downtime deployments and efficient release management.

On-Call Responsibilities & Incident Management
- Participate in a 24/7 on-call rotation, ensuring rapid response to incidents.
- Investigate, diagnose, and resolve production incidents while minimizing downtime (MTTR).
- Conduct blameless postmortems and implement fixes to prevent future incidents.
- Improve SLI/SLO monitoring and alerting mechanisms, and automate incident remediation.

Monitoring & Performance Optimization
- Implement and optimize monitoring, logging, and alerting using CloudWatch, Prometheus, Grafana, ELK, or Datadog.
- Enhance observability to detect anomalies and improve system performance.
- Optimize infrastructure costs and implement auto-scaling strategies for efficient resource utilization.

Security & Compliance
- Ensure security best practices, including IAM policies, encryption, and network security.
- Automate security compliance (SOC 2, ISO 27001, HIPAA) and vulnerability management.
- Regularly patch, audit, and secure cloud environments.

Collaboration & Leadership
- Work cross-functionally with DevOps, security, and development teams to drive reliability best practices.
- Mentor and coach junior engineers on SRE principles, automation, and cloud reliability.
- Contribute to team growth by improving operational workflows, documentation, and training.

Key KPIs Contributed by this Role
- Uptime & Reliability: Maintain high availability (99.99%) of TDR services.
- Incident Resolution: Reduce MTTR (Mean Time to Resolution) through automation and improved response times.
- Automation & Efficiency: Enhance operational efficiency by implementing self-healing, auto-scaling, and auto-remediation.
- Cost Optimization: Optimize cloud spending through scalable, right-sized infrastructure.
- Deployment Success: Support seamless infrastructure and CI/CD-driven production deployments.

WHAT YOU NEED TO SUCCEED:
- 7-12 years of experience in Site Reliability Engineering (SRE), DevOps, or Cloud Engineering.
- Expertise in AWS Cloud: hands-on experience with EC2, VPC, RDS, S3, IAM, Lambda, and EKS.
- Kubernetes & Containers: experience managing EKS, Helm charts, Docker, and container orchestration.
- CI/CD & Automation: proficiency in Jenkins, GitLab CI/CD, or ArgoCD for deployment automation.
- Infrastructure as Code (IaC): strong knowledge of Terraform, Ansible, or CloudFormation.
- Monitoring & Logging: familiarity with CloudWatch, Prometheus, Grafana, ELK, or Datadog.
- Scripting & Automation: experience in Python, shell scripting, or Golang.
- Incident Management & Reliability Best Practices: strong understanding of SLOs, SLIs, error budgets, and chaos engineering.
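The auto-scaling strategies mentioned above typically follow the proportional rule documented for the Kubernetes Horizontal Pod Autoscaler. A minimal sketch of that decision, with the min/max bounds invented for the example:

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=2, max_replicas=20):
    """HPA-style scaling rule:
    desired = ceil(current * currentMetric / targetMetric), clamped to bounds.
    """
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# 4 pods averaging 90% CPU against a 60% target -> scale out to 6 pods.
print(desired_replicas(4, current_metric=90, target_metric=60))  # 6
```

The same formula scales in as load drops (e.g., 4 pods at 30% against a 60% target yields 2), with the clamp preventing runaway growth or full scale-to-zero.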
Posted 2 months ago
7 - 12 years
8 - 14 Lacs
Delhi NCR, Mumbai, Bengaluru
Work from Office
Job Summary: We are seeking an experienced Neo4j Engineer with deep expertise in graph databases to join our team. The ideal candidate will design, develop, and deploy applications using Neo4j as the primary backend, while also working on the architecture of large-scale data environments. This role involves working with containerized microservices, leveraging AWS, and optimizing performance to deliver robust and scalable solutions.

Key Responsibilities:
- Neo4j Application Development: Design and develop applications that utilize Neo4j as the primary backend database. Build and optimize graph database models for efficient querying and data representation.
- Microservices Architecture: Develop and deploy containerized microservices using Java, Docker, and Kubernetes to enhance scalability and maintainability. Contribute to the development of cloud-native applications with a focus on Python and Java.
- AWS Deployment and Management: Utilize AWS services (e.g., EC2, ECS) to manage and deploy applications in the cloud, ensuring high availability and performance. Implement best practices for secure, scalable, and resilient cloud environments.
- Performance Optimization and Troubleshooting: Optimize Neo4j queries and configurations for handling large-scale data environments, ensuring efficiency and speed. Monitor and troubleshoot Neo4j databases, performing migrations and ensuring data integrity across environments.
- Data Architecture and Modeling: Contribute to the architecture and design of graph data models to support application needs. Stay updated on best practices, tools, and advancements in graph database technology.
- Cross-functional Collaboration: Collaborate with data scientists, engineers, and stakeholders to align Neo4j data models with application requirements.

Required Skills and Experience:
- 10+ years of experience in software engineering, with a strong focus on Neo4j and graph databases.
- Expertise in Neo4j database design, data modeling, and graph querying.
- Proficient in Java and Python programming for developing cloud-native applications.
- Strong experience with containerization tools like Docker and orchestration platforms like Kubernetes.
- Experience deploying and managing applications on AWS (EC2, ECS, RDS, etc.).
- Demonstrated ability to optimize and troubleshoot Neo4j databases in large-scale environments.

Preferred Qualifications:
- Neo4j Certification is highly desirable.
- Familiarity with CI/CD processes, automation tools, and DevOps best practices.
- Knowledge of additional cloud platforms like GCP or Azure.

Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
Posted 2 months ago
8 - 13 years
15 - 25 Lacs
Chennai
Remote
Qualifications / Skillsets
- Proficiency in cloud architecture design principles, patterns, and best practices.
- Understanding of and experience with configuring security, testing for reliability, and performance tuning on AWS.
- Hands-on experience with a range of AWS services (EC2, S3, EBS, EMR, etc.) and container-based architecture.
- Development of solutions (Java/.NET/Python) and serverless computing (e.g., AWS Lambda, Step Functions).
- Strong understanding of containerization technologies (e.g., Docker, Kubernetes, EKS).
- System security and cloud network security (VPC, IAM, KMS, etc.).
- Design and implementation of DevOps pipelines; proven expertise in automating integration and deployment pipelines.
- Experience with infrastructure as code (IaC) tools such as Terraform, CloudFormation, or Ansible.
- Experience with hybrid cloud environments and cloud migration strategies.
- Good interpersonal skills, diplomacy, and adaptability. Discretion and a committed sense of customer service are essential.
- Hands-on production experience in Linux or Windows system engineering.
- Hands-on production experience with AWS compute services: EC2, AMI, Lambda, Auto Scaling, Load Balancers, Spot Instances.
- Hands-on production experience with AWS storage services: S3, EFS, EBS, Glacier, Storage Gateway.
- Hands-on production experience with AWS security services: IAM, AWS Config, CloudTrail, WAF, KMS.
- Hands-on production experience with AWS network services: VPC, Subnets, Transit Gateway, VPN, VPC Endpoints.
- Hands-on production experience with AWS observability services: CloudWatch Alarms, CloudWatch Logs, CloudTrail, VPC Flow Logs, ECS/EKS enhanced monitoring.
- AWS Solutions Architect certification required.
Posted 3 months ago
3 - 5 years
3 - 4 Lacs
Chennai
Work from Office
Planning and designing cloud infrastructure with AWS. Technical experience with cloud and datacenter technologies, including private and public cloud. Deploying new cloud-based solutions like EC2, VPC, VPN, EFS, FSx, S3, SNS, CloudWatch, and SQS. Call 7397778272
Posted 3 months ago
2 - 5 years
4 - 7 Lacs
Hyderabad
Work from Office
Hands-on experience in Apache NiFi for data integration and workflow automation. Senior-level Java programming knowledge, including experience in developing custom NiFi processors and extensions. Strong knowledge of cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., S3, EC2, Lambda, Azure Data Lake, etc.). Proficiency in Linux environments, including shell scripting and system administration. Experience with Apache Kafka for real-time data streaming and event-driven architectures. Hands-on experience with MongoDB for NoSQL data management. Familiarity with GoldenGate for real-time data replication and integration. Experience in performance tuning and optimization of NiFi workflows. Solid understanding of data engineering concepts, including ETL/ELT, data lakes, and data warehouses. Ability to work independently and deliver results in a fast-paced, high-pressure environment. Excellent problem-solving, debugging, and analytical skills. Good-to-Have Skills: Experience with containerization tools like Docker and Kubernetes. Knowledge of DevOps practices and CI/CD pipelines. Familiarity with big data technologies like Hadoop, Spark, or Kafka. Understanding of security best practices for data pipelines and cloud environments.
Posted 3 months ago