8.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Experience: 8+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients - Netskope.)

What do you need for this opportunity?
Must-have skills: JMeter, Selenium, Automation Anywhere, API Testing, UI Testing, Java, Python, Golang

About Netskope
Today, there are more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one built in the cloud that follows and protects data wherever it goes, so we started Netskope to redefine cloud, network, and data security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive, and interactive. Visit us at Netskope Careers, and follow us on LinkedIn and Twitter @Netskope.

About the Role
Please note: this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based on their skills and experience. Netskope's API Protection Framework team designs and implements a scalable, elastic architecture that protects enterprise SaaS and IaaS application data. This is achieved by ingesting high-volume activity events in near real time and analyzing the data to provide security risk management for our customers, including data security, access control, threat prevention, data loss prevention, user coaching, and more.

What's in It for You
As a member of this team, you will work in an innovative, fast-paced environment with other experts to build cloud-native solutions using technologies such as Kubernetes, Helm, Prometheus, Grafana, Jaeger (OpenTracing), persistent messaging queues, SQL/NoSQL databases, and key-value stores. You will solve complex scaling problems and deploy and manage the solution in production. If you are driven by high-quality, high-velocity software delivery challenges, and by using innovative, cutting-edge solutions to achieve those goals, we would like to speak with you.

What You Will Be Doing
- Developing expertise in our cloud security solutions, and using that expertise and your experience to help design and qualify the solution as a whole
- Contributing to building a flexible and scalable automation solution
- Working closely with the development and design teams to help create an amazing user experience
- Helping to create and implement quality processes and requirements
- Working closely with the team to replicate customer environments
- Automating complex test suites
- Developing test libraries and coordinating their adoption
- Identifying and communicating risks about our releases
- Owning and making quality decisions for the solution
- Owning the release and being a customer advocate

Required Skills and Experience
- 8+ years of experience as an SDET, and a track record showing that you are a highly motivated individual capable of producing creative, innovative, working solutions in a collaborative environment
- Strong Java and/or Python programming skills (Go a plus)
- Knowledge of Jenkins, Hudson, or other CI systems
- Experience testing distributed systems
- A proponent of strong quality engineering methodology
- Strong knowledge of Linux systems, Docker, and Kubernetes
- Experience building automation frameworks
- Experience with databases, SQL and NoSQL (MongoDB or Cassandra), a plus
- Knowledge of network security, authentication, and authorization
- Comfort with ambiguity, and initiative regarding issues and decisions
- Proven ability to apply data structures and algorithms to practical problems

Education
BS in Computer Science or equivalent required; MS in Computer Science or equivalent strongly preferred.

How to Apply
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role is to help talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support you through any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 19 hours ago
5.0 years
0 Lacs
India
Remote
Experience: 5+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients - Netskope.)

What do you need for this opportunity?
Must-have skills: Java, Python, Golang, AWS, Google Cloud, Azure, MongoDB, PostgreSQL, Yugabyte, AuroraDB

About Netskope
Today, there are more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one built in the cloud that follows and protects data wherever it goes, so we started Netskope to redefine cloud, network, and data security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive, and interactive. Visit us at Netskope Careers, and follow us on LinkedIn and Twitter @Netskope.

About the Role
Please note: this team is hiring across all levels, and candidates are individually assessed and appropriately leveled based on their skills and experience. Netskope's API Protection team designs and implements a scalable, elastic architecture that protects enterprise SaaS and IaaS application data. This is achieved by ingesting high-volume activity events in near real time and analyzing the data to provide security risk management for our customers, including data security, access control, threat prevention, data loss prevention, user coaching, and more.

What's in It for You
As a member of this team, you will work in an innovative, fast-paced environment with other experts to build cloud-native solutions using technologies such as Kubernetes, Helm, Prometheus, Grafana, Jaeger (OpenTracing), persistent messaging queues, SQL/NoSQL databases, and key-value stores. You will solve complex scaling problems and deploy and manage the solution in production. If you are driven by high-quality, high-velocity software delivery challenges, and by using innovative, cutting-edge solutions to achieve those goals, we would like to speak with you.

What You Will Be Doing
- Architect and implement critical software infrastructure for distributed, large-scale, multi-cloud environments
- Review architectures and designs across the organization to help guide other engineers in building scalable cloud services
- Provide technical leadership and strategic direction for large-scale distributed cloud-native solutions
- Be a catalyst for improving engineering processes and ownership
- Research, incubate, and drive new technologies to ensure we are leveraging the latest innovations

Required Skills and Experience
- 5 to 15 years of software development experience
- Excellent programming experience with Go, C/C++, Java, or Python
- Experience building and delivering cloud microservices at scale
- Expert understanding of distributed systems, data structures, and algorithms
- A skilled problem solver, well versed in considering and making technical tradeoffs
- A strong communicator who can quickly pick up new concepts and domains
- Bonus points for Golang knowledge
- Production experience building, deploying, and managing microservices in Kubernetes or similar technologies is a bonus
- Production experience with cloud-native concepts and technologies related to CI/CD, orchestration (e.g., Helm charts), observability (e.g., Prometheus, OpenTracing), distributed databases, and messaging (REST, gRPC) is a bonus

Education
BS in Computer Science or equivalent required; MS in Computer Science or equivalent strongly preferred.

How to Apply
Step 1: Click on Apply, then register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers
Our goal is to make hiring reliable, simple, and fast. Our role is to help talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support you through any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 19 hours ago
8.0 years
0 Lacs
India
Remote
Netskope via Uplers, 8+ years SDET role; the description, requirements, and application steps are identical to the first 8-year SDET listing above.
Posted 19 hours ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Netskope via Uplers, 5+ years software engineer role; the description, requirements, and application steps are identical to the first 5-year software engineer listing above.
Posted 19 hours ago
5.0 years
0 Lacs
Agra, Uttar Pradesh, India
Remote
Netskope via Uplers, 5+ years software engineer role; the description, requirements, and application steps are identical to the first 5-year software engineer listing above.
Posted 20 hours ago
5.0 years
0 Lacs
Ghaziabad, Uttar Pradesh, India
Remote
Netskope via Uplers, 5+ years software engineer role; the description, requirements, and application steps are identical to the first 5-year software engineer listing above.
Posted 20 hours ago
8.0 years
0 Lacs
Ghaziabad, Uttar Pradesh, India
Remote
Experience : 8.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Netskope) What do you need for this opportunity? Must have skills required: JMeter, Selenium, Automation Anywhere, API Testing, UI Testing, Java, Python, Golang Netskope is Looking for: About Netskope Today, there's more data and users outside the enterprise than inside, causing the network perimeter as we know it to dissolve. We realized a new perimeter was needed, one that is built in the cloud and follows and protects data wherever it goes, so we started Netskope to redefine Cloud, Network and Data Security. Since 2012, we have built the market-leading cloud security company and an award-winning culture powered by hundreds of employees spread across offices in Santa Clara, St. Louis, Bangalore, London, Paris, Melbourne, Taipei, and Tokyo. Our core values are openness, honesty, and transparency, and we purposely developed our open desk layouts and large meeting spaces to support and promote partnerships, collaboration, and teamwork. From catered lunches and office celebrations to employee recognition events and social professional groups such as the Awesome Women of Netskope (AWON), we strive to keep work fun, supportive and interactive. Visit us at Netskope Careers. Please follow us on LinkedIn and Twitter@Netskope. About The Role Please note, this team is hiring across all levels and candidates are individually assessed and appropriately leveled based upon their skills and experience. Netskope's API Protection Framework team is responsible for designing and implementing a scalable and elastic architecture to provide protection for enterprise SaaS and IaaS application data. 
This is achieved by ingesting high-volume activity events in near real time and analyzing the data to provide security risk management for our customers, including data security, access control, threat prevention, data loss prevention, user coaching, and more.

What's In It For You
As a member of this team, you will work in an innovative, fast-paced environment with other experts to build cloud-native solutions using technologies like Kubernetes, Helm, Prometheus, Grafana, Jaeger (OpenTracing), persistent messaging queues, SQL/NoSQL databases, key-value stores, etc. You will solve complex scale problems, and deploy and manage the solution in production. If you are driven by high-quality, high-velocity software delivery challenges, and by using innovative and cutting-edge solutions to achieve these goals, we would like to speak with you.

What You Will Be Doing
- Developing expertise in our cloud security solutions, and using that expertise and your experience to help design and qualify the solution as a whole
- Contributing to building a flexible and scalable automation solution
- Working closely with the development and design team to help create an amazing user experience
- Helping to create and implement quality processes and requirements
- Working closely with the team to replicate customer environments
- Automating complex test suites
- Developing test libraries and coordinating their adoption
- Identifying and communicating risks about our releases
- Owning and making quality decisions for the solution
- Owning the release and being a customer advocate

Required Skills And Experience
- 8+ years of experience as an SDET, and a track record showing that you are a highly motivated individual, capable of coming up with creative, innovative, and working solutions in a collaborative environment
- Strong Java and/or Python programming skills (Go a plus)
- Knowledge of Jenkins, Hudson, or other CI systems
- Experience testing distributed systems
- A proponent of strong quality engineering methodology
- Strong knowledge of Linux systems, Docker, and Kubernetes
- Experience building automation frameworks
- Experience with databases, both SQL and NoSQL (MongoDB or Cassandra), a plus
- Knowledge of network security, authentication, and authorization
- Comfortable with ambiguity and taking the initiative regarding issues and decisions
- Proven ability to apply data structures and algorithms to practical problems

Education
BSCS or equivalent required; MSCS or equivalent strongly preferred
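The test-automation skills listed above typically involve writing small validation utilities. As a minimal illustrative sketch (the response shape and field names are hypothetical, not from the posting), an API-test helper might check a JSON payload against expected fields and status:

```python
import json


def validate_response(body, required_fields, expected_status, actual_status):
    """Return a list of human-readable failures for a JSON API response.

    An empty list means the response passed all checks.
    """
    failures = []
    if actual_status != expected_status:
        failures.append(f"status {actual_status} != expected {expected_status}")
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        # Body is unparseable; no point checking individual fields.
        return failures + ["body is not valid JSON"]
    for field in required_fields:
        if field not in payload:
            failures.append(f"missing field: {field}")
    return failures


# Hypothetical response from a /users endpoint
ok_body = '{"id": 1, "name": "test", "role": "admin"}'
failures = validate_response(ok_body, ["id", "name", "role"], 200, 200)
```

In a real suite, a framework such as pytest would call a helper like this once per endpoint and assert that the failure list is empty.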
Posted 20 hours ago
10.0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
About the Role:
As the SRE Architect for Flipkart's Reliability & Productivity Charter, you will own the vision and strategic roadmap for our Reliability charter, defining what "resilient at scale" means for Flipkart and how we measure success. You will architect and drive key platform initiatives including:
● Centralized Observability Stack: End-to-end design of metrics, tracing, logging, and alerting pipelines to give every engineering team a single pane of glass into system health.
● Public Cloud Management: Define best practices, guardrails, and automation for Flipkart's multi-region GCP footprint to ensure cost-effective, secure, and compliant operations.
● SRE Platform Innovations: Lead the architecture of chaos engineering (Chaos Platform), mass code migration (CodeLift with OpenRewrite), golden-image enforcement and artifact scanning (ImageScanning), and other next-generation reliability tools.
In this role, you will collaborate closely with engineering, product, and operations stakeholders to translate high-level reliability objectives into concrete, scalable systems and processes that empower thousands of engineers to build, deploy, and operate Flipkart's services with confidence.

About Flipkart's Reliability & Productivity Charter
Join a dynamic SRE team focused on elevating Flipkart's platform resilience, developer productivity, and operational excellence. We build and own the platforms and tooling that enable thousands of engineers to deliver high-quality features at scale and with confidence.
Key Responsibilities
● Architect & Design
○ Define the end-to-end architecture for centralized observability (metrics, tracing, logs, alerting) and ensure scalability, security, and cost-efficiency
○ Drive the technical roadmap for platforms such as Chaos Platform, CodeLift, and Image Scanning
○ Establish best-practice patterns (golden paths) for multi-region, multi-cloud deployments aligned with BCP/DR requirements
● Platform Delivery & Governance
○ Lead cross-functional design reviews, proofs of concept, and production rollouts for new platform components
○ Ensure robust standards for API design, data modeling, and service-level objectives (SLOs)
○ Define and enforce policy as code (e.g., quota management, image enforcement, CI/CD pipelines)
● Technology Leadership & Mentorship
○ Coach and guide SRE engineers and platform engineers on system design, reliability patterns, and performance optimizations
○ Evangelize "shift-left" practices: resilience testing, security scanning (Snyk, Artifactory integration), and automated feedback loops
○ Stay abreast of industry trends (service meshes, event stores, distributed tracing backends) and evaluate their applicability
● Performance & Capacity Planning
○ Collaborate with FinanceOps and CloudOps to optimize public cloud cost, capacity, and resource utilization
○ Define monitoring, alerting, and auto-remediation strategies to maintain healthy error budgets

What We're Looking For
● Experience & Expertise
○ 10+ years in large-scale distributed systems architecture, with at least 3 years in an SRE or platform engineering context
○ Hands-on mastery of observability stacks (Prometheus, OpenTelemetry, Jaeger/Zipkin, ELK/EFK, Grafana, Alertmanager)
○ Proven track record of designing chaos engineering frameworks and non-functional testing workflows
● Technical Skills
○ Deep knowledge of public cloud platforms (GCP preferred), container orchestration (Kubernetes), and IaC (Terraform, Helm)
○ Strong background in language-agnostic tooling (Go, Java, Python) and API-driven microservices architectures
○ Familiarity with OpenRewrite for mass code migration and vulnerability management tools (Snyk, Trivy)
● Leadership & Collaboration
○ Demonstrated ability to influence stakeholders across engineering, product, and operations teams
○ Excellent written and verbal communication, able to translate complex architectures into clear, actionable plans
○ Passion for mentoring and growing engineering talent in reliability and productivity best practices
Posted 1 day ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role/Job Title: Senior Developer
Function/Department: Information Technology

Job Purpose:
As a Backend Developer, you will play a crucial role in designing, developing, and maintaining complex backend systems. You will work closely with cross-functional teams to deliver high-quality software solutions and drive the technical direction of our projects. Your experience and expertise will be vital in ensuring the performance, scalability, and reliability of our applications.

Key Responsibilities:
- Design and Develop: Architect, design, and implement high-performance Java/Golang-based backend services and applications.
- Code Quality: Write clean, efficient, and well-documented code following industry best practices and coding standards.
- Technical Leadership: Provide technical guidance and mentorship to junior developers, promoting best practices and fostering a collaborative environment.
- Collaboration: Work closely with frontend developers, product managers, and other stakeholders to understand requirements and deliver robust solutions.
- Performance Optimization: Identify and resolve performance bottlenecks and scalability issues.
- Testing: Implement comprehensive testing strategies, including unit tests, integration tests, and end-to-end tests.
- Continuous Improvement: Stay current with the latest industry trends, technologies, and best practices in Java/Golang development, and continuously improve our development processes.

Primary Skills:
- 4+ years of professional experience in Java/Golang backend development.
- Expert proficiency in Java/Golang and related frameworks (e.g., Spring, Spring Boot, Gin).
- Extensive experience with RESTful API design and development.
- Strong knowledge of database technologies, including SQL, MySQL, PostgreSQL, or NoSQL databases.
- Deep understanding of object-oriented programming principles and design patterns.
- Experience with version control systems (e.g., Git).
- Familiarity with microservices architecture and cloud platforms (e.g., AWS, Azure, Google Cloud).
- Familiarity with GraphQL.
- Experience with CI/CD pipelines and tools (e.g., Jenkins, Docker).
- Proficiency in unit testing frameworks.

Secondary Skills:
- Familiarity with Jaeger for monitoring and tracing.
- Experience with containerization and orchestration tools (e.g., Kubernetes).
- Familiarity with agile development methodologies.
- Knowledge of security best practices and secure coding principles.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration abilities.
- Ability to work independently and manage multiple tasks effectively.

Education Qualification:
Graduation: Bachelor of Science (B.Sc) / Bachelor of Technology (B.Tech) / Bachelor of Computer Applications (BCA)
Post-Graduation: Master of Science (M.Sc) / Master of Technology (M.Tech) / Master of Computer Applications (MCA)
Posted 1 day ago
12.0 - 16.0 years
0 Lacs
karnataka
On-site
Join us as a Performance Testing Specialist in this key role, where you will be responsible for undertaking and enabling automated testing activities in all delivery models. You will have the opportunity to support teams in developing quality solutions and ensuring continuous integration for defect-free deployment of customer value. Working in a fast-paced environment, you will gain exposure by closely collaborating with various teams across the bank. This position is offered at the vice president level.

As a Quality Automation Specialist, you will play a crucial role in transforming testing processes by utilizing quality processes, tools, and methodologies to enhance control, accuracy, and integrity. Your responsibilities will include ensuring that new sprint deliveries within a release cycle continue to meet Non-Functional Requirements (NFRs) such as response time, throughput rate, and resource consumption.

In this collaborative role, you will lead debugging sessions with software providers, hardware providers, and internal teams to investigate findings and develop solutions. Additionally, you will evolve predictive and intelligent testing approaches based on automation and innovative testing products and solutions.

You will work closely with your team to define and refine the scope of manual and automated testing, and create automated test scripts, user documentation, and artifacts. Your decision-making process will be data-driven, focusing on return on investment and value measures that reflect thoughtful cost management. You will also play a key role in enabling the cross-skilling of colleagues in end-to-end automation testing.

To excel in this role, you should have a minimum of twelve years of experience in automated testing, particularly in an Agile development or Continuous Integration/Continuous Delivery (CI/CD) environment.
Proficiency in performance testing tools such as LoadRunner, Apache JMeter, or NeoLoad is essential, as is experience with AWS EKS containers and microservices architecture. Familiarity with monitoring and analyzing performance tests using tools like Grafana, Jaeger, and Graylog is also required.

Moreover, we are seeking candidates with expertise in end-to-end and automation testing using the latest tools recommended by the enterprise tooling framework. A background in designing, developing, and implementing automation frameworks in new environments is highly desirable. Effective communication skills to convey complex technical concepts to management-level colleagues, and strong collaboration and stakeholder management skills, are essential for success in this role.
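NFRs like the ones mentioned above (response time, throughput rate) are usually validated against percentile targets rather than averages. As an illustrative sketch only (the thresholds and function names are hypothetical, not part of the posting or any tool's API), a post-run analysis step over latency samples exported from a load test might look like:

```python
import math


def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (milliseconds)."""
    ordered = sorted(samples)
    k = math.ceil(p / 100 * len(ordered)) - 1
    return ordered[min(max(k, 0), len(ordered) - 1)]


def check_nfrs(latencies_ms, duration_s, p95_limit_ms=500, min_rps=5):
    """Check simple response-time and throughput NFRs for one test run.

    Returns (passed, report) where report holds the measured values.
    """
    p95 = percentile(latencies_ms, 95)
    rps = len(latencies_ms) / duration_s  # requests per second over the run
    passed = p95 <= p95_limit_ms and rps >= min_rps
    return passed, {"p95_ms": p95, "throughput_rps": rps}


# Hypothetical samples from a short run
ok, report = check_nfrs([120, 180, 210, 340, 480, 150, 200], duration_s=1)
```

A step like this can gate a CI pipeline, failing the build when a sprint delivery regresses against the agreed NFRs.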
Posted 2 days ago
4.0 - 12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Title: Google Cloud DevOps Engineer
Location: PAN India

The Opportunity:
Publicis Sapient is looking for a Cloud & DevOps Engineer to join our team of bright thinkers and enablers. You will use your problem-solving skills, craft, and creativity to design and develop infrastructure interfaces for complex business applications, and contribute ideas for improvements in Cloud and DevOps practices, delivering innovation through automation. We are on a mission to transform the world, and you will be instrumental in shaping how we do it with your ideas, thoughts, and solutions.

Your Impact / Responsibilities:
- Combine your technical expertise and problem-solving passion to work closely with clients, turning complex ideas into end-to-end solutions that transform our clients' business.
- Lead and support the engineering side of digital business transformations, with cloud, multi-cloud, security, observability, and DevOps as technology enablers.
- Build immutable infrastructure and maintain highly scalable, secure, and reliable cloud infrastructure that is optimized for performance and cost, and compliant with security standards to prevent security breaches.
- Enable our customers to accelerate their software development lifecycle and reduce the time-to-market for their products or services.
Your Skills & Experience:
- 4 to 12 years of experience in Cloud & DevOps, with a full-time Bachelor's/Master's degree (Science or Engineering preferred)
- Expertise in the DevOps and cloud tools below:
  - GCP (Compute, IAM, VPC, Storage, Serverless, Database, Kubernetes, Pub/Sub, Operations Suite)
  - Configuration and monitoring of DNS, app servers, load balancers, and firewalls for high-volume traffic
  - Extensive experience designing, implementing, and maintaining infrastructure as code, preferably using Terraform, or alternatively CloudFormation/ARM Templates/Deployment Manager/Pulumi
- Experience managing container infrastructure (on-prem and managed, e.g., AWS ECS, EKS, or GKE):
  - Design, implement, and upgrade container infrastructure, e.g., K8s clusters and node pools
  - Create and maintain deployment manifest files for microservices using Helm
  - Use the Istio service mesh to create gateways, virtual services, traffic routing, and fault injection
  - Troubleshoot and resolve container infrastructure and deployment issues
- Continuous Integration & Continuous Deployment:
  - Develop and maintain CI/CD pipelines for software delivery using Git and tools such as Jenkins, GitLab, CircleCI, Bamboo, and Travis CI
  - Automate build, test, and deployment processes to ensure efficient release cycles and enforce software development best practices, e.g., quality gates and vulnerability scans
  - Automate build and deployment processes using Groovy, Go, Python, Shell, and PowerShell
  - Implement DevSecOps practices and tools to integrate security into the software development and deployment lifecycle
  - Manage artifact repositories such as Nexus and JFrog Artifactory for version control and release management
- Design, implement, and maintain observability, monitoring, logging, and alerting using the tools below:
  - Observability: Jaeger, Kiali, CloudTrail, OpenTelemetry, Dynatrace
  - Logging: Elastic Stack (Elasticsearch, Logstash, Kibana), Fluentd, Splunk
  - Monitoring: Prometheus, Grafana, Datadog, New Relic

Good to Have:
- Associate-level public cloud certifications
- Terraform Associate certification

Benefits of Working Here:
- Gender-neutral policy
- 18 paid holidays throughout the year for NCR/BLR (22 for Mumbai)
- Generous parental leave and new-parent transition program
- Flexible work arrangements
- Employee Assistance Programs to help you with wellness and well-being

Learn more about us at www.publicissapient.com or explore other career opportunities here.
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
As a Site Reliability Engineer III at JPMorgan Chase within Corporate Technology - Capital Management, you play a crucial role in shaping the future of a globally recognized organization. Your impact is direct and significant in a sphere tailored for high achievers in site reliability. You will tackle complex and wide-ranging business challenges with simple and effective solutions through code and cloud infrastructure. Your responsibilities include configuring, maintaining, monitoring, and optimizing applications and associated infrastructure. You will independently break down and enhance existing solutions iteratively, making you a key contributor to your team.

Your primary responsibilities involve driving continuous enhancement of reliability, monitoring, and alerting for mission-critical microservices. You will automate tasks to reduce manual effort, creating reliable infrastructure and tools to expedite feature development. By developing and implementing metrics for microservices, defining user journeys, SLOs, and error budgets, and configuring dashboards and alerts, you ensure blameless post-mortems for permanent incident closure.

Collaboration with development teams throughout the software lifecycle is essential to enhance reliability and scale, design self-healing patterns, and implement infrastructure, configuration, and network as code. You will work closely with software engineers to design and implement deployment approaches using automated CI/CD pipelines and promote site reliability engineering best practices.

Your role involves demonstrating and advocating for a site reliability culture and practices, and leading initiatives to improve application and platform reliability and stability through data-driven analytics. Collaborating with team members to identify service level indicators, establish reasonable service level objectives, and proactively resolve issues before customer impact are critical aspects of your work.
Additionally, you will act as the main point of contact during major incidents, utilizing technical expertise to swiftly identify and resolve issues while sharing knowledge within the organization.

To excel in this role, you are required to have formal training or certification in site reliability concepts, along with at least 5 years of applied experience with public cloud platforms like AWS, Azure, or GCP. Proficiency in a programming language such as Python, Go, or Java/Spring Boot is necessary, with expertise in software design, coding, testing, and delivery. Experience with Kubernetes, cloud computing, and relational databases like Oracle or MySQL is preferred. You should possess excellent debugging and troubleshooting skills and familiarity with common SRE toolchains like Grafana, Prometheus, ELK Stack, Kibana, and Jaeger. Proficiency in continuous integration and continuous delivery tools such as Jenkins, GitLab, or Terraform, and observability tools like Dynatrace, Datadog, New Relic, CloudWatch, or Splunk, is also important.

Moreover, your skills should include familiarity with ETL tools like Databricks, experience with container and container orchestration technologies such as ECS, Kubernetes, and Docker, and deep proficiency in reliability, scalability, performance, security, enterprise system architecture, and toil reduction. You should be able to identify and solve problems related to complex data structures and algorithms, troubleshoot common networking technologies and issues, and be driven to self-educate and evaluate new technologies. Teaching new programming languages to team members, contributing to large and collaborative teams, recognizing roadblocks proactively, and showing interest in learning technology that drives innovation are further expectations of this role.
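The SLOs and error budgets referenced in this posting reduce to simple arithmetic: an availability SLO fixes the fraction of failures a service is allowed over a window, and the budget is spent as failures accrue. A minimal sketch, using illustrative numbers not taken from the posting:

```python
def error_budget_minutes(slo, window_minutes=30 * 24 * 60):
    """Total allowed downtime (minutes) for an availability SLO over a window.

    Defaults to a 30-day window, a common SLO reporting period.
    """
    return (1 - slo) * window_minutes


def budget_remaining(slo, total_requests, failed_requests):
    """Fraction of a request-based error budget still unspent (can go negative)."""
    allowed_failures = (1 - slo) * total_requests
    return 1 - failed_requests / allowed_failures


# A 99.9% SLO over 30 days allows roughly 43 minutes of downtime.
downtime_budget = error_budget_minutes(0.999)

# With 1M requests and 250 failures against a 99.9% SLO,
# about three quarters of the budget is still unspent.
remaining = budget_remaining(0.999, total_requests=1_000_000, failed_requests=250)
```

Alerting on the rate at which `budget_remaining` falls (burn rate) is what lets a team catch incidents before the budget is exhausted.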
Posted 3 days ago
0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the Role
We are looking for an experienced DevOps Engineer to join our engineering team. This role involves setting up, managing, and scaling development, staging, and production environments, both on AWS cloud and on-premise (open-source stack). You will be responsible for CI/CD pipelines, infrastructure automation, monitoring, container orchestration, and model deployment workflows for our enterprise applications and AI platform.

Key Responsibilities

Infrastructure Setup & Management
- Design and implement cloud-native architectures on AWS, and manage on-premise open-source environments when required.
- Automate infrastructure provisioning using tools like Terraform or CloudFormation.
- Maintain scalable environments for dev, staging, and production.

CI/CD & Release Management
- Build and maintain CI/CD pipelines for backend, frontend, and AI workloads.
- Enable automated testing, security scanning, and artifact deployments.
- Manage configuration and secret management across environments.

Containerization & Orchestration
- Manage Docker-based containerization and Kubernetes clusters (EKS, self-managed K8s).
- Implement service mesh, auto-scaling, and rolling updates.

Monitoring, Security, and Reliability
- Implement observability (logging, metrics, tracing) using open-source or cloud tools.
- Ensure security best practices across infrastructure, pipelines, and deployed services.
- Troubleshoot incidents, manage disaster recovery, and support high availability.

Model DevOps / MLOps
- Set up pipelines for AI/ML model deployment and monitoring (LLMOps).
- Support data pipelines, vector databases, and model hosting for AI applications.

Required Skills and Qualifications

Cloud & Infra
- Strong expertise in AWS services: EC2, ECS/EKS, S3, IAM, RDS, Lambda, API Gateway, etc.
- Ability to set up and manage on-premise or hybrid environments using open-source tools.

DevOps & Automation
- Hands-on experience with Terraform/CloudFormation.
Strong skills in CI/CD tools such as GitHub Actions, Jenkins, GitLab CI/CD, or ArgoCD. Containerization & Orchestration Expertise with Docker and Kubernetes (EKS or self-hosted). Familiarity with Helm charts, service mesh (Istio/Linkerd). Monitoring / Observability Tools Experience with Prometheus, Grafana, ELK/EFK stack, CloudWatch . Knowledge of distributed tracing tools like Jaeger or OpenTelemetry. Security & Compliance Understanding of cloud security best practices . Familiarity with tools like Vault, AWS Secrets Manager. Model DevOps / MLOps Tools (Preferred) Experience with MLflow, Kubeflow, BentoML, Weights & Biases (W&B) . Exposure to vector databases (pgvector, Pinecone) and AI pipeline automation . Preferred Qualifications Knowledge of cost optimization for cloud and hybrid infrastructures . Exposure to infrastructure as code (IaC) best practices and GitOps workflows. Familiarity with serverless and event-driven architectures . Education Bachelors degree in Computer Science, Engineering, or related field (or equivalent experience). What We Offer Opportunity to work on modern cloud-native systems and AI-powered platforms . Exposure to hybrid environments (AWS and open source on-prem). Competitive salary, benefits, and growth-oriented culture. Show more Show less
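Rolling updates and auto-scaling of the kind listed above can be sketched with minimal Kubernetes manifests; all names, the image, and the thresholds below are illustrative placeholders, not this employer's actual configuration:

```yaml
# Hypothetical Deployment with a zero-downtime rolling-update strategy,
# paired with a HorizontalPodAutoscaler that scales on CPU utilization.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api
spec:
  replicas: 3
  selector:
    matchLabels: {app: web-api}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod during a rollout
      maxUnavailable: 0    # never drop below the desired replica count
  template:
    metadata:
      labels: {app: web-api}
    spec:
      containers:
        - name: web-api
          image: example.registry/web-api:1.0.0   # placeholder image
          resources:
            requests: {cpu: 250m, memory: 256Mi}
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-api
spec:
  scaleTargetRef: {apiVersion: apps/v1, kind: Deployment, name: web-api}
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target: {type: Utilization, averageUtilization: 70}
```

Setting `maxUnavailable: 0` trades rollout speed for availability, which is the usual choice for production services behind a load balancer.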
Posted 3 days ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Join Inito's DevOps team, playing a crucial role in building, maintaining, and scaling our cloud infrastructure and operational excellence. This role offers a unique opportunity to contribute across development and operations, streamlining processes, enhancing system reliability, and strengthening our security posture. You will work closely with engineering, data science, and other cross-functional teams in a fast-paced, growth-oriented environment.
Responsibilities
- Assist in managing and maintaining cloud infrastructure on AWS, GCP, and on-premise compute (including bare-metal servers).
- Support and improve CI/CD pipelines, contributing to automated deployment processes.
- Contribute to automation efforts through scripting, reducing manual toil and improving efficiency.
- Monitor system health and logs, assisting in troubleshooting and resolving operational issues.
- Develop a deep understanding of how our applications work, including memory and disk usage patterns, database interactions, and overall resource consumption, to ensure performance and stability.
- Participate in incident response and post-mortem analysis, contributing to faster resolution and preventing recurrence.
- Support the implementation of, and adherence to, cloud security best practices (e.g., IAM, network policies).
- Assist in maintaining and evolving Infrastructure as Code (IaC) solutions.
Requirements
- Cloud Platforms: At least 2 years of hands-on experience with Amazon Web Services (AWS) and/or Google Cloud Platform (GCP), including core compute, storage, networking, and database services (e.g., EC2, S3, VPC, RDS, GCE, GCS, Cloud SQL).
- On-Premise Infrastructure: Setup, automation, and management.
- Operating Systems: Proficiency in Linux environments and shell scripting (Bash).
- Scripting/Programming: Foundational knowledge and practical experience with Python for automation.
- Containerization: Familiarity with Docker concepts and practical usage. Basic understanding of container orchestration concepts (e.g., Kubernetes).
- CI/CD: Understanding of Continuous Integration/Continuous Delivery principles and experience with at least one CI/CD tool (e.g., Jenkins, GitLab CI, CircleCI, GitHub Actions). Familiarity with build and release automation concepts.
- Version Control: Solid experience with Git for code management.
- Monitoring: Experience with basic monitoring and alerting tools (e.g., AWS CloudWatch, Grafana). Familiarity with log management concepts.
- Networking: Basic understanding of networking fundamentals (DNS, Load Balancers, VPCs).
- Infrastructure as Code (IaC): Basic understanding of IaC principles.
Good To Have Skills & Qualifications
- Cloud Platforms: Hands-on experience with both AWS and GCP.
- Hybrid & On-Premise Cloud Architectures: Hands-on experience with VMware vSphere, Oracle OCI, or any on-premises infrastructure platform.
- Infrastructure as Code (IaC): Hands-on experience with Terraform or AWS CloudFormation.
- Container Orchestration: Hands-on experience with Kubernetes (EKS, GKE).
- Databases: Familiarity with PostgreSQL and Redis administration and optimization.
- Security Practices: Exposure to security practices like SAST/SCA, or familiarity with IAM best practices beyond the basics. Awareness of secrets management concepts (e.g., HashiCorp Vault, AWS Secrets Manager) and vulnerability management processes.
- Observability Stacks: Experience with centralized logging (e.g., ELK Stack, Loki) or distributed tracing (e.g., Jaeger, Zipkin, Tempo).
- Serverless: Familiarity with serverless technologies (e.g., AWS Lambda, Google Cloud Functions).
- On-call/Incident Management Tools: Familiarity with on-call rotation and incident management tools (e.g., PagerDuty).
- DevOps Culture: A strong passion for automation, continuous improvement, and knowledge sharing.
- Configuration Management: Experience with tools like Ansible for automating software provisioning, configuration management, and application deployment, especially in on-premise environments.
Soft Skills
- Strong verbal and written communication skills, with an ability to collaborate effectively across technical and non-technical teams.
- Excellent problem-solving abilities and a proactive, inquisitive mindset.
- Eagerness to learn new technologies and adapt to evolving environments.
- Ability to work independently and contribute effectively as part of a cross-functional team.
This job was posted by Ronald J from Inito.
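The CI/CD responsibilities described above can be sketched as a minimal GitHub Actions workflow; the registry, secret name, and build steps are hypothetical examples, not Inito's actual pipeline:

```yaml
# Hypothetical CI workflow: test, build a Docker image, and push it on main.
name: ci
on:
  push:
    branches: [main]
jobs:
  build-test-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run unit tests
        run: |
          pip install -r requirements.txt
          pytest
      - name: Build container image
        run: docker build -t registry.example.com/app:${{ github.sha }} .
      - name: Push image (registry credentials assumed to live in repo secrets)
        run: |
          echo "${{ secrets.REGISTRY_TOKEN }}" | \
            docker login registry.example.com -u ci --password-stdin
          docker push registry.example.com/app:${{ github.sha }}
```

Tagging images with the commit SHA keeps deployments traceable back to the exact source revision, a common convention in release automation.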
Posted 3 days ago
10.0 - 14.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Java Developer specializing in observability and telemetry, your role will involve designing, developing, and maintaining Java-based microservices and applications. Your focus will be on implementing best practices for instrumenting, collecting, analyzing, and visualizing telemetry data to monitor and troubleshoot system behavior and performance. Collaboration with cross-functional teams will be key as you integrate observability solutions into the software development lifecycle, including CI/CD pipelines and automated testing frameworks. By driving improvements in system reliability, scalability, and performance through data-driven insights and continuous feedback loops, you will play a crucial role in ensuring our systems remain innovative. Staying up-to-date with emerging technologies and industry trends in observability, telemetry, and distributed systems is essential to keep our systems at the forefront of innovation. Moreover, mentoring junior developers and providing technical guidance and expertise in observability and telemetry practices will be part of your responsibilities. To excel in this role, you should have a Bachelor's or master's degree in computer science, engineering, or a related field, along with over 10 years of professional experience in software development with a strong emphasis on Java programming. Expertise in observability and telemetry tools such as Prometheus, Grafana, Jaeger, ELK stack (Elasticsearch, Logstash, Kibana), and distributed tracing is required. A solid understanding of microservices architecture, containerization (Docker, Kubernetes), and cloud-native technologies (AWS, Azure, GCP) is essential. You should also demonstrate proficiency in designing and implementing scalable, high-performance, and fault-tolerant systems, coupled with strong analytical and problem-solving skills to troubleshoot complex issues effectively. 
Excellent communication and collaboration skills are paramount for success in this role, enabling you to work efficiently in a fast-paced, agile environment. Experience with Agile methodologies and DevOps practices would be advantageous in fulfilling your responsibilities effectively.
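Alerting on telemetry of the kind this role covers is often codified as Prometheus rules; here is a minimal sketch, assuming a conventional `http_requests_total` counter (the metric name, labels, and thresholds are illustrative, not part of this posting):

```yaml
# Hypothetical Prometheus alerting rule: page when the HTTP 5xx ratio
# stays above 5% for ten minutes.
groups:
  - name: service-health
    rules:
      - alert: HighErrorRate
        expr: |
          sum(rate(http_requests_total{status=~"5.."}[5m]))
            / sum(rate(http_requests_total[5m])) > 0.05
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "HTTP 5xx ratio above 5% for 10 minutes"
```

Using a ratio of rates rather than a raw count makes the alert robust to traffic volume, which is why this pattern recurs in observability setups.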
Posted 4 days ago
10.0 - 14.0 years
0 Lacs
Haryana
On-site
You lead the way. We've got your back. With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities, and each other. Here, you'll learn and grow as we help you create a career journey that's unique and meaningful to you with benefits, programs, and flexibility that support you personally and professionally. At American Express, you'll be recognized for your contributions, leadership, and impact. Every colleague has the opportunity to share in the company's success. Together, we'll win as a team, striving to uphold our company values and powerful backing promise to provide the world's best customer experience every day. And we'll do it with the utmost integrity, in an environment where everyone is seen, heard, and feels like they belong. Join Team Amex and let's lead the way together. About Enterprise Architecture: Enterprise Architecture is an organization within the Chief Technology Office at American Express and is a key enabler of the company's technology strategy. The four pillars of Enterprise Architecture include: - Architecture as Code: This pillar owns and operates foundational technologies leveraged by engineering teams across the enterprise. - Architecture as Design: This pillar includes the solution and technical design for transformation programs and business critical projects requiring architectural guidance and support. - Governance: Responsible for defining technical standards and developing innovative tools that automate controls to ensure compliance. - Colleague Enablement: Focused on colleague development, recognition, training, and enterprise outreach. What you will be working on: We are looking for a Senior Engineer to join our Enterprise Architecture team. 
In this role, you will be designing and implementing highly scalable real-time systems following best practices and using cutting-edge technology. This role is best suited for experienced engineers with a broad skillset who are open, curious, and willing to learn. Qualifications: What you will Bring: - Bachelor's degree in computer science, computer engineering, or a related field, or equivalent experience. - 10+ years of progressive experience demonstrating strong architecture, programming, and engineering skills. - Firm grasp of data structures and algorithms, with fluency in programming languages like Java, Kotlin, Go. - Ability to lead, partner, and collaborate cross-functionally across engineering organizations. - Experience in building real-time, large-scale, high-volume distributed data pipelines on top of data buses (Kafka). - Hands-on experience with large-scale distributed NoSQL databases like Elasticsearch. - Knowledge and/or experience with containerized environments, Kubernetes, Docker. - Knowledge and/or experience with public cloud platforms like AWS, GCP. - Experience in implementing and maintaining highly scalable microservices in REST, gRPC. - Experience in working with infrastructure layers like service mesh, Istio, Envoy. - Appetite for trying new things and building rapid POCs. Preferred Qualifications: - Knowledge of Observability concepts like Tracing, Metrics, Monitoring, Logging. - Knowledge of Prometheus. - Knowledge of OpenTelemetry / OpenTracing. - Knowledge of observability tools like Jaeger, Kibana, Grafana, etc. - Open-source community involvement. We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: - Competitive base salaries. - Bonus incentives. - Support for financial well-being and retirement.
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location). - Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need. - Generous paid parental leave policies (depending on your location). - Free access to global on-site wellness centers staffed with nurses and doctors (depending on location). - Free and confidential counseling support through our Healthy Minds program. - Career development and training opportunities. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
Posted 1 week ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description
66degrees is seeking a Senior Consultant with specialized expertise in AWS. This consultant will lead and scale cloud infrastructure, ensuring high availability, automation, and security across AWS, GCP, and Kubernetes environments. You will be responsible for designing and maintaining highly scalable, resilient, and cost-optimized infrastructure while implementing best-in-class DevOps practices, CI/CD pipelines, and observability solutions. As a key part of our client's platform engineering team, you will collaborate closely with developers, SREs, and security teams to automate workflows, optimize cloud performance, and build the backbone of their microservices architecture. Candidates should have the ability to overlap with US working hours, be open to occasional weekend work, and be local to offices in either Noida or Gurgaon, India, as this is an in-office opportunity.
Qualifications
- 7+ years of hands-on DevOps experience with proven expertise in AWS; involvement in SRE or Platform Engineering roles is desirable.
- Experience handling high-throughput workloads with occasional spikes.
- Prior industry experience with live sports and media streaming.
- Deep knowledge of Kubernetes architecture, managing workloads, networking, RBAC, and autoscaling is required.
- Expertise in the AWS platform with hands-on VPC, IAM, EC2, Lambda, RDS, EKS, and S3 experience is required; the ability to learn GCP with GKE is desired.
- Experience with Terraform for automated cloud provisioning; Helm is desired.
- Experience with FinOps principles for cost optimization in cloud environments is required.
- Hands-on experience building highly automated CI/CD pipelines using Jenkins, ArgoCD, and GitHub Actions.
- Hands-on experience with service mesh technologies (Istio, Linkerd, Consul) is required.
- Knowledge of monitoring tools such as CloudWatch and Google Logging, and distributed tracing tools like Jaeger; experience with Prometheus and Grafana is desirable.
- Proficiency in Python and/or Go for automation, infrastructure tooling, and performance tuning is highly desirable.
- Strong knowledge of DNS, routing, load balancing, VPN, firewalls, WAF, TLS, and IAM.
- Experience managing MongoDB, Kafka, or Pulsar for large-scale data processing is desirable.
- Proven ability to troubleshoot production issues, optimize system performance, and prevent downtime.
- Knowledge of multi-region disaster recovery and high-availability architectures.
Desired
- Contributions to open-source DevOps projects or a strong technical blogging presence.
- Experience with KEDA-based autoscaling in Kubernetes. (ref:hirist.tech)
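The KEDA-based autoscaling mentioned above can be sketched as a ScaledObject that scales a consumer workload on Kafka lag; the Deployment name, broker address, and topic below are hypothetical placeholders:

```yaml
# Hypothetical KEDA ScaledObject: scale an events consumer on Kafka consumer lag.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: events-consumer
spec:
  scaleTargetRef:
    name: events-consumer          # the Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 20
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: kafka.example.svc:9092
        consumerGroup: events-consumer
        topic: events
        lagThreshold: "100"        # target lag per replica
```

Unlike a CPU-based HPA, scaling on queue lag tracks the actual backlog, which suits the spiky, high-throughput workloads this role describes.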
Posted 1 week ago
7.0 - 12.0 years
9 - 14 Lacs
Pune
Work from Office
Here is how, through this exciting role, YOU will contribute to BMC's and your own success:
- Participate in all aspects of SaaS product development, from requirements analysis to product release and sustaining
- Drive the adoption of the DevOps process and tools across the organization
- Learn and implement cutting-edge technologies and tools to build best-of-class enterprise SaaS solutions
- Deliver high-quality enterprise SaaS offerings on schedule
- Develop the Continuous Delivery pipeline
- Initiate projects and ideas to improve the team's results
- On-board and mentor new employees
To ensure you're set up for success, you will bring the following skillset & experience:
- You can embrace, live and breathe our BMC values every day!
- You have at least 7 years of experience in a DevOps/SRE role
- You have experience as a Tech Lead
- You have implemented CI/CD pipelines with best practices
- You have experience in Kubernetes
- You have knowledge of AWS/Azure cloud implementation
- You have worked with GIT repositories and JIRA
- You are passionate about quality and demonstrate creativity and innovation in enhancing the product
- You are a problem-solver with good analytical skills
- You are a team player with effective communication skills
Whilst these are nice to have, our team can help you develop the following skills:
- SRE practices
- GitHub/Spinnaker/Jenkins/Maven/JIRA etc.
- Automation playbooks (Ansible)
- Infrastructure-as-Code (IaC) using Terraform/CloudFormation templates/ARM templates
- Scripting in Bash/Python/Go
- Microservices, database, and API implementation
- Monitoring tools such as Prometheus/Jaeger/Grafana/AppDynamics, DataDog, Nagios, etc.
- Agile/Scrum process
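The Ansible automation playbooks listed among the nice-to-have skills can be sketched as follows; the host group, package, and template path are illustrative assumptions, not BMC's actual configuration:

```yaml
# Hypothetical Ansible playbook: install nginx and deploy its config,
# reloading the service only when the configuration changes.
- name: Configure web tier
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Deploy site configuration from a Jinja2 template
      ansible.builtin.template:
        src: templates/site.conf.j2
        dest: /etc/nginx/conf.d/site.conf
      notify: Reload nginx
  handlers:
    - name: Reload nginx
      ansible.builtin.service:
        name: nginx
        state: reloaded
```

Because handlers only fire when a task reports a change, repeated runs are idempotent, the core property that makes playbooks safe for configuration management.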
Posted 1 week ago
10.0 years
3 - 10 Lacs
Gurgaon
On-site
You Lead the Way. We’ve Got Your Back. With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities and each other. Here, you’ll learn and grow as we help you create a career journey that’s unique and meaningful to you with benefits, programs, and flexibility that support you personally and professionally. At American Express, you’ll be recognized for your contributions, leadership, and impact—every colleague has the opportunity to share in the company’s success. Together, we’ll win as a team, striving to uphold our company values and powerful backing promise to provide the world’s best customer experience every day. And we’ll do it with the utmost integrity, and in an environment where everyone is seen, heard and feels like they belong. Join Team Amex and let's lead the way together. About Enterprise Architecture: Enterprise Architecture is an organization within the Chief Technology Office at American Express and it is a key enabler of the company’s technology strategy. The four pillars of Enterprise Architecture include: 1. Architecture as Code : this pillar owns and operates foundational technologies that are leveraged by engineering teams across the enterprise. 2. Architecture as Design : this pillar includes the solution and technical design for transformation programs and business critical projects which need architectural guidance and support. 3. Governance : this pillar is responsible for defining technical standards, and developing innovative tools that automate controls to ensure compliance. 4. Colleague Enablement: this pillar is focused on colleague development, recognition, training, and enterprise outreach. What you will be working on: We are looking for a Senior Engineer to join our Enterprise Architecture team. 
In this role, you will be designing and implementing highly scalable real-time systems following best practices and using cutting-edge technology. This role is best suited for experienced engineers with a broad skillset who are open, curious, and willing to learn.
Qualifications: What you will Bring:
- Bachelor's degree in computer science, computer engineering, or a related field, or equivalent experience
- 10+ years of progressive experience demonstrating strong architecture, programming, and engineering skills
- Firm grasp of data structures and algorithms, with fluency in programming languages like Java, Kotlin, Go
- Demonstrated ability to lead, partner, and collaborate cross-functionally across many engineering organizations
- Experience building real-time, large-scale, high-volume distributed data pipelines on top of data buses (Kafka)
- Hands-on experience with large-scale distributed NoSQL databases like Elasticsearch
- Knowledge and/or experience with containerized environments, Kubernetes, Docker
- Knowledge and/or experience with any of the public cloud platforms like AWS, GCP
- Experience implementing and maintaining highly scalable microservices in REST, gRPC
- Experience working with infrastructure layers like service mesh, Istio, Envoy
- Appetite for trying new things and building rapid POCs
Preferred Qualifications:
- Knowledge of observability concepts like Tracing, Metrics, Monitoring, Logging
- Knowledge of Prometheus
- Knowledge of OpenTelemetry / OpenTracing
- Knowledge of observability tools like Jaeger, Kibana, Grafana, etc.
- Open-source community involvement
We back you with benefits that support your holistic well-being so you can be and deliver your best.
This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
- Competitive base salaries
- Bonus incentives
- Support for financial well-being and retirement
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
- Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need
- Generous paid parental leave policies (depending on your location)
- Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
- Free and confidential counseling support through our Healthy Minds program
- Career development and training opportunities
American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
Posted 1 week ago
5.0 years
15 - 18 Lacs
Hyderābād
On-site
Site Reliability Engineer We are building scalable, reliable, and high-performance cloud-native applications on Microsoft Azure. We are seeking a talented and passionate Site Reliability Engineer (SRE) to join our team, focusing on establishing robust observability with OpenTelemetry and driving operational excellence across our Azure infrastructure. Role Overview: As an SRE with OpenTelemetry and Azure expertise, you will play a critical role in ensuring the availability, performance, and scalability of our production systems. You will be responsible for designing, implementing, and maintaining our observability stack using OpenTelemetry standards, integrating it seamlessly with Azure services, and applying SRE principles to build resilient and efficient systems. You will work closely with development teams to embed reliability from the ground up, automate operational tasks, and respond to incidents with speed and precision. Requirements Key Responsibilities: OTEL Monitoring Setup & Observability: Design, implement, and manage a comprehensive observability platform using OpenTelemetry for distributed tracing, metrics, and logs across our microservices and applications. Ensure full instrumentation of applications (e.g., Java, Python, Node.js) to capture end-to-end telemetry data. Configure and optimize OpenTelemetry Collectors to receive, process, and export telemetry data to various backends (e.g., Prometheus, Grafana, Application Insights, Jaeger, Loki, Tempo and Azure Monitor). Develop custom instrumentation and semantic conventions to enhance monitoring capabilities and provide deeper insights into application behavior. Establish robust alerting and anomaly detection based on OpenTelemetry signals, utilizing tools like Azure Monitor, Prometheus Alert manager, or similar. Create informative and actionable dashboards (e.g., Grafana, Azure Dashboards) for real-time system insights, performance monitoring, and incident response. 
- Continuously evaluate and integrate new OpenTelemetry features and best practices to improve our observability posture.

Azure SRE Capabilities:
- Reliability & Performance Engineering: Monitor system performance, reliability, and availability metrics across Azure services. Identify bottlenecks, anticipate scaling needs, and implement strategies to reduce downtime and improve performance.
- Incident Management & Response: Participate in on-call rotations, lead incident response efforts, conduct thorough root cause analysis (RCA), and implement preventative measures to minimize recurrence. Develop and maintain runbooks and playbooks for effective incident resolution.
- Automation & Infrastructure as Code (IaC): Automate repetitive operational tasks, deployments, and infrastructure provisioning using Azure DevOps, Terraform, Azure Bicep, PowerShell, or Bash scripting.
- CI/CD Integration: Integrate observability checks and validation steps into CI/CD pipelines to ensure the reliability and performance of new releases.
- Capacity Planning & Cost Optimization: Conduct capacity planning, analyze usage patterns, and optimize Azure resources for cost efficiency, performance, and scalability.
- Security & Compliance: Implement and enforce security best practices within Azure environments, collaborate with security teams, and ensure adherence to relevant compliance standards.
- Collaboration & Mentorship: Work closely with development teams to foster a culture of reliability, provide guidance on observability best practices, and share knowledge across the organization.

Required Skills and Experience:
- 5+ years of experience in a Site Reliability Engineering (SRE), DevOps, or similar infrastructure-focused role.
- Deep practical experience with OpenTelemetry (OTEL) for instrumenting, collecting, processing, and exporting traces, metrics, and logs.
- Strong proficiency in Azure cloud services and their monitoring capabilities (Azure Monitor, Log Analytics, Application Insights).
- Hands-on experience with Infrastructure as Code (IaC) tools such as Terraform, Azure Bicep, or ARM templates.
- Solid scripting and automation skills (e.g., Python, PowerShell, Bash).
- Experience with containerization technologies (Docker) and orchestration platforms (Kubernetes/AKS).
- Expertise with observability backends such as Grafana, Alloy, Loki, Tempo, Prometheus, and Jaeger.
- Strong understanding of distributed systems, microservices architectures, and cloud-native principles.
- Excellent problem-solving, analytical, and troubleshooting skills.
- Strong communication and collaboration abilities.

Preferred Qualifications:
- Azure certifications (e.g., AZ-104 Azure Administrator, AZ-400 Azure DevOps Engineer Expert).
- Experience with chaos engineering practices.
- Understanding of Service Level Objectives (SLOs), Service Level Indicators (SLIs), and error budgets.
- Familiarity with database monitoring (e.g., PostgreSQL, Azure SQL).
- Experience in a high-availability, regulated, or customer-facing environment.

Education: Bachelor's degree in Computer Science, Information Technology, or a related technical field, or equivalent practical experience.

Job Type: Full-time
Pay: ₹130,000.00 - ₹150,000.00 per month
Experience:
- Site Reliability Engineering: 7 years (Required)
- DevOps: 6 years (Required)
- OpenTelemetry: 5 years (Required)
- Azure cloud services: 6 years (Required)
- Orchestration platforms (Kubernetes/AKS): 5 years (Required)
Work Location: In person
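The SLO and error-budget concepts mentioned in the preferred qualifications come down to simple arithmetic. As a stdlib-only sketch (the SLO target and window below are made-up example values, not ones from this posting):

```python
def error_budget_minutes(slo_target: float, window_days: int = 30) -> float:
    """Allowed downtime, in minutes, for a given availability SLO over a window."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo_target)

def budget_remaining(slo_target: float, observed_downtime_min: float,
                     window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative means it is blown)."""
    budget = error_budget_minutes(slo_target, window_days)
    return (budget - observed_downtime_min) / budget

# A 99.9% availability SLO over a 30-day window allows roughly 43.2 minutes
# of downtime; 21.6 minutes of observed downtime leaves half the budget.
print(error_budget_minutes(0.999))      # ≈ 43.2
print(budget_remaining(0.999, 21.6))    # ≈ 0.5
```

Teams typically gate risky releases on the remaining budget: when it approaches zero, change velocity slows in favor of reliability work.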
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Backend Developer at Tarento, a fast-growing technology consulting company, you will be a key part of our dynamic team that specializes in digital transformation, product engineering, and enterprise solutions for clients globally. We are known for combining Nordic values with Indian expertise to deliver innovative and high-impact solutions across industries including retail, manufacturing, and healthcare. Recognized as a Great Place to Work, Tarento fosters an inclusive culture that values ideas, promotes continuous learning, and prioritizes employee well-being and growth. Joining our collaborative environment means being part of a team where your passion and purpose can drive your career forward.

Your role will involve designing, developing, and maintaining scalable backend systems using your expertise in Python, FastAPI, and SQL, along with familiarity with technologies such as Docker, Jenkins, Redis, Kafka, and Hive. You will work closely with cross-functional teams to deliver new features and APIs, gather requirements from stakeholders, and ensure seamless integration of functionality. Troubleshooting production issues, monitoring system performance, and automating operational tasks will be integral parts of your responsibilities.

To succeed in this role, you should be proficient in Python, experienced with FastAPI or similar frameworks, and have a strong grasp of SQL and relational databases. Your knowledge of containerization, CI/CD pipelines, RESTful APIs, and monitoring tools such as New Relic and Jaeger will be crucial. The ability to conduct unit and integration testing, collaborate in code reviews, and document processes will be essential to maintaining code quality and performance. We are looking for individuals with strong problem-solving skills, attention to detail, and excellent communication abilities who can thrive in our fast-paced environment.

Experience with microservices architecture or Agile/Scrum methodologies, or a background in a startup or product-based environment, would be a definite plus. Join us at Tarento, where your talent and skills can make a real difference in shaping the future of technology solutions for our diverse client base.
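To illustrate the Python-plus-SQL work this role centers on, here is a minimal stdlib-only sketch using sqlite3 as a stand-in for a relational database. The schema, table, and values are entirely hypothetical; a production service would use the actual database driver and connection pooling.

```python
import sqlite3

# In-memory database stands in for the relational store (illustrative schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer, total) VALUES (?, ?)",
    [("acme", 120.0), ("acme", 80.0), ("globex", 50.0)],
)

# Parameterized aggregate query: total spend per customer above a threshold.
# Placeholders (?) keep user input out of the SQL string, avoiding injection.
rows = conn.execute(
    "SELECT customer, SUM(total) AS spend FROM orders "
    "GROUP BY customer HAVING spend > ? ORDER BY spend DESC",
    (60.0,),
).fetchall()
print(rows)  # [('acme', 200.0)]
```

In a FastAPI service the query would typically live behind an endpoint or repository function, with the threshold arriving as a validated request parameter.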
Posted 1 week ago