
994 Gitops Jobs - Page 27

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

7.0 years

40 Lacs

Kolkata, West Bengal, India

Remote

Experience: 7.00+ years
Salary: INR 4,000,000 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients, MatchMove.)

Must-have skills: GenAI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for: Technical Lead - Data Platform

As Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation (see the sketch after this listing).
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement model: Direct placement with the client. This is a remote role. Shift timings: 10 AM to 7 PM.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview.

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
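To ground the open-table-format and partitioning requirements above, here is a minimal, hedged PySpark sketch (not from the posting) that writes a partitioned Apache Iceberg table to S3 via the AWS Glue catalog. The catalog name, bucket, database, and table names are illustrative assumptions, and the cluster is assumed to have the Iceberg Spark runtime and AWS bundle on its classpath.

```python
# Illustrative sketch only: batch-write curated data into a partitioned Apache Iceberg
# table on S3, cataloged via the AWS Glue Data Catalog. All names below (glue_catalog,
# bucket, database, table, columns) are assumptions, not details from the posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("iceberg-ingest-sketch")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    # Register an Iceberg catalog backed by the AWS Glue Data Catalog.
    .config("spark.sql.catalog.glue_catalog", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue_catalog.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue_catalog.io-impl",
            "org.apache.iceberg.aws.s3.S3FileIO")
    .config("spark.sql.catalog.glue_catalog.warehouse",
            "s3://example-data-lake/warehouse/")
    .getOrCreate()
)

# Read a raw batch (e.g. landed by DMS) and derive a daily partition column.
raw = spark.read.parquet("s3://example-data-lake/raw/transactions/")
curated = raw.withColumn("txn_date", F.to_date("created_at"))

# Create or replace the curated Iceberg table, partitioned by day for efficient pruning.
(
    curated.writeTo("glue_catalog.curated.transactions")
    .using("iceberg")
    .partitionedBy(F.col("txn_date"))
    .createOrReplace()
)

# Iceberg time travel (Spark 3.3+ syntax): query the table as of an earlier timestamp.
spark.sql(
    "SELECT count(*) FROM glue_catalog.curated.transactions "
    "TIMESTAMP AS OF '2024-01-01 00:00:00'"
).show()
```

In practice the partition spec, compression codec, and snapshot-retention settings would be tuned per table, which is the kind of design decision the responsibilities above describe.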

Posted 1 month ago

Apply

7.0 years

40 Lacs

Bhubaneswar, Odisha, India

Remote

Technical Lead - Data Platform (MatchMove, via Uplers): same role and description as the Kolkata listing above; only the listed location differs.

Posted 1 month ago

Apply

7.0 years

40 Lacs

Cuttack, Odisha, India

Remote

Technical Lead - Data Platform (MatchMove, via Uplers): same role and description as the Kolkata listing above; only the listed location differs.

Posted 1 month ago

Apply

7.0 years

40 Lacs

Guwahati, Assam, India

Remote

Technical Lead - Data Platform (MatchMove, via Uplers): same role and description as the Kolkata listing above; only the listed location differs.

Posted 1 month ago

Apply

7.0 years

40 Lacs

Jamshedpur, Jharkhand, India

Remote

Technical Lead - Data Platform (MatchMove, via Uplers): same role and description as the Kolkata listing above; only the listed location differs.

Posted 1 month ago

Apply

7.0 years

40 Lacs

Raipur, Chhattisgarh, India

Remote

Technical Lead - Data Platform (MatchMove, via Uplers): same role and description as the Kolkata listing above; only the listed location differs.

Posted 1 month ago

Apply

7.0 years

40 Lacs

Ranchi, Jharkhand, India

Remote

Technical Lead - Data Platform (MatchMove, via Uplers): same role and description as the Kolkata listing above; only the listed location differs.

Posted 1 month ago

Apply

7.0 years

40 Lacs

Amritsar, Punjab, India

Remote

Technical Lead - Data Platform (MatchMove, via Uplers): same role and description as the Kolkata listing above; only the listed location differs.

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

Design, provision, and document a production-grade AWS micro-service platform for an Apache-powered ERP implementation, hitting our 90-day "go-live" target while embedding DevSecOps guard-rails the team can run without you.

Key Responsibilities
- Cloud Architecture & IaC: Author Terraform modules for VPC, EKS (Graviton), RDS (MariaDB Multi-AZ), MSK, ElastiCache, S3 lifecycle, API Gateway, WAF, and Route 53. Implement node pools (App, Spot Analytics, Cache, GPU) with Karpenter autoscaling.
- CI/CD & GitOps: Set up GitHub Actions pipelines (lint, unit tests, container scan, Terraform plan). Deploy Argo CD for Helm-based application roll-outs (ERP, Bot, Superset, etc.).
- DevSecOps Controls: Enforce OPA Gatekeeper policies, IAM IRSA, Secrets Manager, AWS WAF rules, and ECR image scanning. Build CloudWatch/X-Ray dashboards and wire alerting to Slack/email.
- Automation & DR: Define backup plans (RDS PITR, EBS, S3 Standard-IA → Glacier). Document the cross-Region fail-over run-book (Route 53 health checks); a small verification sketch follows this listing.
- Standard Operating Procedures: Draft SOPs for patching, scaling, on-call, incident triage, and budget monitoring.
- Knowledge Transfer (KT): Run 3× 2-hour remote workshops (infra deep-dive, CI/CD hand-over, DR drill). Produce a "Day-2" wiki: diagrams (Mermaid), run-books, FAQ.

Required Skill Set
- 8+ years designing AWS micro-service / Kubernetes architectures (ideally EKS on Graviton).
- Expert in Terraform, Helm, GitHub Actions, and Argo CD.
- Hands-on with RDS MariaDB, Kafka (MSK), Redis, and SageMaker endpoints.
- Proven DevSecOps background: OPA, IAM least privilege, vulnerability scanning.
- Comfortable translating infra diagrams into plain-language SOPs for non-cloud staff.
- Nice to have: prior ERP deployment experience; WhatsApp Business API integration; EPC or construction IT domain knowledge.

How Success Is Measured
- Go-live readiness: the production cluster passes load, fail-over, and security tests by Day 75.
- Zero critical CVEs exposed in the final Trivy scan.
- 99% IaC coverage; manual console changes are not permitted.
- Team self-sufficiency: internal staff can recreate the stack from scratch using the docs and KT alone.
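The success criteria above lean on automated verification of guard-rails. As a small, hedged illustration (not part of the posting), the Python sketch below uses boto3 to spot-check two of the listed controls: RDS Multi-AZ and S3 lifecycle rules. The instance identifier and bucket name are placeholders.

```python
# Hedged sketch: spot-check two guard-rails named in the role (RDS Multi-AZ, S3 lifecycle)
# with boto3. Identifiers are placeholders; real checks would run in CI or a scheduled
# compliance job alongside the Terraform that provisions these resources.
import boto3
from botocore.exceptions import ClientError


def check_rds_multi_az(db_instance_id: str) -> bool:
    """Return True if the RDS instance is deployed Multi-AZ."""
    rds = boto3.client("rds")
    resp = rds.describe_db_instances(DBInstanceIdentifier=db_instance_id)
    return bool(resp["DBInstances"][0]["MultiAZ"])


def check_s3_lifecycle(bucket: str) -> bool:
    """Return True if the bucket has at least one enabled lifecycle rule
    (e.g. Standard-IA -> Glacier transitions)."""
    s3 = boto3.client("s3")
    try:
        resp = s3.get_bucket_lifecycle_configuration(Bucket=bucket)
    except ClientError:
        return False  # No lifecycle configuration attached to the bucket.
    return any(rule.get("Status") == "Enabled" for rule in resp.get("Rules", []))


if __name__ == "__main__":
    print("RDS Multi-AZ :", check_rds_multi_az("erp-mariadb-prod"))   # placeholder ID
    print("S3 lifecycle :", check_s3_lifecycle("erp-archive-bucket"))  # placeholder name
```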

Posted 1 month ago

Apply

5.0 - 7.0 years

5 - 7 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

We are seeking a motivated and skilled OpenShift DevOps Engineer to join our team. In this role, you will be responsible for building, deploying, and maintaining our applications on the OpenShift platform using CI/CD best practices. You will work closely with developers and other operations team members to ensure smooth and efficient delivery of software updates.

Responsibilities:
- Collaborate with customers to understand their specific requirements.
- Stay up to date with industry trends and emerging technologies.
- Prepare and maintain documentation for processes and procedures.
- Participate in on-call support and incident response, as needed.
- Good knowledge of virtual networking and storage configuration.
- Working experience with Linux.
- Hands-on experience with Kubernetes services, load balancing, and networking modules.
- Proficiency in security, firewall, and storage concepts.
- Implement and manage OpenShift environments, including deployment configurations, cluster management, and resource optimization.
- Design and implement CI/CD pipelines using tools like OpenShift Pipelines, GitOps, or other industry standards.
- Automate build, test, and deployment processes for applications on OpenShift.
- Troubleshoot and resolve issues related to OpenShift deployments and CI/CD pipelines (a small example script follows this listing).
- Collaborate with developers and other IT professionals to ensure smooth delivery of software updates.
- Stay up to date on the latest trends and innovations in OpenShift and CI/CD technologies.
- Participate in the continuous improvement of our DevOps practices and processes.

Qualifications:
- Bachelor's degree in Computer Science or a related field (or equivalent work experience).
- Familiarity with infrastructure-as-code (IaC) tools (e.g., Terraform, Ansible).
- Excellent problem-solving, communication, and teamwork skills.
- Experience working in Agile/Scrum or other collaborative development environments.
- Flexibility to work in a 24/7 support environment.
- Proven experience as a DevOps Engineer or in a similar role.
- Strong understanding of OpenShift platform administration and configuration.
- Experience with CI/CD practices and tools, preferably OpenShift Pipelines, GitOps, or similar options.
- Experience with containerization technologies (Docker, Kubernetes).
- Experience with scripting languages (Python, Bash).
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
- Ability to work independently and as part of a team.

Good to have:
- Experience with cloud platforms (AWS, Azure, GCP).
- Experience with infrastructure-as-code (IaC) tools (Terraform, Ansible).
- Experience with security best practices for DevOps pipelines.
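For a concrete flavor of the scripting and troubleshooting work described above, here is a minimal, hedged Python sketch (not from the posting) that uses the official Kubernetes client to flag Deployments with unavailable replicas in a namespace. The namespace is an assumed placeholder; on OpenShift the same API covers Deployments, while DeploymentConfigs would need the OpenShift-specific API instead.

```python
# Hedged troubleshooting aid: list Deployments in a namespace and flag any whose
# available replica count is below the desired count. Namespace is a placeholder.
from kubernetes import client, config


def report_unhealthy_deployments(namespace: str = "demo-apps") -> None:
    config.load_kube_config()  # or config.load_incluster_config() when running in a pod
    apps = client.AppsV1Api()
    for dep in apps.list_namespaced_deployment(namespace).items:
        desired = dep.spec.replicas or 0
        available = dep.status.available_replicas or 0
        status = "OK" if available >= desired else "DEGRADED"
        print(f"{status:9} {dep.metadata.name}: {available}/{desired} replicas available")


if __name__ == "__main__":
    report_unhealthy_deployments()
```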

Posted 1 month ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Immediate/early joiners preferred. We are hiring for our client, who is seeking an experienced Kubernetes Infrastructure Specialist to design, automate, and optimize advanced container platforms and virtualization solutions in a high-tech environment.

Key Responsibilities
- Design, create, automate, and maintain CI/CD pipelines for OS image creation on both bare-metal and virtual machines.
- Independently create and apply hypervisor templates.
- Automate infrastructure processes to minimize operational overhead.
- Manage and optimize container platform environments (test, acceptance, production) and 150+ downstream clusters (a version-audit sketch follows this listing).
- Enhance container platforms, edge computing, and virtualization solutions.
- Ensure platform security and compliance with best practices.
- Lead the transition from VMware OVAs to Kubernetes-based virtualization.
- Liaise with stakeholders for project approvals and compliance.

Desired Profile
- 8+ years of hands-on experience in Kubernetes, IT infrastructure, and Linux-based systems.
- Proven expertise in:
  - CI/CD pipelines for infrastructure and OS image provisioning.
  - Unattended provisioning of Kubernetes clusters across hypervisors and cloud.
  - SUSE Rancher, Azure Kubernetes Service (AKS), Linux (Ubuntu/SUSE), SUSE Longhorn, VMware vSphere.
  - Infrastructure-as-code tools (Terraform/Terragrunt, Ansible, GitOps).
  - Virtualization technologies and Azure hosting platforms.
- Strong customer focus, proactive ownership, and the ability to lead and act independently.
- Fluent business-level English communication.
- Experience in mature, enterprise environments with high responsibility.

You may also share your resume with us at cv@refrelay.com
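As an illustrative, hedged example of the kind of fleet-wide housekeeping a 150+ cluster estate implies (not part of the posting), the sketch below audits Kubernetes version drift across kubeconfig contexts using the official Python client. It assumes every cluster is reachable through a context in the local kubeconfig; context handling and output format are assumptions.

```python
# Hedged sketch: report the Kubernetes server version for every kubeconfig context,
# so version drift across many downstream clusters is visible at a glance.
from kubernetes import client, config


def audit_cluster_versions() -> None:
    contexts, _active = config.list_kube_config_contexts()
    versions = {}
    for ctx in contexts:
        name = ctx["name"]
        try:
            # Build an API client bound to this specific kubeconfig context.
            api_client = config.new_client_from_config(context=name)
            info = client.VersionApi(api_client).get_code()
            versions[name] = info.git_version
        except Exception as exc:  # unreachable cluster, expired credentials, etc.
            versions[name] = f"error: {exc}"
    for name, version in sorted(versions.items()):
        print(f"{name}: {version}")


if __name__ == "__main__":
    audit_cluster_versions()
```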

Posted 1 month ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Role: Lead DevOps Engineer
Location: Hyderabad, Ahmedabad
Immediate/early joiners preferred.

Responsibilities:
- Lead the design, implementation, and management of an enterprise container orchestration platform using Rafey and Kubernetes.
- Oversee the onboarding and deployment of applications on Rafey platforms utilizing AWS EKS and Azure AKS.
- Develop and maintain CI/CD pipelines to ensure efficient and reliable application deployment using Azure DevOps.
- Collaborate with cross-functional teams to ensure seamless integration and operation of containerized applications.
- Implement and manage infrastructure as code using tools such as Terraform.
- Ensure the security, reliability, and scalability of containerized applications and infrastructure.
- Mentor and guide junior DevOps engineers, fostering a culture of continuous improvement and innovation.
- Monitor and optimize system performance, troubleshooting issues as they arise.
- Stay up to date with industry trends and best practices, incorporating them into the team's workflows.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 10+ years of experience in DevOps, with a focus on container orchestration platforms.
- Extensive hands-on experience with Kubernetes, EKS, and AKS.
- Good to have: knowledge of the Rafey platform (a Kubernetes management platform).
- Proven track record of onboarding and deploying applications on Kubernetes platforms, including AWS EKS and Azure AKS.
- Hands-on experience writing Kubernetes manifest files (a small manifest-linting sketch follows this listing).
- Strong knowledge of Kubernetes Ingress and Ingress Controllers.
- Strong knowledge of Azure DevOps CI/CD pipelines and automation tools.
- Proficiency in infrastructure-as-code tools (e.g., Terraform).
- Excellent problem-solving skills and the ability to troubleshoot complex issues.
- Knowledge of secret management and RBAC configuration.
- Hands-on experience with Helm charts.
- Strong communication and collaboration skills.
- Experience with cloud platforms (AWS, Azure) and container orchestration.
- Knowledge of security best practices in a DevOps environment.

Preferred Skills:
- Strong cloud knowledge (AWS & Azure).
- Strong Kubernetes knowledge.
- Experience with other enterprise container orchestration platforms and tools.
- Familiarity with monitoring and logging tools (e.g., Datadog).
- Understanding of network topology and system architecture.
- Ability to work in a fast-paced, dynamic environment.

Good to have:
- Rafey platform knowledge (a Kubernetes management platform).
- Hands-on experience with GitOps technology.
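To illustrate the manifest-authoring and security-hygiene skills listed above, here is a small, hedged Python sketch (not from the posting) that lints Kubernetes manifests for two common issues: containers without resource limits and Secrets that embed inline data. File paths and the specific checks are illustrative assumptions.

```python
# Hedged sketch of a tiny manifest linter: flag workload containers missing resource
# limits and Secret objects carrying inline data. Usage: python lint_manifests.py *.yaml
import sys
from typing import List

import yaml  # PyYAML


def lint_manifest(path: str) -> List[str]:
    findings = []
    with open(path) as fh:
        for doc in yaml.safe_load_all(fh):
            if not doc:
                continue
            kind = doc.get("kind", "")
            name = doc.get("metadata", {}).get("name", "<unnamed>")
            if kind in ("Deployment", "StatefulSet", "DaemonSet"):
                containers = (
                    doc.get("spec", {})
                    .get("template", {})
                    .get("spec", {})
                    .get("containers", [])
                )
                for c in containers:
                    if not c.get("resources", {}).get("limits"):
                        findings.append(
                            f"{path}: {kind}/{name} container '{c.get('name')}' has no resource limits"
                        )
            if kind == "Secret" and (doc.get("data") or doc.get("stringData")):
                findings.append(
                    f"{path}: Secret/{name} carries inline data; prefer an external secret store"
                )
    return findings


if __name__ == "__main__":
    problems = [finding for p in sys.argv[1:] for finding in lint_manifest(p)]
    print("\n".join(problems) or "no findings")
```

A check like this would typically run as an early stage of the Azure DevOps pipeline the posting describes, before Helm templating or GitOps sync.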

Posted 1 month ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Senior Detection Engineer / Threat Hunter
Overview
The next evolution of AI-powered cyber defense is here. With the rise of cloud and modern technologies, organizations struggle with the vast amount of data, and thereby security alerts, generated by their existing security tools. Cyberattacks continue to get more sophisticated and harder to detect in the sea of alerts and false positives. According to the Forrester 2023 Enterprise Breach Benchmark Report, a security breach costs organizations an average of $3M and takes organizations over 200 days to investigate and respond. AiStrike's platform aims to reduce the time to investigate and respond to threats by over 90%. Our approach is to leverage the power of AI and machine learning to adopt an attacker mindset to prioritize and automate cyber threat investigation and response. The platform reduces alerts by 100:5 and provides detailed context and link analysis capabilities to investigate the alert. The platform also provides collaborative workflow and no-code automation to cut down the time to respond to threats significantly.
We're seeking a senior-level Detection Engineer and Threat Hunter with deep expertise in modern SIEMs and a strong focus on AI-augmented threat detection and investigation. In this role, you'll design scalable, modular detection content using Sigma, KQL, and platform-specific query languages, while working with AI to automate detection tuning, threat hunting hypotheses, and investigation workflows across enterprise and cloud environments.
Key Responsibilities
Develop high-fidelity, AI-ready detection templates to build detection rules in Sigma, KQL, SPL, Lucene, etc., for Microsoft Sentinel, Chronicle, Splunk, and Elastic (an illustrative Sigma-style rule is sketched below).
Leverage AI-powered engines to prioritize, cluster, and tune detection content dynamically based on environment behavior and telemetry changes.
Identify visibility and data coverage gaps across cloud, identity, EDR, and SaaS log sources; work cross-functionally to close them.
Lead proactive threat hunts driven by AI-assisted hypotheses, anomaly detection, and known threat actor TTPs.
Contribute to AI-enhanced detection-as-code pipelines, integrating rules into CI/CD workflows and feedback loops.
Collaborate with SOC, threat intel, and AI/data science teams to continuously evolve detection efficacy and reduce alert fatigue.
Participate in adversary emulation, purple teaming, and post-incident reviews to drive continuous improvement.
Required Skills
5+ years of hands-on experience in detection engineering, threat hunting, or security operations.
Expert-level knowledge of at least two major SIEM platforms: Microsoft Sentinel, Google Chronicle, Splunk, Elastic, or similar.
Strong proficiency in detection rule languages (Sigma, KQL, SPL, Lucene) and mapping to MITRE ATT&CK.
Experience using or integrating AI/ML for detection enrichment, alert correlation, or anomaly-based hunting.
Familiarity with telemetry sources (EDR, cloud, identity, DNS, proxy) and techniques to enrich or normalize them.
Ability to document, test, and optimize detection rules and threat hunt queries in a modular, scalable fashion.
Strong communication skills and the ability to translate complex threat scenarios into automated, AI-ready detection logic.
Nice to Have
Experience integrating AI/ML platforms for security analytics, behavior baselining, or entity risk scoring.
Familiarity with detection-as-code and GitOps workflows for rule development, testing, and deployment.
Scripting knowledge (Python, PowerShell) for enrichment, custom detection logic, or automation.
Experience with purple teaming tools like Atomic Red Team, SCYTHE, or Caldera.
AiStrike is committed to providing equal employment opportunities. All qualified applicants and employees will be considered for employment and advancement without regard to race, color, religion, creed, national origin, ancestry, sex, gender, gender identity, gender expression, physical or mental disability, age, genetic information, sexual or affectional orientation, marital status, status regarding public assistance, familial status, military or veteran status or any other status protected by applicable law.
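As a purely illustrative companion to the Sigma and detection-as-code items above: a minimal sketch, assuming PyYAML is available, of a hypothetical Sigma-style rule kept as YAML and sanity-checked before it would enter a CI pipeline. The rule logic, UUID, and thresholds are placeholders, not production detection content.

```python
"""Illustrative sketch of detection-as-code hygiene: a minimal Sigma-style rule
kept as YAML and sanity-checked before it enters a CI/CD pipeline.
The rule content is hypothetical and not tuned for any real environment."""

import yaml  # pip install pyyaml

RULE_YAML = """
title: Suspicious PowerShell Encoded Command
id: 11111111-2222-3333-4444-555555555555   # hypothetical UUID
status: experimental
logsource:
  product: windows
  category: process_creation
detection:
  selection:
    Image|endswith: '\\powershell.exe'
    CommandLine|contains: '-EncodedCommand'
  condition: selection
level: medium
tags:
  - attack.execution
  - attack.t1059.001
"""

REQUIRED_KEYS = {"title", "logsource", "detection", "level"}

def validate_rule(text: str) -> dict:
    """Parse the rule and check the fields a CI job would insist on."""
    rule = yaml.safe_load(text)
    missing = REQUIRED_KEYS - rule.keys()
    if missing:
        raise ValueError(f"rule is missing required keys: {sorted(missing)}")
    if "condition" not in rule["detection"]:
        raise ValueError("detection block needs a condition")
    return rule

if __name__ == "__main__":
    print(validate_rule(RULE_YAML)["title"])
```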

Posted 1 month ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana

On-site

Location: Gurugram, Haryana, India
This job is associated with 2 categories.
Job Id: GGN00001744
Category: Information Technology
Job Type: Full-Time
Posted Date: 06/16/2025
Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together.
Description
United's Digital Technology team designs, develops, and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions.
Our Values: At United Airlines, we believe that inclusion propels innovation and is the foundation of all that we do. Our Shared Purpose: "Connecting people. Uniting the world." drives us to be the best airline for our employees, customers, and everyone we serve, and we can only do that with a truly diverse and inclusive workforce. Our team spans the globe and is made up of diverse individuals all working together with cutting-edge technology to build the best airline in the history of aviation. With multiple employee-run "Business Resource Group" communities and world-class benefits like health insurance, parental leave, and space available travel, United is truly a one-of-a-kind place to work that will make you feel welcome and accepted. Come join our team and help us make a positive impact on the world.
Job overview and responsibilities
United Airlines is seeking talented people to join the Data Engineering team. The Data Engineering organization is responsible for driving data-driven insights and innovation to support the data needs of commercial and operational projects with a digital focus. You will work as a Senior Engineer - Machine Learning and collaborate with data scientists and data engineers to:
Build high-performance, cloud-native machine learning infrastructure and services to enable rapid innovation across United
Build complex data ingestion and transformation pipelines for batch and real-time data (a minimal streaming-consumer sketch follows this posting)
Support large-scale model training and serving pipelines in a distributed and scalable environment
This position is offered on local terms and conditions within United's wholly owned subsidiary United Airlines Business Services Pvt. Ltd. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded.
United Airlines is an equal opportunity employer. United Airlines recruits, employs, trains, compensates, and promotes regardless of race, religion, color, national origin, gender identity, sexual orientation, physical ability, age, veteran status, and other protected status as required by applicable law.
Qualifications
Required
BS/BA in Computer Science, Data Science, Engineering, Mathematics, or a related discipline required
Strong software engineering experience with Python and at least one additional language such as Go, Java, or C/C++
Familiarity with ML methodologies and frameworks (e.g., PyTorch, TensorFlow), preferably including building and deploying production ML pipelines
Experience developing cloud-native solutions with Docker and Kubernetes
Cloud-native DevOps and CI/CD experience using tools such as Jenkins or AWS CodePipeline; preferably experience with GitOps using tools such as ArgoCD, Flux, or Jenkins X
Experience building real-time and event-driven stream processing pipelines with technologies such as Kafka, Flink, and Spark
Experience setting up and optimizing data stores (RDBMS/NoSQL) for production use in the ML app context
Strong desire to stay aligned with the latest developments in cloud-native and ML ops/engineering and to experiment with and learn new technologies
Experience
3+ years of software engineering experience with languages such as Python, Go, Java, Scala, Kotlin, or C/C++
2+ years of experience working in cloud environments (AWS preferred)
2+ years of experience with Big Data technologies such as Spark and Flink
2+ years of experience with cloud-native DevOps and CI/CD
At least one year of experience with Docker and Kubernetes in a production environment
Must be legally authorized to work in India for any employer without sponsorship
Must be fluent in English and Hindi (written and spoken)
Successful completion of interview required to meet job qualification
Reliable, punctual attendance is an essential function of the position
Preferred
Master's in Computer Science or a related STEM field
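To make the streaming-ingestion requirement above concrete: a minimal sketch using the kafka-python client, with a hypothetical topic name, broker address, and JSON message format. A production pipeline would add schema handling, error handling, and offset management appropriate to the environment.

```python
"""Minimal sketch of a real-time ingestion consumer, using the kafka-python
client. The topic name, broker address, and message schema are hypothetical.
Requires: pip install kafka-python"""

import json
from kafka import KafkaConsumer

def consume_events(topic: str = "flight-events",
                   bootstrap_servers: str = "localhost:9092") -> None:
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers=bootstrap_servers,
        group_id="ml-feature-ingest",        # consumer group for scaling out
        auto_offset_reset="earliest",        # start from the beginning on first run
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    for message in consumer:
        event = message.value
        # Placeholder transformation step; a real pipeline would write to a
        # feature store or hand off to a downstream stream-processing job.
        print(f"partition={message.partition} offset={message.offset} event={event}")

if __name__ == "__main__":
    consume_events()
```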

Posted 1 month ago

Apply

5.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

On-site

Required Qualifications & Skills:
5+ years in DevOps, SRE, or Infrastructure Engineering.
Strong expertise in Azure Cloud & Infrastructure-as-Code (Terraform, CloudFormation).
Proficient in Docker & Kubernetes.
Hands-on with CI/CD tools & scripting (Bash, Python, or Go).
Strong knowledge of Linux, networking, and security best practices.
Experience with monitoring & logging tools (ELK, Prometheus, Grafana).
Familiarity with GitOps, Helm charts & automation.
Key Responsibilities:
Design & manage CI/CD pipelines (Jenkins, GitLab CI/CD, GitHub Actions).
Automate infrastructure provisioning (Terraform, Ansible, Pulumi).
Monitor & optimize cloud environments.
Implement containerization & orchestration (Docker, Kubernetes - EKS/GKE/AKS).
Maintain logging, monitoring & alerting (ELK, Prometheus, Grafana, Datadog).
Ensure system security, availability & performance tuning.
Manage secrets & credentials (Vault, Secrets Manager).
Troubleshoot infrastructure & deployment issues.
Implement blue-green & canary deployments.
Collaborate with developers to enhance system reliability & productivity.
Preferred Skills:
Certification: Azure DevOps Engineer.
Experience with multi-cloud, microservices, event-driven systems.
Exposure to AI/ML pipelines & data engineering workflows.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Bengaluru

On-site

We help the world run better
At SAP, we enable you to bring out your best. Our company culture is focused on collaboration and a shared passion to help the world run better. How? We focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned to our purpose-driven and future-focused work. We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from.
About the team:
SAP's Business Technology Platform is shaping the future of enterprise software, creating the ability to extend and personalize SAP applications, integrate and connect entire landscapes, and empower business users to integrate processes and experiences. Our BTP AI team aims to stay on top of the latest advancements in AI and how we apply these to increase the platform value for our customers and partners. It is a multi-disciplinary team of data engineers, engagement leads, development architects and developers that aims to support and deliver AI cases in the context of BTP.
The Role:
We are looking for a Full-Stack Engineer with AI skills, with a passion for strategic cloud platform topics and the field of generative AI. Generative artificial intelligence (GenAI) has emerged as a transformative force in society, with the ability to create, mimic, and innovate across a wide range of domains. This has implications for enterprise software and ultimately SAP. Become part of a multi-disciplinary team that focuses on execution and shaping the future of GenAI capabilities across our Business Technology Platform. This role requires a self-directed team player with deep coding knowledge and business acumen. In this role, you will be working closely with Architects, Data Engineers, UX designers and many others.
Your responsibility as Full-Stack Engineer with AI skills will be to:
Iterate rapidly, collaborating with product and design to build and launch PoCs and first versions of new products and features.
Work with engineers across the company to ship modular and integrated products and features.
Feel at home in the TypeScript/Node.js backend as well as the UI5/JavaScript/React frontend, along with other software technologies including REST/JSON.
Design AI-based solutions to complex problems and requirements in collaboration with others in a cross-functional team.
Assess new technologies in the field of AI, tools, and infrastructure with which to evolve existing highly used functionalities and services in the cloud.
Design, maintain, and optimize data infrastructure for data collection, management, transformation, and access.
The Roles & Responsibilities:
Critical thinking, an innovative mindset, and a problem-solving mindset.
Engineering or master's degree in Computer Science or a related field with 5+ years of professional experience in software development.
Extensive experience in the full life cycle of software development, from design and implementation to testing and deployment.
Ability to thrive in a collaborative environment involving many different teams and stakeholders. You enjoy working with a diverse group of people with different expertise backgrounds and perspectives.
Proficient in writing clean and scalable code using the programming languages in the AI and BTP technology stack.
Aware of the fast-changing AI landscape and confident to suggest new, innovative ways to achieve product features.
Ability to adapt to evolving technologies and industry trends in GenAI while working in a cross-functional team, showing creative problem-solving skills and customer-centricity.
Nice to have: Experience in data-centric programming languages (e.g. Python), SQL databases (e.g. SAP HANA Cloud), data modeling, integration, and schema design is a plus.
Excellent communication skills with fluency in written and spoken English.
Tech you bring: SAP BTP, Cloud Foundry, Kyma, Docker, Kubernetes, SAP CAP, Jenkins, Git, GitOps, Python.
#ICC25 #SAPInternalT3
Bring out your best
SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development. Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, you can bring out your best.
We win with inclusion
SAP's culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone, regardless of background, feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities. If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to the Recruiting Operations Team: Careers@sap.com
For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy. Specific conditions may apply for roles in Vocational Training.
EOE AA M/F/Vet/Disability:
Qualified applicants will receive consideration for employment without regard to their age, race, religion, national origin, ethnicity, gender (including pregnancy, childbirth, et al.), sexual orientation, gender identity or expression, protected veteran status, or disability.
Successful candidates might be required to undergo a background verification with an external vendor.
Requisition ID: 423948 | Work Area: Software-Design and Development | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: #LI-Hybrid.

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description Summary
As a Data Architect, you will play a pivotal role in defining and implementing common data models, API standards, and leveraging the Common Information Model (CIM) standard across a portfolio of products deployed in Critical National Infrastructure (CNI) environments globally. GE Vernova is the leading software provider for the operations of national and regional electricity grids worldwide. Our software solutions range from supporting electricity markets, enabling grid and network planning, to real-time electricity grid operations. In this senior technical role, you will collaborate closely with lead software architects to ensure secure, performant, and composable designs and implementations across our portfolio.
Job Description
Grid Software (a division of GE Vernova) is driving the vision of GridOS - a portfolio of software running on a common platform to meet the fast-changing needs of the energy sector and support the energy transition. Grid Software has extensive and well-established software stacks that are progressively being ported to a common microservice architecture, delivering a composable suite of applications. Simultaneously, new applications are being designed and built on the same common platform to provide innovative solutions that enable our customers to accelerate the energy transition.
Responsibilities
This role is for a senior data architect who understands the core designs, principles, and technologies of GridOS. Key responsibilities include:
Formalizing Data Models and API Standards: Lead the formalization and standardization of data models and API standards across products to ensure interoperability and efficiency.
Leveraging CIM Standards: Implement and advocate for the Common Information Model (CIM) standards to ensure consistent data representation and exchange across systems (a toy CIM-style model is sketched after this posting).
Architecture Reviews and Coordination: Contribute to architecture reviews across the organization as part of Architecture Review Boards (ARB) and the Architecture Decision Record (ADR) process.
Knowledge Transfer and Collaboration: Work with the Architecture SteerCo and Developer Standard Practices team to establish standard practices around data modeling and API design.
Documentation: Ensure that data modeling and API standards are accurately documented and maintained in collaboration with documentation teams.
Backlog Planning and Dependency Management: Work across software teams to prepare backlog planning and to identify and manage cross-team dependencies related to data modeling and API requirements.
Key Knowledge Areas and Expertise
Data Architecture and Modeling: Extensive experience in designing and implementing data architectures and common data models.
API Standards: Expertise in defining and implementing API standards to ensure seamless integration and data exchange between systems.
Common Information Model (CIM): In-depth knowledge of CIM standards and their application within the energy sector.
Data Mesh and Data Fabric: Understanding of data mesh and data fabric principles, enabling software composability and data-centric design trade-offs.
Microservice Architecture: Understanding of microservice architecture and software development.
Kubernetes: Understanding of Kubernetes, including software development in an orchestrated microservice architecture. This includes the Kubernetes API, custom resources, API aggregation, Helm, and manifest standardization.
CI/CD and DevSecOps: Experience with CI/CD pipelines, DevSecOps practices, and GitOps, especially in secure, air-gapped environments.
Mobile Software Architecture: Knowledge of mobile software architecture for field crew operations, offline support, and near-real-time operation.
Additional Knowledge (Advantageous But Not Essential)
Energy Industry Technologies: Familiarity with key technologies specific to the energy industry, such as Supervisory Control and Data Acquisition (SCADA), geospatial network modeling, etc.
This is a critical role within Grid Software, requiring a broad range of knowledge and strong organizational and communication skills to drive common architecture, software standards, and principles across the organization.
Additional Information
Relocation Assistance Provided: No
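As a loose illustration of the kind of common data model the posting describes (not the actual IEC CIM classes): a toy Python sketch in which every exchanged object carries a stable identifier, so different products can reference the same equipment consistently. All class and field names are simplified assumptions.

```python
"""Toy sketch of a shared, CIM-inspired data model. Real CIM (IEC 61970/61968)
profiles are far richer; this only illustrates every grid object carrying a
stable identifier (mRID) so products can exchange data consistently.
Class and field names are simplified assumptions, not the actual standard."""

from dataclasses import dataclass, asdict, field
import json
import uuid

@dataclass
class IdentifiedObject:
    # Every exchanged object carries a master resource identifier and a name.
    mrid: str = field(default_factory=lambda: str(uuid.uuid4()))
    name: str = ""

@dataclass
class Breaker(IdentifiedObject):
    normally_open: bool = False
    rated_current_amps: float = 0.0

@dataclass
class Substation(IdentifiedObject):
    breakers: list[Breaker] = field(default_factory=list)

if __name__ == "__main__":
    sub = Substation(name="Example 110kV Substation")
    sub.breakers.append(Breaker(name="BRK-01", rated_current_amps=2000.0))
    # A JSON payload like this is the kind of exchange an API standard would pin down.
    print(json.dumps(asdict(sub), indent=2))
```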

Posted 1 month ago

Apply

7.0 years

0 Lacs

Noida

On-site

Our Company
Changing the world through digital experiences is what Adobe's all about. We give everyone, from emerging artists to global brands, everything they need to design and deliver exceptional digital experiences! We're passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen.
We're on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!
Do you have mobile applications installed on your devices? If so, chances are you've likely encountered our products. Ready to redefine the future of mobile experiences? The Adobe Experience Cloud Mobile team is integral to the Adobe Journey Optimizer and Adobe Experience Platform, tailoring personalized, multi-channel customer journeys and campaigns with unified real-time customer data. Empowering businesses to deliver seamless, personalized experiences across channels is our focus.
We're looking for a Software Engineer who is hardworking, eager to learn new technologies, and ready to contribute to building scalable, performant services for large enterprises. Your role involves designing, developing, testing, and maintaining high-performance systems in multi-cloud/region environments. Join us in shaping the digital experiences of tomorrow and making a significant impact in an ambitious and rewarding environment.
What you'll do
Participate in all aspects of service development activities including design, prioritisation, coding, code review, testing, bug fixing, and deployment.
Implement and maintain robust monitoring, alerting, and incident response to ensure the highest level of uptime and Quality of Service to customers through operational excellence.
Participate in incident response efforts during significant impact events, and contribute to after-action investigations, reviews, and indicated improvement actions.
Identify and address performance bottlenecks. Look for ways to continually improve the product and process.
Build and maintain detailed documentation for software architecture, design, and implementation.
Develop and evolve our test automation infrastructure to increase scale and velocity.
Ensure quality around services and the end-to-end experience of our products.
Collaborate with multi-functional professionals (UI/SDK developers, product managers, Design, etc.) to deliver solutions.
Participate in story mapping, daily stand-ups, retrospectives, and sprint planning/demos on a two-week cadence.
Work independently on delivering sophisticated functionality.
Fast prototype ideas and concepts, and research recent trends and technologies.
Communicate clearly with the team and management to define and achieve goals.
Mentor and grow junior team members.
What you will need to succeed:
B.S. in Computer Science or equivalent engineering degree
7+ years of experience crafting and developing web or software applications
Strong communication and teamwork skills, building positive relationships with internal and external customers
Dedication to teamwork, self-organization, and continuous improvement
Proven experience in backend development, with expertise in languages such as Java, Node.js or Python
Experience in running cloud infrastructure, including hands-on experience with AWS or Azure, Kubernetes, GitOps, Terraform, Docker, CI/CD
Experience in setting up SLAs/SLOs/SLIs for key services and establishing the monitoring around them
Experience in writing functional/integration/performance tests and test frameworks
Experience with both SQL and NoSQL
Experience with Kafka and Zookeeper is a plus
Experience with Mobile Application development is a plus
Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more.
Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At SolarWinds, we're a people-first company. Our purpose is to enrich the lives of the people we serve, including our employees, customers, shareholders, partners, and communities. Join us in our mission to help customers accelerate business transformation with simple, powerful, and secure solutions.
The ideal candidate thrives in an innovative, fast-paced environment and is collaborative, accountable, ready, and empathetic. We're looking for individuals who believe they can accomplish more as a team and create lasting growth for themselves and others. We hire based on attitude, competency, and commitment. Solarians are ready to advance our world-class solutions in a fast-paced environment and accept the challenge to lead with purpose. If you're looking to build your career with an exceptional team, you've come to the right place. Join SolarWinds and grow with us!
Summary:
Join SolarWinds as a Senior Site Reliability Engineer (SRE) to help advance our development and production infrastructure and operations. In this role, you will collaborate with SRE and cross-functional engineering teams, implementing high-quality SRE practices and working on cloud infrastructure and system operations.
Responsibilities:
Work collaboratively with software engineering teams to define infrastructure and deployment requirements.
Contribute to automation and observability initiatives.
Develop and maintain operational tools for deployment, monitoring, and analysis of AWS and Azure infrastructure.
Lead response to production incidents, conduct postmortems, and drive continuous improvement as part of 24/7 on-call rotations.
Contribute to on-call documentation and incident response playbooks.
Establish and drive operations performance through Service Level Objectives (SLOs); a minimal error-budget calculation is sketched below.
Adhere to development best practices, including continuous integration/deployment and code review.
Seek mentorship and learning opportunities to demonstrate a commitment to continuous learning and professional development.
Required Skills:
At least 5+ years of experience designing, building, and maintaining SaaS environments.
4+ years of experience with AWS/Azure infrastructure using Terraform.
Experience building and running Kubernetes clusters.
Experience with observability tools (monitoring, logging, tracing, metrics).
Experience with GitOps CI/CD processes.
Proficiency in scripting with Python, Go, bash, or PowerShell, and AWS CLI tools.
Experience with security operations, including policy and infrastructure security, key management, and encryption.
Strong customer orientation and excellent communication skills.
Collaborative problem-solving skills and a strong bias for ownership and action.
SolarWinds is an Equal Employment Opportunity Employer. SolarWinds will consider all qualified applicants for employment without regard to race, color, religion, sex, age, national origin, sexual orientation, gender identity, marital status, disability, veteran status or any other characteristic protected by law. All applications are treated in accordance with the SolarWinds Privacy Notice: https://www.solarwinds.com/applicant-privacy-notice
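To make the SLO responsibility above concrete: a minimal sketch of the error-budget arithmetic behind an availability SLO. The target, window, and request counts are hypothetical examples.

```python
"""Minimal sketch of the arithmetic behind an SLO error budget. The SLO target,
window, and request counts are hypothetical."""

def error_budget_report(slo_target: float, total_requests: int, failed_requests: int) -> dict:
    # Availability actually observed over the window.
    observed = 1.0 - (failed_requests / total_requests)
    # The error budget is the fraction of requests allowed to fail.
    budget_fraction = 1.0 - slo_target
    allowed_failures = budget_fraction * total_requests
    budget_consumed = failed_requests / allowed_failures if allowed_failures else float("inf")
    return {
        "slo_target": slo_target,
        "observed_availability": round(observed, 6),
        "allowed_failures": int(allowed_failures),
        "budget_consumed_pct": round(budget_consumed * 100, 1),
        "slo_met": observed >= slo_target,
    }

if __name__ == "__main__":
    # Example: a 99.9% monthly SLO, 30 million requests, 21,000 failures
    # gives 30,000 allowed failures and 70% of the budget consumed.
    print(error_budget_report(0.999, 30_000_000, 21_000))
```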

Posted 1 month ago

Apply

6.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Job Description
Job Summary
We are seeking a Senior IaC Engineer to architect, develop, and automate D&A GCP PaaS services and Databricks platform provisioning using Terraform, Spacelift, and GitHub. This role combines the depth of platform engineering with the principles of reliability engineering, enabling resilient, secure, and scalable cloud environments. The ideal candidate has 6+ years of hands-on experience with IaC, CI/CD, infrastructure automation, and driving cloud infrastructure reliability.
Key Responsibilities
Infrastructure & Automation
Design, implement, and manage modular, reusable Terraform modules to provision GCP resources (BigQuery, GCS, VPC, IAM, Pub/Sub, Composer, etc.).
Automate provisioning of Databricks workspaces, clusters, jobs, service principals, and permissions using Terraform.
Build and maintain CI/CD pipelines for infrastructure deployment and compliance using GitHub Actions and Spacelift.
Standardize and enforce GitOps workflows for infrastructure changes, including code reviews and testing.
Integrate infrastructure cost control, policy-as-code, and secrets management into automation pipelines.
Architecture & Reliability
Lead the design of scalable and highly reliable infrastructure patterns across GCP and Databricks.
Implement resiliency and fault-tolerant designs, backup/recovery mechanisms, and automated alerting around infrastructure components.
Partner with SRE and DevOps teams to enable observability, performance monitoring, and automated incident response tooling.
Develop proactive monitoring and drift detection for Terraform-managed resources (a minimal drift-check sketch follows this posting).
Contribute to reliability reviews, runbooks, and disaster recovery strategies for cloud resources.
Collaboration & Governance
Work closely with security, networking, FinOps, and platform teams to ensure compliance, cost-efficiency, and best practices.
Define Terraform standards, module registries, and access patterns for scalable infrastructure usage.
Provide mentorship, peer code reviews, and knowledge sharing across engineering teams.
Required Skills & Experience
6+ years of experience with Terraform and Infrastructure as Code (IaC), with deep expertise in GCP provisioning.
Experience in automating Databricks (clusters, jobs, users, ACLs) using Terraform.
Strong hands-on experience with Spacelift (or similar tools like Terraform Cloud or Atlantis) and GitHub CI/CD workflows.
Deep understanding of infrastructure reliability principles: HA, fault tolerance, rollback strategies, and zero-downtime deployments.
Familiarity with monitoring/logging frameworks (Cloud Monitoring, Stackdriver, Datadog, etc.).
Strong scripting and debugging skills to troubleshoot infrastructure or CI/CD failures.
Proficiency with GCP networking, IAM policies, folder/project structure, and Org Policy configuration.
Nice to Have
HashiCorp Certified: Terraform Associate or Architect.
Familiarity with SRE principles (SLOs, error budgets, alerting).
Exposure to FinOps strategies: cost controls, tagging policies, budget alerts.
Experience with container orchestration (GKE/Kubernetes); Cloud Composer is a plus.
No relocation support available.
Business Unit Summary
At Mondelēz International, our purpose is to empower people to snack right by offering the right snack, for the right moment, made the right way. That means delivering a broad range of delicious, high-quality snacks that nourish life's moments, made with sustainable ingredients and packaging that consumers can feel good about.
We have a rich portfolio of strong brands, both global and local, including many household names such as Oreo, belVita and LU biscuits; Cadbury Dairy Milk, Milka and Toblerone chocolate; Sour Patch Kids candy and Trident gum. We are proud to hold the top position globally in biscuits, chocolate and candy and the second top position in gum. Our 80,000 makers and bakers are located in more than 80 countries and we sell our products in over 150 countries around the world. Our people are energized for growth and critical to us living our purpose and values. We are a diverse community that can make things happen, and happen fast.
Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Job Type: Regular
Analytics & Modelling
Analytics & Data Science
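As an illustration of the drift-detection responsibility above: a minimal sketch that shells out to `terraform plan -detailed-exitcode` on a schedule and alerts when the exit code signals that live infrastructure has diverged from code. It assumes the Terraform CLI is installed and authenticated; the working directory and notification hook are placeholders.

```python
"""Minimal sketch of proactive drift detection for Terraform-managed resources.
Assumes the Terraform CLI is installed and already authenticated against the
backend; the working directory and alerting hook are placeholders."""

import subprocess
import sys

def check_drift(workdir: str) -> int:
    # -detailed-exitcode: 0 = no changes, 1 = error, 2 = pending changes (drift or unapplied code).
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false", "-no-color"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    if result.returncode == 2:
        notify(f"Drift detected in {workdir}:\n{result.stdout[-2000:]}")
    elif result.returncode == 1:
        notify(f"terraform plan failed in {workdir}:\n{result.stderr[-2000:]}")
    return result.returncode

def notify(message: str) -> None:
    # Placeholder: in practice this would post to Slack, PagerDuty, or Cloud Monitoring.
    print(message, file=sys.stderr)

if __name__ == "__main__":
    sys.exit(check_drift(sys.argv[1] if len(sys.argv) > 1 else "."))
```

A scheduled CI job (for example in Spacelift or GitHub Actions) running this per workspace is one common way to surface drift before it causes an incident.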

Posted 1 month ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

Remote

As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn't changed: we're here to stop breaches, and we've redefined modern security with the world's most advanced AI-native platform. We work on large scale distributed systems, processing almost 3 trillion events per day. We have 3.44 PB of RAM deployed across our fleet of C* servers, and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We're also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We're always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you.
About The Role
CrowdStrike is hiring a Sr. Engineer - Observability to help take our observability and tracing capabilities to the next level. We are looking for a highly technical, hands-on engineer with experience using several open source projects commonly found in large-scale deployments. Our team works to develop infrastructure services to support the CrowdStrike engineering teams' pursuit of a full DevOps model.
What You'll Do
Ensure a reliable and tested platform that provides comprehensive application performance monitoring and distributed tracing.
Architect, test and build a large scale observability platform leveraging Sentry and distributed tracing solutions.
Design and implement end-to-end tracing across a microservices architecture (a minimal OpenTelemetry sketch follows this posting).
Build, test and deliver Kubernetes operators.
Participate in 24x7 on-call rotations (monthly or bi-monthly).
Participate in regular retros, capacity and planning meetings with your team, allowing team collaboration and discussions in a high-fidelity manner.
Be part of "lunch and learn" demos for new POCs, or design sessions to work out new architectures.
Flexible working: with transparent communication, we encourage flexible working and a healthy work-life balance.
Review new design proposals from your peers.
What You'll Need
Experience in observability, with a focus on error tracking and distributed tracing (Sentry, OpenTelemetry, Jaeger or similar solutions).
Experience in software development, preferably building Kubernetes operators using Python, Bash or Go.
Experience with large-scale, business-critical Linux environments.
Experience operating within the cloud; AWS and GCP preferred.
Experience with TDD, CI/CD, Chaos Engineering or similar resilience and reliability practices for infrastructure development.
8+ years of industry experience.
Bonus Points
Proven ability to work effectively with both local and remote teams.
Rock solid communication skills, verbal and written.
A combination of confidence and independence, with the prudence to know when to ask for help from the rest of the team.
Contributions and involvement in OSS projects.
Experience with being on-call.
Knowledge of SRE, DevOps and GitOps practices.
Benefits Of Working At CrowdStrike
Remote-friendly and flexible work culture
Market leader in compensation and equity awards
Comprehensive physical and mental wellness programs
Competitive vacation and holidays for recharge
Paid parental and adoption leaves
Professional development opportunities for all employees regardless of level or role
Employee Resource Groups, geographic neighbourhood groups and volunteer opportunities to build connections
Vibrant office culture with world class amenities
Great Place to Work Certified™ across the globe
CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program.
CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions, including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs, on valid job requirements.
If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.
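As a small illustration of the distributed-tracing work described above: a minimal sketch using the OpenTelemetry Python SDK with a console exporter. A real deployment would export spans via OTLP to a collector; the service and span names here are hypothetical.

```python
"""Minimal sketch of tracing instrumentation with the OpenTelemetry Python SDK,
exporting spans to the console. Service and span names are hypothetical.
Requires: pip install opentelemetry-sdk"""

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Register a tracer provider tagged with a service name so spans are attributable.
provider = TracerProvider(resource=Resource.create({"service.name": "checkout-api"}))
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def handle_request(order_id: str) -> None:
    # Parent span for the inbound request; nested span for a downstream call.
    with tracer.start_as_current_span("handle_request") as span:
        span.set_attribute("order.id", order_id)
        with tracer.start_as_current_span("call_inventory_service"):
            pass  # placeholder for the actual downstream call

if __name__ == "__main__":
    handle_request("order-123")
```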

Posted 1 month ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We are looking for an experienced GCP DevOps Engineer to join our growing cloud and infrastructure team. If you're passionate about automation, cloud-native technologies, and building scalable, secure infrastructure on Google Cloud Platform (GCP), we want to hear from you!
Key Responsibilities:
Design, implement, and manage CI/CD pipelines using tools like Jenkins, GitLab CI, or Cloud Build.
Automate infrastructure provisioning and configuration management using Terraform, Ansible, or similar IaC tools.
Monitor and optimize cloud infrastructure performance, cost, and security using GCP-native tools (e.g., Stackdriver, Cloud Monitoring, Logging, and Security Command Center).
Set up and manage containerized workloads using Kubernetes (GKE) and Docker.
Implement DevSecOps practices by integrating security at every phase of the software development lifecycle.
Collaborate with development, security, and operations teams to streamline deployment processes and improve system reliability.
Ensure high availability, disaster recovery, and backup strategies are in place and regularly tested.
Troubleshoot production issues and participate in incident response and root cause analysis.
Required Skills and Experience:
5–8 years of hands-on experience in DevOps or SRE roles.
Strong expertise in Google Cloud Platform (GCP) services such as Compute Engine, Cloud Storage, VPC, IAM, Cloud Functions, Cloud Run, Pub/Sub, etc.
Proficiency in infrastructure as code using Terraform or Deployment Manager.
Deep understanding of CI/CD pipelines, version control (Git), and deployment automation.
Experience with Kubernetes (preferably GKE) and container orchestration.
Familiarity with scripting languages like Python, Bash, or Go.
Knowledge of monitoring/logging solutions: Prometheus, Grafana, Stackdriver, ELK, etc.
Solid understanding of networking, firewalls, DNS, and load balancers in cloud environments.
Exposure to security and compliance best practices in cloud deployments.
Preferred Qualifications:
GCP certifications (e.g., Professional Cloud DevOps Engineer, Associate Cloud Engineer).
Experience with hybrid cloud or multi-cloud environments.
Working knowledge of other cloud platforms (AWS, Azure) is a plus.
Exposure to GitOps practices and tools like ArgoCD or FluxCD.
What We Offer:
Opportunity to work on modern, cloud-native projects.
Flexible work arrangements.
A collaborative and learning-focused environment.
Competitive salary and benefits.

Posted 1 month ago

Apply

3.0 - 8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Looking for Immediate Joiners Only
Job Title: Azure DevOps Architect
Experience: 3 to 8 Years
Location: Noida / Bangalore / Gurugram
Job Type: Full-Time
Work Mode: 4 Days WFO
Job Summary:
We are seeking a skilled and motivated DevOps Engineer to join our growing engineering team. The ideal candidate will have hands-on experience with Azure DevOps, Kubernetes, Docker, and Terraform, and a passion for automation, scalability, and performance optimization. You will be responsible for managing CI/CD pipelines, deploying infrastructure as code, and supporting the overall DevOps culture and best practices within the organization.
Key Responsibilities:
Design, implement, and manage scalable CI/CD pipelines using Azure DevOps.
Build and manage containerized applications using Docker and Kubernetes.
Develop and maintain infrastructure using Terraform (IaC – Infrastructure as Code).
Monitor system reliability, availability, and performance using observability tools.
Collaborate with software developers, QA, and IT teams to improve deployment processes and system performance.
Troubleshoot and resolve issues in development, test, and production environments.
Automate manual tasks and continuously improve DevOps workflows.
Ensure security and compliance best practices are integrated into the DevOps processes.
Required Skills & Qualifications:
3 to 8 years of experience in a DevOps or infrastructure engineering role.
Strong hands-on experience with Azure DevOps for source control, build, and release pipelines.
Proficiency in Docker and Kubernetes for container orchestration and deployment.
Expertise in Terraform for infrastructure provisioning and management.
Solid understanding of CI/CD concepts, automation, and DevOps principles.
Experience with scripting languages such as PowerShell, Bash, or Python.
Knowledge of cloud platforms (Azure preferred; AWS/GCP is a plus).
Strong analytical, debugging, and problem-solving skills.
Good communication skills and ability to work collaboratively in a fast-paced environment.
Experience with configuration management tools like Ansible or Chef.
Manage and operate Ansible Tower/AWX to schedule, track, and orchestrate automation workflows.
Exposure to monitoring/logging tools (e.g., Prometheus, Grafana, ELK Stack).
Familiarity with GitOps workflows and container security best practices.
Experience with microservices-based architecture and cloud-native applications.
Experience with MS SQL Server (administration, automation, and performance tuning).

Posted 1 month ago

Apply

3.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Looking for Immediate Joiners Only
Job Title: Azure DevOps Architect
Experience: 3 to 8 Years
Location: Noida / Bangalore / Gurugram
Job Type: Full-Time
Work Mode: 4 Days WFO
Job Summary:
We are seeking a skilled and motivated DevOps Engineer to join our growing engineering team. The ideal candidate will have hands-on experience with Azure DevOps, Kubernetes, Docker, and Terraform, and a passion for automation, scalability, and performance optimization. You will be responsible for managing CI/CD pipelines, deploying infrastructure as code, and supporting the overall DevOps culture and best practices within the organization.
Key Responsibilities:
Design, implement, and manage scalable CI/CD pipelines using Azure DevOps.
Build and manage containerized applications using Docker and Kubernetes.
Develop and maintain infrastructure using Terraform (IaC – Infrastructure as Code).
Monitor system reliability, availability, and performance using observability tools.
Collaborate with software developers, QA, and IT teams to improve deployment processes and system performance.
Troubleshoot and resolve issues in development, test, and production environments.
Automate manual tasks and continuously improve DevOps workflows.
Ensure security and compliance best practices are integrated into the DevOps processes.
Required Skills & Qualifications:
3 to 8 years of experience in a DevOps or infrastructure engineering role.
Strong hands-on experience with Azure DevOps for source control, build, and release pipelines.
Proficiency in Docker and Kubernetes for container orchestration and deployment.
Expertise in Terraform for infrastructure provisioning and management.
Solid understanding of CI/CD concepts, automation, and DevOps principles.
Experience with scripting languages such as PowerShell, Bash, or Python.
Knowledge of cloud platforms (Azure preferred; AWS/GCP is a plus).
Strong analytical, debugging, and problem-solving skills.
Good communication skills and ability to work collaboratively in a fast-paced environment.
Experience with configuration management tools like Ansible or Chef.
Manage and operate Ansible Tower/AWX to schedule, track, and orchestrate automation workflows.
Exposure to monitoring/logging tools (e.g., Prometheus, Grafana, ELK Stack).
Familiarity with GitOps workflows and container security best practices.
Experience with microservices-based architecture and cloud-native applications.
Experience with MS SQL Server (administration, automation, and performance tuning).

Posted 1 month ago

Apply

7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Our Company
Changing the world through digital experiences is what Adobe's all about. We give everyone, from emerging artists to global brands, everything they need to design and deliver exceptional digital experiences! We're passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen.
We're on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!
Do you have mobile applications installed on your devices? If so, chances are you've likely encountered our products. Ready to redefine the future of mobile experiences? The Adobe Experience Cloud Mobile team is integral to the Adobe Journey Optimizer and Adobe Experience Platform, tailoring personalized, multi-channel customer journeys and campaigns with unified real-time customer data. Empowering businesses to deliver seamless, personalized experiences across channels is our focus.
We're looking for a Software Engineer who is hardworking, eager to learn new technologies, and ready to contribute to building scalable, performant services for large enterprises. Your role involves designing, developing, testing, and maintaining high-performance systems in multi-cloud/region environments. Join us in shaping the digital experiences of tomorrow and making a significant impact in an ambitious and rewarding environment.
What You'll Do
Participate in all aspects of service development activities including design, prioritisation, coding, code review, testing, bug fixing, and deployment.
Implement and maintain robust monitoring, alerting, and incident response to ensure the highest level of uptime and Quality of Service to customers through operational excellence.
Participate in incident response efforts during significant impact events, and contribute to after-action investigations, reviews, and indicated improvement actions.
Identify and address performance bottlenecks. Look for ways to continually improve the product and process.
Build and maintain detailed documentation for software architecture, design, and implementation.
Develop and evolve our test automation infrastructure to increase scale and velocity.
Ensure quality around services and the end-to-end experience of our products.
Collaborate with multi-functional professionals (UI/SDK developers, product managers, Design, etc.) to deliver solutions.
Participate in story mapping, daily stand-ups, retrospectives, and sprint planning/demos on a two-week cadence.
Work independently on delivering sophisticated functionality.
Fast prototype ideas and concepts, and research recent trends and technologies.
Communicate clearly with the team and management to define and achieve goals.
Mentor and grow junior team members.
What you will need to succeed:
B.S. in Computer Science or equivalent engineering degree
7+ years of experience crafting and developing web or software applications
Strong communication and teamwork skills, building positive relationships with internal and external customers
Dedication to teamwork, self-organization, and continuous improvement
Proven experience in backend development, with expertise in languages such as Java, Node.js or Python
Experience in running cloud infrastructure, including hands-on experience with AWS or Azure, Kubernetes, GitOps, Terraform, Docker, CI/CD
Experience in setting up SLAs/SLOs/SLIs for key services and establishing the monitoring around them
Experience in writing functional/integration/performance tests and test frameworks
Experience with both SQL and NoSQL
Experience with Kafka and Zookeeper is a plus
Experience with Mobile Application development is a plus
Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more about our vision here.
Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies