Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
7.0 years
40 Lacs
Raipur, Chhattisgarh, India
Remote
Experience: 7+ years
Salary: INR 4,000,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients, MatchMove.)

What do you need for this opportunity?

Must-have skills: AWS Q, CodeWhisperer, Gen AI, CI/CD, containerization, Go, microservices, RESTful APIs, MySQL, PHP, PostgreSQL

MatchMove is looking for:

As a Technical Lead (Backend), you will play a pivotal role in shaping the engineering foundation of a robust, real-time, cross-border payment platform. You will write clean, secure, and scalable Go services powering billions in financial flows, while championing engineering excellence and thoughtful platform design.

You will contribute to:
- Developing and scaling distributed payment transaction systems for cross-border and domestic remittance use cases.
- Designing resilient microservices in Go for high-volume, low-latency transaction flows with regional compliance and localization.
- Owning service-level metrics such as SLA adherence, latency (p95/p99), throughput, and availability.
- Building API-first products with strong documentation, mocks, and observability from day one.
- Enabling faster, safer development by leveraging Generative AI for test generation, documentation, and repetitive coding tasks, while maintaining engineering hygiene.
- Mentoring a high-performing, globally distributed engineering team and contributing to code reviews, design sessions, and cross-team collaboration.

Responsibilities:
- Lead design and development of backend services in Go with concurrency, memory safety, and observability in mind.
- Manage service uptime and reliability across multi-region deployments via dashboards, tracing, and alerting.
- Maintain strict SLAs for mission-critical payment operations and support incident response during SLA violations.
- Profile and optimize Go services using tools like pprof, benchstat, and the Go race detector.
- Drive code quality through test-driven development, code reviews, and API-first workflows (OpenAPI/Swagger).
- Collaborate cross-functionally with Product, QA, DevOps, Compliance, and Business to ensure production readiness.
- Maintain well-documented service boundaries and internal libraries for scalable engineering velocity.
- Encourage strategic use of Generative AI for API mocking, test data generation, schema validation, and static analysis.
- Advocate for clean architecture, technical-debt remediation, and security best practices (e.g., rate limiting, mTLS, context timeouts).

Requirements:
- At least 7 years of engineering experience with deep expertise in Go (Golang).
- Expert-level understanding of concurrency, goroutines, channels, synchronization primitives, and distributed coordination patterns.
- Strong grasp of profiling and debugging Go applications, memory management, and performance tuning.
- Proven experience instrumenting production systems for SLAs/SLIs with tools like Prometheus, Grafana, or OpenTelemetry.
- Solid experience with PostgreSQL/MySQL, schema design for high-consistency systems, and the transaction lifecycle in financial services.
- Experience building, documenting, and scaling RESTful APIs in an API-first platform environment.
- Comfort with cloud-native tooling, containerization, and DevOps workflows (CI/CD, blue-green deployment, rollback strategies).
- Demonstrated understanding of observability practices: structured logging, distributed tracing, and alerting workflows.

Brownie points:
- Experience in payments, card issuance, or remittance infrastructure.
- Working knowledge of PHP (for legacy systems).
- Contributions to Go open-source projects or public technical content.
- Experience with GenAI development tools like AWS Q and CodeWhisperer in a team setting.
- Track record of delivering high-quality services in regulated environments with audit, compliance, and security mandates.

Engagement Model:
- Direct placement with client
- This is a remote role
- Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
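For illustration, a minimal sketch of how the p95/p99 latency figures named above are typically computed from raw request samples using the nearest-rank method; the sample values and threshold are hypothetical.

```python
# Sketch: computing p95/p99 latency from request samples (hypothetical data).
# Nearest-rank method: sort the samples, take the value at index ceil(q * n) - 1.
import math

def percentile(samples_ms: list[float], q: float) -> float:
    """Nearest-rank percentile; q in (0, 1]."""
    ordered = sorted(samples_ms)
    rank = math.ceil(q * len(ordered))  # 1-based rank
    return ordered[rank - 1]

latencies_ms = [12.0, 15.2, 9.8, 101.5, 14.1, 18.7, 250.3, 13.3, 16.0, 11.2]
p95 = percentile(latencies_ms, 0.95)
p99 = percentile(latencies_ms, 0.99)
print(f"p95={p95} ms, p99={p99} ms")  # alert if p99 exceeds the SLA budget
```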
Posted 2 days ago
7.0 years
40 Lacs
Jamshedpur, Jharkhand, India
Remote
Experience: 7+ years
Salary: INR 4,000,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients, MatchMove.)

What do you need for this opportunity?

Must-have skills: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for:

As a Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema-drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs and Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance-tuning and cost-optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model:
- Direct placement with client
- This is a remote role
- Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
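For illustration, a minimal PySpark sketch of the kind of partitioned Iceberg write this listing describes, assuming a Spark session already configured with an Iceberg catalog registered as `lake` (for example, backed by the AWS Glue Catalog); the bucket, table, and column names are hypothetical.

```python
# Sketch: writing a batch DataFrame into a partitioned Apache Iceberg table.
# Assumes spark was launched with the Iceberg runtime and a catalog named "lake".
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-transactions").getOrCreate()

df = spark.read.json("s3://example-bucket/raw/transactions/")  # hypothetical path

(df.withColumn("txn_date", F.to_date("created_at"))
   .writeTo("lake.payments.transactions")   # catalog.db.table (hypothetical)
   .using("iceberg")
   .partitionedBy(F.col("txn_date"))        # day-level partitioning for pruning
   .createOrReplace())
```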
Posted 2 days ago
7.0 years
40 Lacs
Raipur, Chhattisgarh, India
Remote
Experience: 7+ years
Salary: INR 4,000,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients, MatchMove.)

What do you need for this opportunity?

Must-have skills: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for:

As a Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema-drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs and Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance-tuning and cost-optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model:
- Direct placement with client
- This is a remote role
- Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
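For illustration, a minimal sketch of the kind of data-quality gate that tools like Great Expectations formalize: row counts and null rates checked before a dataset is published; the source path and thresholds are hypothetical.

```python
# Sketch: a lightweight data-quality gate run before publishing a curated table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-check").getOrCreate()
df = spark.read.parquet("s3://example-bucket/curated/transactions/")  # hypothetical

total = df.count()
null_amounts = df.filter(F.col("amount").isNull()).count()
null_rate = null_amounts / total if total else 1.0

# Fail the pipeline run loudly rather than publishing bad data downstream.
assert total > 0, "empty extract: refusing to publish"
assert null_rate < 0.01, f"null rate {null_rate:.2%} breaches the 1% threshold"
print(f"dq ok: rows={total}, amount null rate={null_rate:.2%}")
```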
Posted 2 days ago
7.0 years
40 Lacs
Ranchi, Jharkhand, India
Remote
Experience: 7+ years
Salary: INR 4,000,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients, MatchMove.)

What do you need for this opportunity?

Must-have skills: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for:

As a Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema-drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs and Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance-tuning and cost-optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model:
- Direct placement with client
- This is a remote role
- Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
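For illustration, a minimal Structured Streaming sketch of the Kafka-to-S3 ingestion path this listing describes; the broker address, topic, and S3 paths are hypothetical, and the spark-sql-kafka connector is assumed to be on the classpath.

```python
# Sketch: a Structured Streaming read from Kafka landing into S3.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream-ingest").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker-1:9092")  # hypothetical broker
          .option("subscribe", "payment-events")               # hypothetical topic
          .option("startingOffsets", "latest")
          .load())

# Kafka rows expose key/value as binary; cast before transforming.
decoded = events.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

query = (decoded.writeStream
         .format("parquet")
         .option("path", "s3://example-bucket/landing/payment-events/")
         .option("checkpointLocation", "s3://example-bucket/checkpoints/payment-events/")
         .start())
query.awaitTermination()
```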
Posted 2 days ago
7.0 years
40 Lacs
Amritsar, Punjab, India
Remote
Experience: 7+ years
Salary: INR 4,000,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients, MatchMove.)

What do you need for this opportunity?

Must-have skills: AWS Q, CodeWhisperer, Gen AI, CI/CD, containerization, Go, microservices, RESTful APIs, MySQL, PHP, PostgreSQL

MatchMove is looking for:

As a Technical Lead (Backend), you will play a pivotal role in shaping the engineering foundation of a robust, real-time, cross-border payment platform. You will write clean, secure, and scalable Go services powering billions in financial flows, while championing engineering excellence and thoughtful platform design.

You will contribute to:
- Developing and scaling distributed payment transaction systems for cross-border and domestic remittance use cases.
- Designing resilient microservices in Go for high-volume, low-latency transaction flows with regional compliance and localization.
- Owning service-level metrics such as SLA adherence, latency (p95/p99), throughput, and availability.
- Building API-first products with strong documentation, mocks, and observability from day one.
- Enabling faster, safer development by leveraging Generative AI for test generation, documentation, and repetitive coding tasks, while maintaining engineering hygiene.
- Mentoring a high-performing, globally distributed engineering team and contributing to code reviews, design sessions, and cross-team collaboration.

Responsibilities:
- Lead design and development of backend services in Go with concurrency, memory safety, and observability in mind.
- Manage service uptime and reliability across multi-region deployments via dashboards, tracing, and alerting.
- Maintain strict SLAs for mission-critical payment operations and support incident response during SLA violations.
- Profile and optimize Go services using tools like pprof, benchstat, and the Go race detector.
- Drive code quality through test-driven development, code reviews, and API-first workflows (OpenAPI/Swagger).
- Collaborate cross-functionally with Product, QA, DevOps, Compliance, and Business to ensure production readiness.
- Maintain well-documented service boundaries and internal libraries for scalable engineering velocity.
- Encourage strategic use of Generative AI for API mocking, test data generation, schema validation, and static analysis.
- Advocate for clean architecture, technical-debt remediation, and security best practices (e.g., rate limiting, mTLS, context timeouts).

Requirements:
- At least 7 years of engineering experience with deep expertise in Go (Golang).
- Expert-level understanding of concurrency, goroutines, channels, synchronization primitives, and distributed coordination patterns.
- Strong grasp of profiling and debugging Go applications, memory management, and performance tuning.
- Proven experience instrumenting production systems for SLAs/SLIs with tools like Prometheus, Grafana, or OpenTelemetry.
- Solid experience with PostgreSQL/MySQL, schema design for high-consistency systems, and the transaction lifecycle in financial services.
- Experience building, documenting, and scaling RESTful APIs in an API-first platform environment.
- Comfort with cloud-native tooling, containerization, and DevOps workflows (CI/CD, blue-green deployment, rollback strategies).
- Demonstrated understanding of observability practices: structured logging, distributed tracing, and alerting workflows.

Brownie points:
- Experience in payments, card issuance, or remittance infrastructure.
- Working knowledge of PHP (for legacy systems).
- Contributions to Go open-source projects or public technical content.
- Experience with GenAI development tools like AWS Q and CodeWhisperer in a team setting.
- Track record of delivering high-quality services in regulated environments with audit, compliance, and security mandates.

Engagement Model:
- Direct placement with client
- This is a remote role
- Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
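For illustration, a minimal sketch of the token-bucket pattern behind the rate limiting named above as a security practice; the capacity and refill rate are hypothetical.

```python
# Sketch: a token-bucket rate limiter. Tokens refill continuously over time,
# capped at capacity; each request spends one token or is throttled.
import time

class TokenBucket:
    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=10, refill_per_sec=5)  # ~5 requests/sec sustained
for i in range(12):
    print(i, "allowed" if bucket.allow() else "throttled (HTTP 429)")
```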
Posted 2 days ago
7.0 years
40 Lacs
Amritsar, Punjab, India
Remote
Experience: 7+ years
Salary: INR 4,000,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients, MatchMove.)

What do you need for this opportunity?

Must-have skills: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for:

As a Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema-drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs and Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance-tuning and cost-optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model:
- Direct placement with client
- This is a remote role
- Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
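For illustration, a minimal boto3 sketch of running an Athena query over a Glue-cataloged lake and polling for completion; the database, query, and output location are hypothetical.

```python
# Sketch: start an Athena query and poll its execution state.
import time
import boto3

athena = boto3.client("athena")

run = athena.start_query_execution(
    QueryString="SELECT txn_date, COUNT(*) FROM transactions GROUP BY txn_date",
    QueryExecutionContext={"Database": "payments"},  # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
qid = run["QueryExecutionId"]

while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)
print("query finished with state:", state)
```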
Posted 2 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Key Responsibilities:
- Design, implement, and manage CI/CD pipelines for rapid and reliable product delivery
- Automate infrastructure provisioning using tools like Terraform, CloudFormation, or Ansible
- Monitor and maintain system performance, reliability, and scalability
- Manage cloud infrastructure (AWS, Azure, or GCP) with a focus on cost, performance, and security
- Implement and maintain logging, monitoring, and alerting solutions (e.g., Prometheus, Grafana, ELK, Datadog)
- Ensure infrastructure security best practices, including secrets management, access controls, and compliance
- Collaborate with development teams to ensure DevOps best practices are followed across the lifecycle
- Troubleshoot production issues and lead root cause analysis
- Support containerization and orchestration using Docker and Kubernetes

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 5+ years of experience in a DevOps, SRE, or similar role in a product-focused environment
- Proficiency with CI/CD tools such as Jenkins, GitLab CI, CircleCI, etc.
- Strong experience with AWS, Azure, or Google Cloud
- Hands-on experience with infrastructure as code (Terraform, Ansible, etc.)
- Solid understanding of containerization (Docker) and orchestration (Kubernetes)
- Experience with scripting languages (Bash, Python, etc.)
- Knowledge of monitoring/logging tools like ELK, Prometheus, Grafana, or Datadog
- Strong communication and collaboration skills
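For illustration, a minimal sketch of the kind of synthetic health probe a monitoring and alerting stack is built around: hit an endpoint, measure latency, and flag a budget breach; the URL and threshold are hypothetical.

```python
# Sketch: a synthetic endpoint probe with a latency budget (stdlib only).
import time
import urllib.request

URL = "https://service.example.com/healthz"  # hypothetical endpoint
LATENCY_BUDGET_S = 0.5

start = time.monotonic()
try:
    with urllib.request.urlopen(URL, timeout=5) as resp:
        elapsed = time.monotonic() - start
        healthy = resp.status == 200 and elapsed <= LATENCY_BUDGET_S
        print(f"status={resp.status} latency={elapsed:.3f}s healthy={healthy}")
except OSError as exc:  # covers timeouts and connection errors
    print(f"probe failed: {exc}")  # a real agent would page via alerting here
```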
Posted 2 days ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
This role is for one of Weekday's clients.
Salary range: INR 12,00,000 - 24,00,000 (i.e., INR 12-24 LPA)
Min Experience: 6 years
Location: Hyderabad
Job Type: Full-time

We are looking for a seasoned Azure DevOps Engineer to lead the design, implementation, and management of DevOps practices within the Microsoft Azure ecosystem. The ideal candidate will bring deep expertise in automation, CI/CD pipelines, infrastructure as code (IaC), cloud-native tools, and security best practices. This position will collaborate closely with cross-functional teams to drive efficient, secure, and scalable DevOps workflows.

Key Responsibilities:

DevOps & CI/CD Implementation
- Build and maintain scalable CI/CD pipelines using Azure DevOps, GitHub Actions, or Jenkins.
- Automate software build, testing, and deployment processes to improve release cycles.
- Integrate automated testing, security scanning, and code quality checks into the pipeline.

Infrastructure as Code (IaC) & Cloud Automation
- Develop and maintain IaC templates using Terraform, Bicep, or ARM templates.
- Automate infrastructure provisioning, scaling, and monitoring across Azure environments.
- Ensure cloud cost optimization and resource efficiency.

Monitoring, Logging & Security
- Configure monitoring tools like Azure Monitor, App Insights, and Log Analytics.
- Apply Azure security best practices in CI/CD workflows and cloud architecture.
- Implement RBAC and Key Vault usage, and ensure policy and compliance adherence.

Collaboration & Continuous Improvement
- Work with development, QA, and IT teams to enhance DevOps processes and workflows.
- Identify and resolve bottlenecks in deployment and infrastructure automation.
- Stay informed about industry trends and the latest features in Azure DevOps and IaC tooling.

Required Skills & Experience:
- 5-7 years of hands-on experience in Azure DevOps and cloud automation
- Strong knowledge of:
- Azure DevOps Services (Pipelines, Repos, Boards, Artifacts, Test Plans)
- CI/CD tools: YAML Pipelines, GitHub Actions, Jenkins
- Version control: Git (Azure Repos, GitHub, Bitbucket)
- IaC: Terraform, Bicep, ARM templates
- Containerization & orchestration: Docker, Kubernetes (AKS)
- Monitoring: Azure Monitor, App Insights, Prometheus, Grafana
- Security: Azure Security Center, RBAC, Key Vault, compliance policy management
- Familiarity with configuration management tools like Ansible, Puppet, or Chef (optional)
- Strong analytical and troubleshooting skills
- Excellent communication skills and ability to work in Agile/Scrum environments

Preferred Certifications:
- Microsoft Certified: Azure DevOps Engineer Expert (AZ-400)
- Microsoft Certified: Azure Administrator Associate (AZ-104)
- Certified Kubernetes Administrator (CKA), optional

Skills: Azure | DevOps | CI/CD | GitHub Actions | Terraform | Infrastructure as Code | Kubernetes | Docker | Monitoring | Cloud Security
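For illustration, a minimal sketch of reading a secret from Azure Key Vault with the azure-identity and azure-keyvault-secrets SDKs, in line with the RBAC and Key Vault bullet above; the vault URL and secret name are hypothetical.

```python
# Sketch: fetch a secret from Azure Key Vault instead of hard-coding it.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()  # picks up managed identity, CLI login, etc.
client = SecretClient(
    vault_url="https://example-vault.vault.azure.net",  # hypothetical vault
    credential=credential,
)

secret = client.get_secret("db-connection-string")  # hypothetical secret name
# Never log the value itself; use it to build the connection at runtime.
print("fetched secret:", secret.name)
```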
Posted 2 days ago
0 years
0 Lacs
India
On-site
- OpenShift and Kubernetes Cluster Management: installation, configuration, and maintenance of OpenShift clusters, including the control plane, worker nodes, and networking.
- Containerized Application Deployment: deploying and managing containerized applications within OpenShift, including image building, registry management, and application lifecycle management.
- CI/CD Pipeline Implementation: setting up and managing continuous integration and continuous delivery (CI/CD) pipelines for automating application deployments.
- Infrastructure as Code (IaC): using tools like Ansible, Terraform, or similar to automate infrastructure provisioning and management.
- Troubleshooting and Performance Optimization: diagnosing and resolving issues related to OpenShift performance, stability, and security.
- Security: implementing and enforcing security best practices for OpenShift clusters and containerized applications.
- Automation: developing and maintaining automation scripts to streamline OpenShift operations.
- Monitoring and Logging: setting up and maintaining monitoring and logging systems to track OpenShift cluster and application health.

Skills and Experience:
- OpenShift and Kubernetes: extensive hands-on experience with OpenShift, including installation, configuration, troubleshooting, and optimization.
- Containerization: understanding of containerization technologies (e.g., Docker), container registries, and container orchestration.
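For illustration, a minimal sketch of a cluster health probe using the official Kubernetes Python client, which works against OpenShift's Kubernetes-compatible core APIs; the namespace is hypothetical.

```python
# Sketch: flag any pod in a namespace that is not in the Running phase.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod(namespace="payments")  # hypothetical namespace
for pod in pods.items:
    phase = pod.status.phase
    if phase != "Running":
        # In an operations script this would feed alerting rather than print.
        print(f"pod {pod.metadata.name} is {phase}")
```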
Posted 2 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for a Service Writer to join our team and act as our liaison for customers to address their vehicle repair needs. A Service Writer's responsibilities include documenting the repairs needed and scheduling appropriate technicians for each job in our computer system. Ultimately, you will ensure that the needs of our customers are met, coordinate transactions, and estimate both time and costs to ensure everything runs smoothly for our customers.

Responsibilities
- Develop cost estimates, logging needed parts and the time needed for repairs
- Schedule the most appropriate Service Technician for each job
- Convey all necessary information regarding costs, parts, work, and Technicians to customers
- Call the customer to arrange appointments
- Meet with customers to discuss their requirements and relay those requirements to the Service Technicians
- Contact customers in the case of additional work to relay the details and extra costs
- Enter the details of repair jobs on the company's network and prepare repair instructions

This job is provided by Shine.com
Posted 2 days ago
0 years
0 Lacs
Greater Kolkata Area
On-site
We are looking for a Service Writer to join our team and act as our liaison for customers to address their vehicle repair needs. A Service Writer's responsibilities include documenting the repairs needed and scheduling appropriate technicians for each job in our computer system. Ultimately, you will ensure that the needs of our customers are met, coordinate transactions, and estimate both time and costs to ensure everything runs smoothly for our customers.

Responsibilities
- Develop cost estimates, logging needed parts and the time needed for repairs
- Schedule the most appropriate Service Technician for each job
- Convey all necessary information regarding costs, parts, work, and Technicians to customers
- Call the customer to arrange appointments
- Meet with customers to discuss their requirements and relay those requirements to the Service Technicians
- Contact customers in the case of additional work to relay the details and extra costs
- Enter the details of repair jobs on the company's network and prepare repair instructions

This job is provided by Shine.com
Posted 2 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Requirements:

Technical Skills:
- Proficiency in OpenShift, Argo CD, Helm charts, and shared libraries.
- Experience with CI/CD tools like Jenkins, GitLab CI/CD, or similar platforms.
- Experience with Platform as a Service using tools like AWS Elastic Beanstalk, Google Cloud App Engine, Azure App Service, or Heroku.
- Familiarity with monitoring and logging tools such as Prometheus, Grafana, ELK Stack, or similar platforms.
- Strong understanding of Kubernetes concepts and best practices.

Soft Skills:
- Excellent problem-solving and troubleshooting skills.
- Strong communication and collaboration abilities.
- Ability to work independently and as part of a team.
- Continuous learning mindset and willingness to stay updated with the latest DevOps trends and technologies.
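For illustration, a minimal sketch of querying the Prometheus HTTP API for an error-rate signal of the kind Grafana would chart; the server URL and PromQL expression are hypothetical.

```python
# Sketch: instant query against the Prometheus HTTP API (stdlib only).
import json
import urllib.parse
import urllib.request

PROM_URL = "http://prometheus.example.com:9090"  # hypothetical server
query = 'sum(rate(http_requests_total{status=~"5.."}[5m]))'  # hypothetical metric

url = f"{PROM_URL}/api/v1/query?" + urllib.parse.urlencode({"query": query})
with urllib.request.urlopen(url, timeout=10) as resp:
    payload = json.load(resp)

for sample in payload["data"]["result"]:
    print("5xx rate:", sample["value"][1])  # value is [timestamp, value-as-string]
```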
Posted 2 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
RPA Developer (Automation Anywhere)
Experience: 6 to 10 years
Location: Pan India

- Design, code, test, and deploy automation workflows using Automation Anywhere (AA), verifying and leveraging appropriate AA components.
- Provide solution designs to customers throughout deployment, during POCs and project implementation phases.
- Make changes to the robot code during implementation as needed.
- Take responsibility for the overall testing cycles; deliver technical artifacts and demos and provide necessary support for new and existing customers.
- Deploy RPA components including bots, robots, development tools, code repositories, and logging tools.
- Design and develop using the latest RPA versions (AA), policies, and rules based on business requirements.
- Perform code reviews and assist developers in overcoming technical roadblocks.
- Support full life-cycle implementation of the RPA program, including RPA development, QA, integration, and production deployment.
- Develop knowledge, understanding, and experience managing application development and applying best-practice guidelines throughout the software development life cycle.
- Manage day-to-day system development, implementation, and configuration activities of RPA.

As part of career progression, you should eventually be able to do the following:
- Work with business owners and architects to identify automation opportunities.
- Be a highly driven, autonomous, resilient team player with a strong work ethic.
- Be strong in requirement gathering and analysis (able to work with a structured and methodical approach combined with an inquiring mind).
- Develop RPA prototypes and proofs of concept.
- Prepare PDDs/SDDs (Process/Solution Design Documents) for identified business processes.
- Take responsibility for technical design, build, and deployment of end-to-end automation of business processes.
- Build RPA bots on the said platform as per the applicable standards.
- Aim to produce top-class RPA bots handling errors, exceptions, and success-path scenarios.
- Ensure an estimation tracker is created and adhere to the said standards.
- Publish day-to-day progress reports to the Manager.
- Conduct peer reviews and code reviews and buddy-sit new developers.

Requirements:
- Strong automation focus with sound technical knowledge of Automation Anywhere.
- Degree in Computer Science or relevant experience.
- Proven experience as a Developer in Automation Anywhere: 1 to 3 years.
- Advanced and Master Developer Certification in Automation Anywhere, preferably.
- At least one year of experience in Automation Anywhere: mandatory.
- Very good knowledge of Automation Anywhere products, their architecture, and the ecosystem (Discovery Bot, Control Room, Runner, Bot Store, Bot Creator, IQ Bot, etc.).
- Good working experience with AA automations such as Web, Email, PDF, API, MS Office, and IQ Bot: mandatory.
- Experience with analysis, development, deployment, and system testing, including UAT and bug fixes.
- Strong problem-solving and analytical skills.
Posted 2 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are looking for a Service Writer to join our team and act as our liaison for customers to address their vehicle repair needs. A Service Writer's responsibilities include documenting the repairs needed and scheduling appropriate technicians for each job in our computer system. Ultimately, you will ensure that the needs of our customers are met, coordinate transactions, and estimate both time and costs to ensure everything runs smoothly for our customers.

Responsibilities
- Develop cost estimates, logging needed parts and the time needed for repairs
- Schedule the most appropriate Service Technician for each job
- Convey all necessary information regarding costs, parts, work, and Technicians to customers
- Call the customer to arrange appointments
- Meet with customers to discuss their requirements and relay those requirements to the Service Technicians
- Contact customers in the case of additional work to relay the details and extra costs
- Enter the details of repair jobs on the company's network and prepare repair instructions

This job is provided by Shine.com
Posted 2 days ago
0 years
0 Lacs
Tikamgarh, Madhya Pradesh, India
On-site
We are looking for a Service Writer to join our team and act as our liaison for customers to address their vehicle repair needs. A Service Writer's responsibilities include documenting the repairs needed and scheduling appropriate technicians for each job in our computer system. Ultimately, you will ensure that the needs of our customers are met, coordinate transactions, and estimate both time and costs to ensure everything runs smoothly for our customers.

Responsibilities
- Develop cost estimates, logging needed parts and the time needed for repairs
- Schedule the most appropriate Service Technician for each job
- Convey all necessary information regarding costs, parts, work, and Technicians to customers
- Call the customer to arrange appointments
- Meet with customers to discuss their requirements and relay those requirements to the Service Technicians
- Contact customers in the case of additional work to relay the details and extra costs
- Enter the details of repair jobs on the company's network and prepare repair instructions

This job is provided by Shine.com
Posted 2 days ago
0 years
0 Lacs
Delhi, India
On-site
We are looking for a Service Writer to join our team and act as our liaison for customers to address their vehicle repair needs. A Service Writer's responsibilities include documenting the repairs needed and scheduling appropriate technicians for each job in our computer system. Ultimately, you will ensure that the needs of our customers are met, coordinate transactions, and estimate both time and costs to ensure everything runs smoothly for our customers.

Responsibilities
- Develop cost estimates, logging needed parts and the time needed for repairs
- Schedule the most appropriate Service Technician for each job
- Convey all necessary information regarding costs, parts, work, and Technicians to customers
- Call the customer to arrange appointments
- Meet with customers to discuss their requirements and relay those requirements to the Service Technicians
- Contact customers in the case of additional work to relay the details and extra costs
- Enter the details of repair jobs on the company's network and prepare repair instructions

This job is provided by Shine.com
Posted 2 days ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
We are looking for a Service Writer to join our team and act as our liaison for customers to address their vehicle repair needs. A Service Writer's responsibilities include documenting the repairs needed and scheduling appropriate technicians for each job in our computer system. Ultimately, you will ensure that the needs of our customers are met, coordinate transactions, and estimate both time and costs to ensure everything runs smoothly for our customers.

Responsibilities
- Develop cost estimates, logging needed parts and the time needed for repairs
- Schedule the most appropriate Service Technician for each job
- Convey all necessary information regarding costs, parts, work, and Technicians to customers
- Call the customer to arrange appointments
- Meet with customers to discuss their requirements and relay those requirements to the Service Technicians
- Contact customers in the case of additional work to relay the details and extra costs
- Enter the details of repair jobs on the company's network and prepare repair instructions

This job is provided by Shine.com
Posted 2 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are looking for a Service Writer to join our team and act as our liaison for customers to address their vehicle repair needs. A Service Writer's responsibilities include documenting the repairs needed and scheduling appropriate technicians for each job in our computer system. Ultimately, you will ensure that the needs of our customers are met, coordinate transactions, and estimate both time and costs to ensure everything runs smoothly for our customers.

Responsibilities
- Develop cost estimates, logging needed parts and the time needed for repairs
- Schedule the most appropriate Service Technician for each job
- Convey all necessary information regarding costs, parts, work, and Technicians to customers
- Call the customer to arrange appointments
- Meet with customers to discuss their requirements and relay those requirements to the Service Technicians
- Contact customers in the case of additional work to relay the details and extra costs
- Enter the details of repair jobs on the company's network and prepare repair instructions

This job is provided by Shine.com
Posted 2 days ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role: Site Reliability Engineer (SRE)
Location: Pune/Chennai/Hyderabad
Experience: 5-8 years

Responsibilities
- System Reliability and Performance: monitor and improve system reliability, availability, and performance through proactive measures and effective incident response.
- Automation: develop and implement automation scripts and tools to optimize various aspects of our infrastructure, including deployments, monitoring, and scaling.
- Incident Management: lead incident response efforts to quickly diagnose and resolve issues, minimizing impact on service delivery and ensuring robust post-incident reviews.
- Monitoring and Logging: design and maintain comprehensive monitoring and logging solutions to provide visibility into system health and performance.
- CI/CD Pipeline: build and manage continuous integration and continuous deployment pipelines to ensure efficient and reliable software releases.
- Capacity Planning: conduct capacity planning and scaling exercises to accommodate growth and ensure optimal system performance.
- Collaborative Development: work closely with development and operations teams to promote best practices, share knowledge, and foster a culture of continuous improvement.
- Security: implement security best practices to protect systems and data, ensuring compliance with industry standards and regulations.
- Documentation: create and maintain comprehensive documentation of systems, processes, and procedures for internal use and knowledge sharing.

Qualifications
- Experience: 5 to 7 years of experience in Site Reliability Engineering, DevOps, or a related field.
- Technical Skills: proficiency in scripting languages such as Python, Bash, or Perl; experience with automation tools like Ansible, Terraform, or Puppet.
- Cloud Expertise: hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud; strong knowledge of containerization technologies like Docker and Kubernetes.
- Monitoring Tools: familiarity with monitoring and observability tools like Prometheus, Grafana, ELK Stack, or Splunk.
- CI/CD Proficiency: demonstrated ability to create and manage CI/CD pipelines using GitLab, Jenkins, or similar tools.
- Problem-Solving: excellent analytical and problem-solving skills with a proactive approach to identifying and addressing issues.
- Adaptability: ability to adapt to rapidly changing environments and new technologies while maintaining a focus on reliability and performance.

Kindly attach your updated resume and share the information below at Nikhil.Singh@LTIMindtree.com:
- Current location
- Open to relocating to Pune/Chennai/Hyderabad?
- Current CTC
- Expected CTC
- Notice period (LWD if serving)
- Years of experience
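For illustration, a minimal sketch of the proactive check-and-alert loop an SRE automation script implements, here sampling the host load average with the standard library (Unix only); the threshold and interval are hypothetical.

```python
# Sketch: periodic host-load check that emits an alert event on breach.
import os
import time

LOAD_THRESHOLD = 4.0   # hypothetical 1-minute load-average budget
CHECK_INTERVAL_S = 30

for _ in range(3):  # a real agent would loop forever under a supervisor
    load_1m, _, _ = os.getloadavg()  # Unix-only call
    if load_1m > LOAD_THRESHOLD:
        print(f"ALERT: 1m load {load_1m:.2f} exceeds {LOAD_THRESHOLD}")
    else:
        print(f"ok: 1m load {load_1m:.2f}")
    time.sleep(CHECK_INTERVAL_S)
```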
Posted 2 days ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
We are seeking a highly experienced AWS Data Solution Architect to lead the design and implementation of scalable, secure, and high-performance data architectures on the AWS cloud. The ideal candidate will have a deep understanding of cloud-based data platforms, analytics, and best practices for optimizing data pipelines and storage. You will work closely with data engineers, business stakeholders, and cloud architects to deliver robust data solutions.

Key Responsibilities:
1. Architecture Design and Planning: Design scalable and resilient data architectures on AWS that include data lakes, data warehouses, and real-time processing. Architect end-to-end data solutions leveraging AWS services such as S3, Redshift, RDS, DynamoDB, Glue, and Lake Formation. Develop multi-layered security frameworks for data protection and governance.
2. Data Pipeline Development: Build and optimize ETL/ELT pipelines using AWS Glue, Data Pipeline, and Lambda. Integrate data from various sources such as RDBMS, NoSQL, APIs, and streaming platforms. Ensure high availability and real-time processing capabilities for mission-critical applications.
3. Data Warehousing and Analytics: Design and optimize data warehouses using Amazon Redshift or Snowflake. Implement data modeling, partitioning, and indexing for optimal performance. Create analytical models to drive business insights and data-driven decision-making.
4. Real-time Data Processing: Implement real-time data processing using AWS Kinesis, Kafka, or MSK. Architect solutions for event-driven architectures with Lambda and EventBridge.
5. Security and Compliance: Implement best practices for data security, encryption, and access control using IAM, KMS, and Lake Formation. Ensure compliance with regulatory standards such as GDPR, HIPAA, and CCPA.
6. Monitoring and Optimization: Monitor performance, optimize costs, and enhance the reliability of data pipelines and storage. Set up observability with AWS CloudWatch, X-Ray, and CloudTrail. Troubleshoot issues and ensure business continuity with automated recovery mechanisms.
7. Documentation and Best Practices: Create detailed architecture diagrams, data flow mappings, and documentation for reference. Establish best practices for data governance, architecture design, and deployment.
8. Collaboration and Leadership: Work closely with data engineers, application developers, and DevOps teams to ensure seamless integration. Act as a technical advisor to business stakeholders for cloud-based data solutions.

Regulatory Compliance Reporting Experience:
The architect should be able to resolve complex challenges arising from the strict regulatory environment in India and the need to balance compliance with operational efficiency. Key complexities include:
a) Building data segregation and access control capability: this requires an in-depth understanding of data privacy laws, Amazon's global data architecture, and the ability to design systems that can segregate and control access to sensitive payment data without compromising functionality.
b) Integrating diverse data sources into a Secure Redshift Cluster (SRC), which involves working with multiple teams and systems, each with its own data structure and transfer protocols.
c) Instrumenting additional UPI data elements, which requires collaborating with UPI tech teams and a deep understanding of UPI transaction flows to ensure accurate and compliant data capture.
d) Automating Law Enforcement Agency (LEA) and Financial Intelligence Unit (FIU) reporting: this involves creating secure, automated pipelines for highly sensitive data, ensuring accuracy and timeliness while meeting strict regulatory requirements.

The architect will also be extending India-specific solutions to serve worldwide markets. Complexities include:
a) Designing a unified data storage and compute architecture, which requires harmonizing diverse tech stacks and data logging practices across multiple countries while considering data sovereignty laws and the cost implications of cross-border data transfers.
b) Setting up comprehensive datamarts covering metrics and dimensions, which involves standardizing metric definitions across markets, ensuring data consistency, and designing for scalability to accommodate future growth.
c) Enabling customer segmentation across power-up programs, which requires integrating data from diverse programs while maintaining data integrity and respecting country-specific data usage regulations.
d) Managing time zone challenges: synchronizing data across multiple time zones requires innovative solutions to ensure timely data availability without compromising completeness or accuracy.
e) Navigating regulatory complexities: designing systems that comply with varying and evolving data regulations across multiple countries while maintaining operational efficiency and flexibility for future changes.
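To make the pipeline-development and real-time bullets concrete, here is a minimal sketch of one common ingestion step of the kind described in sections 2 and 4: an AWS Lambda handler that decodes Kinesis records and lands them in the raw zone of an S3 data lake via boto3. The bucket name, key scheme, and assumption of JSON payloads are hypothetical placeholders, not details from this posting.

# Lambda handler for a Kinesis event source; writes decoded records to S3.
import base64
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-data-lake-raw"  # hypothetical bucket name

def handler(event, context):
    """Decode incoming Kinesis records and land them in the raw zone."""
    records = []
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])  # Kinesis data is base64
        records.append(json.loads(payload))  # assumes JSON payloads
    # Keying by the batch's first eventID keeps re-drives idempotent (illustrative scheme).
    key = f"raw/{event['Records'][0]['eventID']}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(records).encode("utf-8"))
    return {"processed": len(records)}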
Posted 2 days ago
12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the Role:
We are seeking an accomplished and visionary DevOps Leader to spearhead our entire DevOps function. In this pivotal role, you will be the strategic architect and technical authority, responsible for guiding the evolution and optimization of our infrastructure, operations, and deployment practices. You will lead the DevOps team, ensuring our systems are highly available, scalable, secure, fault-tolerant, and cost-efficient. This position demands a blend of deep technical expertise, exceptional leadership, and a commitment to fostering a culture of operational excellence across the engineering organization.

Key Responsibilities:
Strategic DevOps Leadership & Architecture: Lead the DevOps organization, taking full ownership of the architecture of the DevOps infrastructure and the technical leadership of the team. Define, communicate, and execute the long-term DevOps strategy, roadmap, and vision, aligning it directly with broader business and engineering objectives. Drive the adoption of cutting-edge practices in infrastructure as code, continuous integration/delivery, and site reliability engineering.
Platform Operations & Reliability Engineering: Deploy, manage, and operate scalable, highly available, fault-tolerant, and cost-optimized systems in a dynamic production environment. Set up and champion the Application Monitoring Framework, establishing robust logging, alerting, and performance monitoring best practices and processes across all engineering teams. Oversee incident response, root cause analysis, and proactive measures to ensure maximum uptime and system health.
Platform Security & Compliance Management: Manage platform security and compliance, ensuring the entire platform consistently meets the latest security and compliance requirements. Proactively identify vulnerabilities and critical business risks within our infrastructure and applications. Collaborate strategically with engineering teams to plan and drive the timely closure of all identified security and compliance gaps. Implement and enforce security-first principles throughout the operational lifecycle.
Team Leadership & Development: Recruit, mentor, coach, and develop a high-performing team of DevOps/SRE engineers, cultivating a culture of innovation, continuous learning, and shared ownership. Provide clear direction, set performance expectations, and foster career growth for team members.
Technical Vendor Management & Negotiation: Manage critical technology vendor relationships, including cloud providers, SaaS tools, and specialized services. Lead engagement with vendors during critical issues, driving accountability against defined Service Level Agreements (SLAs) to ensure optimal performance and support.

Qualifications:
Bachelor's or Master's degree in Computer Science or Engineering.
12+ years of progressive experience in DevOps or infrastructure roles, with at least 5 years in a leadership position managing and mentoring DevOps teams.
Proven track record of architecting, deploying, and managing highly scalable, secure, and resilient cloud-native infrastructure (specifically AWS).
Expert-level proficiency in CI/CD methodologies and tools (e.g., Jenkins, GitLab CI, ArgoCD, Spinnaker).
Deep expertise with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible.
Extensive experience with containerization (Docker) and container orchestration platforms (Kubernetes).
Strong background in setting up and leveraging comprehensive observability stacks (monitoring, logging, tracing; e.g., Prometheus, Grafana, ELK Stack, Datadog).
Demonstrated ability to implement and enforce robust security practices and manage compliance frameworks (e.g., ISO 27001, SOC 2).
Strong experience in vendor management, including contract negotiations and driving SLA adherence.
Exceptional leadership, strategic thinking, and problem-solving abilities.
Excellent communication, interpersonal, and stakeholder management skills, with the ability to influence technical and non-technical audiences.

Preferred Qualifications:
Experience in a high-growth, fast-paced SaaS or logistics technology environment.
Relevant industry certifications (e.g., AWS Certified DevOps Engineer - Professional, Certified Kubernetes Administrator).
Experience with advanced networking concepts, distributed systems, and microservices architectures.
Proficiency in programming/scripting languages such as Python, Go, or Java for automation and tooling development.
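By way of illustration, a blue-green cutover in the kind of CI/CD pipeline this role oversees typically gates on an automated smoke test. The sketch below is a minimal, hypothetical example: the health endpoint URL, retry budget, and pipeline contract (non-zero exit aborts the cutover) are assumptions, not details from this posting.

# Post-deployment smoke test run by a pipeline (GitLab CI, Jenkins, etc.)
# before shifting traffic from the blue stack to the green stack.
import sys
import time
import requests

HEALTH_URL = "https://green.example.internal/healthz"  # hypothetical green-stack endpoint
RETRIES = 5
BACKOFF_SECONDS = 3

def healthy() -> bool:
    """Poll the health endpoint, tolerating transient failures."""
    for _ in range(RETRIES):
        try:
            if requests.get(HEALTH_URL, timeout=5).status_code == 200:
                return True
        except requests.RequestException:
            pass  # transient network failure; retry after backoff
        time.sleep(BACKOFF_SECONDS)
    return False

if __name__ == "__main__":
    # Non-zero exit signals the pipeline to abort the cutover and keep blue live.
    sys.exit(0 if healthy() else 1)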
Posted 2 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are looking for a Service Writer to join our team and act as our liaison for customers to address their vehicle repair needs. A Service Writer's responsibilities include documenting the repairs needed and scheduling appropriate technicians for each job in our computer system. Ultimately, you will ensure that the needs of our customers are met, coordinate transactions, and estimate both time and costs to ensure everything runs smoothly for our customers.

Responsibilities
Develop cost estimates, logging needed parts and the time needed for repairs.
Schedule the most appropriate Service Technician for each job.
Convey all necessary information regarding costs, parts, work, and Technicians to customers.
Call the customer to arrange appointments.
Meet with customers to discuss their requirements and relay those requirements to the Service Technicians.
Contact customers in the case of additional work to relay the details and extra costs.
Enter the details of repair jobs on the company's network and prepare repair instructions.

This job is provided by Shine.com
Posted 2 days ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are looking for a Service Writer to join our team and act as our liaison for customers to address their vehicle repair needs. A Service Writer's responsibilities include documenting the repairs needed and scheduling appropriate technicians for each job in our computer system. Ultimately, you will ensure that the needs of our customers are met, coordinate transactions, and estimate both time and costs to ensure everything runs smoothly for our customers.

Responsibilities
Develop cost estimates, logging needed parts and the time needed for repairs.
Schedule the most appropriate Service Technician for each job.
Convey all necessary information regarding costs, parts, work, and Technicians to customers.
Call the customer to arrange appointments.
Meet with customers to discuss their requirements and relay those requirements to the Service Technicians.
Contact customers in the case of additional work to relay the details and extra costs.
Enter the details of repair jobs on the company's network and prepare repair instructions.

This job is provided by Shine.com
Posted 2 days ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
AWS - Software Engineer III
Years of Experience: 9-13
Location: Chennai (work from office)

Work Experience
Experience working on RESTful Web Services, Microservices, Java Spring Boot, and ReactJS.
Experience building Web/Mobile applications, both UI and backend (full-stack developer).
6+ years of consulting experience in AWS: application setup, monitoring, setting up alerts, logging, tuning, and so forth.
Able to work as a junior-level AWS Architect.
Exposure to other cloud platforms like Azure and SAP BTP.
Experience working in environments using Agile (SCRUM) and Test-Driven Development (TDD) methodologies.
Experience with building CI/CD pipelines using GitLab (DevOps role).

Certifications
Nice to have at least one AWS certification.
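As a concrete illustration of the "setting up alerts" experience this posting asks for, the sketch below creates a CloudWatch alarm on load-balancer 5xx counts via boto3. The alarm name, load-balancer dimension value, and threshold are placeholder assumptions, not values from the posting.

# Create a CloudWatch alarm that fires when 5xx errors stay elevated.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="app-5xx-errors-high",  # hypothetical alarm name
    Namespace="AWS/ApplicationELB",
    MetricName="HTTPCode_Target_5XX_Count",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/example-alb/0123456789abcdef"}],
    Statistic="Sum",
    Period=300,               # evaluate over 5-minute windows
    EvaluationPeriods=2,      # require two consecutive breaching windows
    Threshold=50,             # assumed error budget for the window
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",  # quiet periods should not page anyone
)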
Posted 2 days ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Azure - Software Engineer III
Years of Experience: 9-13
Location: Chennai (work from office)

Work Experience
Experience working on RESTful Web Services, Microservices, .NET, ReactJS, and Node.js (secondary skill).
Experience building Web/Mobile applications, both UI and backend (full-stack developer).
6+ years of consulting experience in Microsoft Azure: application setup, monitoring, setting up alerts, logging, tuning, and so forth.
Able to work as a junior-level Azure Architect.
Exposure to other cloud platforms like AWS and SAP BTP.
Experience working in environments using Agile (SCRUM) and Test-Driven Development (TDD) methodologies.
Experience with building CI/CD pipelines using GitLab (DevOps role).

Certifications
Nice to have at least one Microsoft Azure certification.
Posted 2 days ago
The logging job market in India is vibrant and offers a wide range of opportunities for job seekers interested in this field. Logging professionals are in demand across various industries such as IT, construction, forestry, and environmental management. If you are considering a career in logging, this article will provide you with valuable insights into the job market, salary range, career progression, related skills, and common interview questions.
India's major industrial cities are known for thriving industries where logging professionals are actively recruited.
The average salary range for logging professionals in India varies based on experience and expertise. Entry-level positions typically start at INR 3-5 lakhs per annum, while experienced professionals can earn upwards of INR 10-15 lakhs per annum.
A typical career path in logging may include roles such as Logging Engineer, Logging Supervisor, Logging Manager, and Logging Director. Professionals may progress from entry-level positions to more senior roles such as Lead Logging Engineer or Logging Consultant.
In addition to logging expertise, employers often look for professionals with skills such as data analysis, problem-solving, project management, and communication skills. Knowledge of industry-specific software and tools may also be beneficial.
As you embark on your journey to explore logging jobs in India, remember to prepare thoroughly for interviews by honing your technical skills and understanding industry best practices. With the right preparation and confidence, you can land a rewarding career in logging that aligns with your professional goals. Good luck!