Technical Lead - Data Platform · 7.0 years · ₹40 Lacs/year · Kolkata, West Bengal, India · Remote
Experience: 7.00+ years
Salary: INR 4,000,000.00 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients - MatchMove.)

What do you need for this opportunity?
Must-have skills: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for a Technical Lead - Data Platform.

As Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation (see the sketch at the end of this listing).
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs and Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement model: Direct placement with the client. This is a remote role. Shift timings: 10 AM to 7 PM IST.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
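Since the role centers on Iceberg table design, here is a minimal PySpark sketch of the kind of partitioning strategy the responsibilities above describe. It is illustrative only, not MatchMove's actual pipeline: the catalog name (lake), table name (payments.transactions), S3 bucket, and columns are all hypothetical, and it assumes a Spark session packaged with the Iceberg Spark runtime and AWS Glue catalog integration, which the posting does not specify.

```python
# Minimal sketch (hypothetical names throughout): create a partitioned
# Iceberg table on S3, cataloged in AWS Glue, and append a micro-batch.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("iceberg-partitioning-sketch")
    # Assumes the Iceberg Spark runtime and AWS bundle are on the classpath.
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.lake.warehouse", "s3://example-bucket/warehouse")
    .getOrCreate()
)

# Day-level partitioning on event time plus a low-cardinality country column
# keeps partitions prunable for typical Athena filters while bounding file counts.
spark.sql("""
    CREATE TABLE IF NOT EXISTS lake.payments.transactions (
        txn_id     STRING,
        country    STRING,
        amount     DECIMAL(18, 2),
        event_time TIMESTAMP
    )
    USING iceberg
    PARTITIONED BY (days(event_time), country)
    TBLPROPERTIES ('write.parquet.compression-codec' = 'zstd')
""")

# Append one hypothetical CDC record; each append creates an Iceberg snapshot,
# which is what enables the time-travel queries the posting mentions.
batch = (
    spark.createDataFrame(
        [("t-001", "SG", "125.50", "2024-05-01 10:15:00")],
        ["txn_id", "country", "amount", "event_time"],
    )
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .withColumn("event_time", F.to_timestamp("event_time"))
)
batch.writeTo("lake.payments.transactions").append()
```

Snapshots can then be queried by ID or timestamp (exact SQL syntax varies by Spark/Iceberg version), which is what makes the "time-travel queries for downstream services" bullet practical.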
Posted 1 day ago
Technical Lead - Data Platform · 7.0 years · ₹40 Lacs/year · Bhubaneswar, Odisha, India · Remote (full description under the Kolkata listing above)
Posted 1 day ago
Technical Lead - Backend (Go) · 7.0 years · ₹40 Lacs/year · Cuttack, Odisha, India · Remote
Experience: 7.00+ years
Salary: INR 4,000,000.00 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients - MatchMove.)

What do you need for this opportunity?
Must-have skills: AWS Q, CodeWhisperer, Gen AI, CI/CD, containerization, Go, microservices, RESTful API, MySQL, PHP, PostgreSQL

MatchMove is looking for a Technical Lead (Backend).

As a Technical Lead (Backend), you will play a pivotal role in shaping the engineering foundation for a robust, real-time, cross-border payment platform. You will write clean, secure, and scalable Go services powering billions in financial flows, while championing engineering excellence and thoughtful platform design.

You will contribute to:
- Developing and scaling distributed payment transaction systems for cross-border and domestic remittance use cases.
- Designing resilient microservices in Go for high-volume, low-latency transaction flows with regional compliance and localization.
- Owning service-level metrics such as SLA adherence, latency (p95/p99), throughput, and availability (see the sketch at the end of this listing).
- Building API-first products with strong documentation, mocks, and observability from day one.
- Enabling faster, safer development by leveraging Generative AI for test generation, documentation, and repetitive coding tasks, while maintaining engineering hygiene.
- Mentoring a high-performing, globally distributed engineering team and contributing to code reviews, design sessions, and cross-team collaboration.

Responsibilities:
- Lead design and development of backend services in Go with concurrency, memory safety, and observability in mind.
- Manage service uptime and reliability across multi-region deployments via dashboards, tracing, and alerting.
- Maintain strict SLAs for mission-critical payment operations and support incident response during SLA violations.
- Profile and optimize Go services using tools like pprof, benchstat, and the Go race detector.
- Drive code quality through test-driven development, code reviews, and API-first workflows (OpenAPI/Swagger).
- Collaborate cross-functionally with Product, QA, DevOps, Compliance, and Business to ensure production readiness.
- Maintain well-documented service boundaries and internal libraries for scalable engineering velocity.
- Encourage strategic use of Generative AI for API mocking, test data generation, schema validation, and static analysis.
- Advocate for clean architecture, technical debt remediation, and security best practices (e.g., rate limiting, mTLS, context timeouts).

Requirements:
- At least 7 years of engineering experience with deep expertise in Go (Golang).
- Expert-level understanding of concurrency, goroutines, channels, synchronization primitives, and distributed coordination patterns.
- Strong grasp of profiling and debugging Go applications, memory management, and performance tuning.
- Proven experience instrumenting production systems for SLAs/SLIs with tools like Prometheus, Grafana, or OpenTelemetry.
- Solid experience with PostgreSQL/MySQL, schema design for high-consistency systems, and the transaction lifecycle in financial services.
- Experience building, documenting, and scaling RESTful APIs in an API-first platform environment.
- Comfort with cloud-native tooling, containerization, and DevOps workflows (CI/CD, blue-green deployment, rollback strategies).
- Demonstrated understanding of observability practices: structured logging, distributed tracing, and alerting workflows.

Brownie points:
- Experience in payments, card issuance, or remittance infrastructure.
- Working knowledge of PHP (for legacy systems).
- Contributions to Go open-source projects or public technical content.
- Experience with GenAI development tools like AWS Q and CodeWhisperer in a team setting.
- Track record of delivering high-quality services in regulated environments with audit, compliance, and security mandates.

Engagement model: Direct placement with the client. This is a remote role. Shift timings: 10 AM to 7 PM IST.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
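Because owning p95/p99 latency is central to this role, here is a tiny sketch of how a percentile SLO is computed from raw request latencies. It is written in Python purely as a language-agnostic illustration (the posting's production services are in Go), and the 250 ms target and sample data are invented; real services would export histograms to Prometheus rather than computing percentiles in-process.

```python
# Minimal sketch: nearest-rank p95/p99 over raw latency samples, plus an
# SLO check of the kind surfaced on SLA-adherence dashboards. All numbers
# here (lognormal parameters, 250 ms target) are hypothetical.
import math
import random

def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile: the smallest value >= pct% of samples."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[max(rank - 1, 0)]

# Hypothetical per-request latencies in milliseconds; real request latency
# is often roughly lognormal, hence the choice of distribution.
random.seed(7)
latencies_ms = [random.lognormvariate(3.0, 0.5) for _ in range(10_000)]

p95 = percentile(latencies_ms, 95)
p99 = percentile(latencies_ms, 99)
print(f"p95={p95:.1f} ms  p99={p99:.1f} ms")

# Assumed SLO target: alert when p99 breaches 250 ms.
SLO_P99_MS = 250.0
print("p99 SLO met" if p99 <= SLO_P99_MS else "p99 SLO breached")
```

The nearest-rank definition is only one convention; monitoring stacks such as Prometheus estimate percentiles from histogram buckets instead, which is why the tail targets (p95/p99) rather than averages are what these dashboards track.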
Posted 1 day ago
Technical Lead - Data Platform · 7.0 years · ₹40 Lacs/year · Cuttack, Odisha, India · Remote (full description under the Kolkata listing above)
Posted 1 day ago
Technical Lead - Backend (Go) · 7.0 years · ₹40 Lacs/year · Bhubaneswar, Odisha, India · Remote (full description under the Cuttack listing above)
Posted 1 day ago
Technical Lead - Backend (Go) · 7.0 years · ₹40 Lacs/year · Guwahati, Assam, India · Remote (full description under the Cuttack listing above)
Posted 1 day ago
Technical Lead - Backend (Go) · 7.0 years · ₹40 Lacs/year · Ranchi, Jharkhand, India · Remote (full description under the Cuttack listing above)
Posted 1 day ago
Technical Lead - Data Platform · 7.0 years · ₹40 Lacs/year · Guwahati, Assam, India · Remote (full description under the Kolkata listing above)
Posted 1 day ago
Technical Lead - Backend (Go) · 7.0 years · ₹40 Lacs/year · Jamshedpur, Jharkhand, India · Remote (full description under the Cuttack listing above)
Posted 1 day ago
Technical Lead - Backend (Go) · 7.0 years · ₹40 Lacs/year · Raipur, Chhattisgarh, India · Remote (full description under the Cuttack listing above)
Posted 1 day ago
Technical Lead - Data Platform · 7.0 years · ₹40 Lacs/year · Jamshedpur, Jharkhand, India · Remote (full description under the Kolkata listing above)
Experience : 7.00 + years Salary : INR 4000000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: MatchMove) (*Note: This is a requirement for one of Uplers' client - MatchMove) What do you need for this opportunity? Must have skills required: Gen AI, AWS data stack, Kinesis, open table format, Pyspark, stream processing, Kafka, MySQL, Python MatchMove is Looking for: Technical Lead - Data Platform Data, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight. You will contribute to Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services. Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark. Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services. Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases. Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment. Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM). Using Generative AI tools to enhance developer productivity — including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights. Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines. Responsibilities:: Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR. Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication. Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3, and cataloged with Glue and Lake Formation. Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards). Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations. Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership. Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform. Requirements At-least 7 years of experience in data engineering. Deep hands-on experience with AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs. 
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation. Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale. Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions. Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments. Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene. Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders. (An illustrative PySpark/Iceberg sketch follows this listing.)

Brownie Points: Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements. Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection. Familiarity with data contracts, data mesh patterns, and data-as-a-product principles. Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases. Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3. Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with the client. This is a remote role. Shift timings: 10 AM to 7 PM IST.

How to apply for this opportunity? Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload your updated resume. Step 3: Increase your chances of being shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
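Purely as an illustration of the stack this listing centers on (PySpark, open table formats, the Glue catalog), below is a minimal sketch of an idempotent upsert into an Apache Iceberg table on S3. It is not MatchMove's code: the session configuration, bucket, database, and table names are assumptions, and it presumes the Iceberg Spark runtime with Glue catalog support is on the classpath.

```python
# Minimal sketch (assumptions throughout): upsert de-duplicated CDC output
# into an Iceberg table registered in the AWS Glue catalog.
from pyspark.sql import SparkSession, Window, functions as F

spark = (
    SparkSession.builder.appName("txn-iceberg-upsert-sketch")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.glue", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue.warehouse",
            "s3://example-lake/warehouse")  # hypothetical bucket
    .getOrCreate()
)

# Read raw change records (e.g. DMS output) and keep only the latest change
# per primary key, using a window rather than orderBy + dropDuplicates.
raw = spark.read.parquet("s3://example-lake/raw/transactions/")  # hypothetical path
w = Window.partitionBy("txn_id").orderBy(F.col("updated_at").desc())
latest = raw.withColumn("rn", F.row_number().over(w)).filter("rn = 1").drop("rn")

# MERGE keeps replayed batches idempotent.
latest.createOrReplaceTempView("staged_txns")
spark.sql("""
    MERGE INTO glue.finance.transactions t
    USING staged_txns s
    ON t.txn_id = s.txn_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```

Every committed MERGE produces an Iceberg snapshot, which is what enables the time-travel queries the listing mentions.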
Posted 1 day ago
7.0 years
40 Lacs
Raipur, Chhattisgarh, India
Remote
Technical Lead - Data Platform (MatchMove, via Uplers). Same role and description as the full Technical Lead - Data Platform listing above.
Posted 1 day ago
7.0 years
40 Lacs
Ranchi, Jharkhand, India
Remote
Technical Lead - Data Platform (MatchMove, via Uplers). Same role and description as the full Technical Lead - Data Platform listing above.
Posted 1 day ago
7.0 years
40 Lacs
Amritsar, Punjab, India
Remote
Experience: 7.00+ years. Salary: INR 4000000.00 / year (based on experience). Expected Notice Period: 15 Days. Shift: (GMT+05:30) Asia/Kolkata (IST). Opportunity Type: Remote. Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove). (*Note: This is a requirement for one of Uplers' clients - MatchMove.)

What do you need for this opportunity? Must-have skills: AWS Q, CodeWhisperer, Gen AI, CI/CD, containerization, Go, microservices, RESTful APIs, MySQL, PHP, PostgreSQL.

MatchMove is looking for: Technical Lead (Backend). As a Technical Lead (Backend), you will play a pivotal role in shaping the engineering foundation for a robust, real-time, cross-border payment platform. You'll be writing clean, secure, and scalable Go services powering billions in financial flows, while championing engineering excellence and thoughtful platform design.

You will contribute to: Developing and scaling distributed payment transaction systems for cross-border and domestic remittance use cases. Designing resilient microservices in Go for high-volume, low-latency transaction flows with regional compliance and localization. Owning service-level metrics such as SLA adherence, latency (p95/p99), throughput, and availability. Building API-first products with strong documentation, mocks, and observability from day one. Enabling faster, safer development by leveraging Generative AI for test generation, documentation, and repetitive coding tasks, while maintaining engineering hygiene. Mentoring a high-performing, globally distributed engineering team and contributing to code reviews, design sessions, and cross-team collaboration.

Responsibilities: Lead design and development of backend services in Go with concurrency, memory safety, and observability in mind. Manage service uptime and reliability across multi-region deployments via dashboards, tracing, and alerting. Maintain strict SLAs for mission-critical payment operations and support incident response during SLA violations. Profile and optimize Go services using tools like pprof, benchstat, and the Go race detector. Drive code quality through test-driven development, code reviews, and API-first workflows (OpenAPI/Swagger). Collaborate cross-functionally with Product, QA, DevOps, Compliance, and Business to ensure production-readiness. Maintain well-documented service boundaries and internal libraries for scalable engineering velocity. Encourage strategic use of Generative AI for API mocking, test data generation, schema validation, and static analysis. Advocate for clean architecture, technical debt remediation, and security best practices (e.g., rate limiting, mTLS, context timeouts).

Requirements: At least 7 years of engineering experience with deep expertise in Go (Golang). Expert-level understanding of concurrency, goroutines, channels, synchronization primitives, and distributed coordination patterns. Strong grasp of profiling and debugging Go applications, memory management, and performance tuning. Proven experience in instrumenting production systems for SLAs/SLIs with tools like Prometheus, Grafana, or OpenTelemetry. Solid experience with PostgreSQL/MySQL, schema design for high-consistency systems, and the transaction lifecycle in financial services. Experience building, documenting, and scaling RESTful APIs in an API-first platform environment. Comfort with cloud-native tooling, containerization, and DevOps workflows (CI/CD, blue-green deployment, rollback strategies).
Demonstrated understanding of observability practices: structured logging, distributed tracing, and alerting workflows. (A short latency-percentile sketch follows this listing.)

Brownie Points: Experience in payments, card issuance, or remittance infrastructure. Working knowledge of PHP (for legacy systems). Contributions to Go open-source projects or public technical content. Experience with GenAI development tools like AWS Q or CodeWhisperer in a team setting. Track record of delivering high-quality services in regulated environments with audit, compliance, and security mandates.

Engagement Model: Direct placement with the client. This is a remote role. Shift timings: 10 AM to 7 PM IST.

How to apply for this opportunity? Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload your updated resume. Step 3: Increase your chances of being shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
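The listing above anchors on p95/p99 latency as service-level metrics. Purely as a language-neutral illustration of what those figures mean, here is a small Python sketch computing nearest-rank percentiles over made-up latency samples; the production services themselves would be written and instrumented in Go.

```python
# Minimal sketch (illustration only): nearest-rank percentiles over made-up
# latency samples, i.e. the p95/p99 figures tracked as SLOs.
import math

def percentile(samples: list[float], pct: float) -> float:
    """Nearest-rank percentile: smallest sample >= pct percent of the data."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))  # 1-based rank
    return ordered[rank - 1]

latencies_ms = [12.1, 15.3, 11.8, 250.0, 14.9, 13.2, 16.7, 980.4, 12.5, 13.8]
for pct in (50, 95, 99):
    print(f"p{pct}: {percentile(latencies_ms, pct):.1f} ms")
```

With only ten samples, p95 and p99 both collapse onto the worst request, which is exactly why tail percentiles, rather than averages, are the right SLO signal for payment flows.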
Posted 1 day ago
7.0 years
40 Lacs
Amritsar, Punjab, India
Remote
Technical Lead - Data Platform (MatchMove, via Uplers). Same role and description as the full Technical Lead - Data Platform listing above.
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Key Responsibilities: Design, implement, and manage CI/CD pipelines for rapid and reliable product delivery. Automate infrastructure provisioning using tools like Terraform, CloudFormation, or Ansible. Monitor and maintain system performance, reliability, and scalability. Manage cloud infrastructure (AWS, Azure, or GCP) with a focus on cost, performance, and security. Implement and maintain logging, monitoring, and alerting solutions (e.g., Prometheus, Grafana, ELK, Datadog). Ensure infrastructure security best practices, including secrets management, access controls, and compliance. Collaborate with development teams to ensure DevOps best practices are followed across the lifecycle. Troubleshoot production issues and lead root cause analysis. Support containerization and orchestration using Docker and Kubernetes.

Requirements: Bachelor's degree in Computer Science, Engineering, or a related field. 5+ years of experience in a DevOps, SRE, or similar role in a product-focused environment. Proficient with CI/CD tools such as Jenkins, GitLab CI, CircleCI, etc. Strong experience with AWS, Azure, or Google Cloud. Hands-on experience with infrastructure-as-code (Terraform, Ansible, etc.). Solid understanding of containerization (Docker) and orchestration (Kubernetes). Experience with scripting languages (Bash, Python, etc.). Knowledge of monitoring/logging tools like ELK, Prometheus, Grafana, or Datadog. Strong communication and collaboration skills.
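To make the CI/CD responsibilities above concrete, here is a minimal sketch of a post-deployment health gate such a pipeline might run before promoting a release. The endpoint URL, retry budget, and exit-code convention are assumptions for illustration, not anything from the posting.

```python
# Minimal sketch: a post-deployment health gate for a CI/CD pipeline stage.
# The endpoint and thresholds are hypothetical.
import sys
import time

import requests  # third-party; pip install requests

HEALTH_URL = "https://service.example.internal/healthz"  # hypothetical endpoint

def healthy(retries: int = 5, delay_s: float = 3.0) -> bool:
    """Poll the health endpoint; pass on the first 200 within the retry budget."""
    for _ in range(retries):
        try:
            if requests.get(HEALTH_URL, timeout=2).status_code == 200:
                return True
        except requests.RequestException:
            pass  # transient network error; retry after a short delay
        time.sleep(delay_s)
    return False

if __name__ == "__main__":
    sys.exit(0 if healthy() else 1)  # a non-zero exit fails the pipeline stage
```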
Posted 1 day ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
This role is for one of Weekday's clients. Salary range: Rs 12,00,000 - Rs 24,00,000 (i.e., INR 12-24 LPA). Min Experience: 6 years. Location: Hyderabad. Job Type: Full-time.

We are looking for a seasoned Azure DevOps Engineer to lead the design, implementation, and management of DevOps practices within the Microsoft Azure ecosystem. The ideal candidate will bring deep expertise in automation, CI/CD pipelines, infrastructure as code (IaC), cloud-native tools, and security best practices. This position will collaborate closely with cross-functional teams to drive efficient, secure, and scalable DevOps workflows.

Requirements

Key Responsibilities:

DevOps & CI/CD Implementation: Build and maintain scalable CI/CD pipelines using Azure DevOps, GitHub Actions, or Jenkins. Automate software build, testing, and deployment processes to improve release cycles. Integrate automated testing, security scanning, and code quality checks into the pipeline.

Infrastructure as Code (IaC) & Cloud Automation: Develop and maintain IaC templates using Terraform, Bicep, or ARM templates. Automate infrastructure provisioning, scaling, and monitoring across Azure environments. Ensure cloud cost optimization and resource efficiency.

Monitoring, Logging & Security: Configure monitoring tools like Azure Monitor, App Insights, and Log Analytics. Apply Azure security best practices in CI/CD workflows and cloud architecture. Implement RBAC and Key Vault usage, and ensure policy and compliance adherence.

Collaboration & Continuous Improvement: Work with development, QA, and IT teams to enhance DevOps processes and workflows. Identify and resolve bottlenecks in deployment and infrastructure automation. Stay informed about industry trends and the latest features in Azure DevOps and IaC tooling.

Required Skills & Experience: 5-7 years of hands-on experience in Azure DevOps and cloud automation. Strong knowledge of: Azure DevOps Services (Pipelines, Repos, Boards, Artifacts, Test Plans); CI/CD tools (YAML pipelines, GitHub Actions, Jenkins); version control with Git (Azure Repos, GitHub, Bitbucket); IaC (Terraform, Bicep, ARM templates); containerization and orchestration (Docker, Kubernetes/AKS); monitoring (Azure Monitor, App Insights, Prometheus, Grafana); security (Azure Security Center, RBAC, Key Vault, compliance policy management). Familiarity with configuration management tools like Ansible, Puppet, or Chef (optional). Strong analytical and troubleshooting skills. Excellent communication skills and the ability to work in Agile/Scrum environments.

Preferred Certifications: Microsoft Certified: Azure DevOps Engineer Expert (AZ-400). Microsoft Certified: Azure Administrator Associate (AZ-104). Certified Kubernetes Administrator (CKA) - optional.

Skills: Azure | DevOps | CI/CD | GitHub Actions | Terraform | Infrastructure as Code | Kubernetes | Docker | Monitoring | Cloud Security
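As a small companion to the "code quality checks into the pipeline" responsibility above, here is a Python sketch that sanity-checks an Azure DevOps pipeline definition before merge. The required-keys rule set and file name are deliberately simplified assumptions; real pipelines may use stages or jobs rather than a top-level steps list.

```python
# Minimal sketch: pre-merge sanity check for an Azure DevOps pipeline YAML.
# Rule set and file name are hypothetical simplifications.
import sys

import yaml  # PyYAML; pip install pyyaml

REQUIRED_TOP_LEVEL = {"trigger", "pool", "steps"}  # assumed convention

def lint_pipeline(path: str = "azure-pipelines.yml") -> list[str]:
    with open(path) as fh:
        doc = yaml.safe_load(fh)
    if not isinstance(doc, dict):
        return ["pipeline file is not a YAML mapping"]
    return [f"missing top-level key: {key}"
            for key in sorted(REQUIRED_TOP_LEVEL - doc.keys())]

if __name__ == "__main__":
    issues = lint_pipeline()
    for issue in issues:
        print(issue)
    sys.exit(1 if issues else 0)  # non-zero exit blocks the merge
```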
Posted 1 day ago
0 years
0 Lacs
India
On-site
OpenShift and Kubernetes Cluster Management: Installation, configuration, and maintenance of OpenShift clusters, including the control plane, worker nodes, and networking.

Containerized Application Deployment: Deploying and managing containerized applications within OpenShift, including image building, registry management, and application lifecycle management.

CI/CD Pipeline Implementation: Setting up and managing continuous integration and continuous delivery (CI/CD) pipelines for automating application deployments.

Infrastructure-as-Code (IaC): Using tools like Ansible, Terraform, or similar to automate infrastructure provisioning and management.

Troubleshooting and Performance Optimization: Diagnosing and resolving issues related to OpenShift performance, stability, and security.

Security: Implementing and enforcing security best practices for OpenShift clusters and containerized applications.

Automation: Developing and maintaining automation scripts to streamline OpenShift operations.

Monitoring and Logging: Setting up and maintaining monitoring and logging systems to track OpenShift cluster and application health.

Skills and Experience: OpenShift and Kubernetes: extensive hands-on experience with OpenShift, including installation, configuration, troubleshooting, and optimization. Containerization: understanding of containerization technologies (e.g., Docker), container registries, and container orchestration.
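For the monitoring-and-logging duty above, here is a minimal sketch of a health check against the Kubernetes API that OpenShift exposes. It assumes the official kubernetes Python client and a kubeconfig with read access; the "payments" namespace is hypothetical.

```python
# Minimal sketch: report Deployment readiness via the Kubernetes API.
# Assumes `pip install kubernetes` and a kubeconfig with read access.
from kubernetes import client, config

config.load_kube_config()  # inside a pod, use config.load_incluster_config()
apps = client.AppsV1Api()

for dep in apps.list_namespaced_deployment(namespace="payments").items:
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    status = "OK" if ready >= desired else "DEGRADED"
    print(f"{dep.metadata.name}: {ready}/{desired} replicas ready [{status}]")
```

The same loop works unchanged against an OpenShift cluster, since OpenShift is API-compatible with Kubernetes for core workload resources.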
Posted 1 day ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are looking for a Service Writer to join our team and act as our liaison with customers to address their vehicle repair needs. A Service Writer's responsibilities include documenting the repairs needed and scheduling the appropriate technicians for each job in our computer system. Ultimately, you will ensure that the needs of our customers are met, coordinate transactions, and estimate both time and costs so that everything runs smoothly for our customers.

Responsibilities: Develop cost estimates, logging the parts and time needed for repairs. Schedule the most appropriate Service Technician for each job. Convey all necessary information regarding costs, parts, work, and technicians to customers. Call customers to arrange appointments. Meet with customers to discuss their requirements and relay those requirements to the Service Technicians. Contact customers in the case of additional work to relay the details and extra costs. Enter the details of repair jobs on the company's network and prepare repair instructions.

This job is provided by Shine.com
Posted 1 day ago
0 years
0 Lacs
Greater Kolkata Area
On-site
Service Writer. Same role and description as the full Service Writer listing above. This job is provided by Shine.com
Posted 1 day ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Requirements:

Technical Skills: Proficiency in OpenShift, Argo CD, Helm charts, and shared libraries. Experience with CI/CD tools like Jenkins, GitLab CI/CD, or similar platforms. Experience with Platform as a Service using tools like AWS Elastic Beanstalk, Google Cloud App Engine, Azure App Service, or Heroku. Familiarity with monitoring and logging tools such as Prometheus, Grafana, ELK Stack, or similar platforms. Strong understanding of Kubernetes concepts and best practices.

Soft Skills: Excellent problem-solving and troubleshooting skills. Strong communication and collaboration abilities. Ability to work independently and as part of a team. A continuous-learning mindset and willingness to stay updated with the latest DevOps trends and technologies.
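To ground the monitoring skills named above, here is a minimal sketch querying Prometheus's HTTP API for a 5xx error-rate signal. The Prometheus URL and the metric and job labels are assumptions for illustration.

```python
# Minimal sketch: pull a 5xx error-rate from Prometheus's HTTP query API.
# Server URL, metric name, and labels are hypothetical.
import requests  # third-party; pip install requests

PROM_URL = "http://prometheus.example.internal:9090"  # hypothetical
QUERY = 'sum(rate(http_requests_total{status=~"5..", job="api"}[5m]))'

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY}, timeout=5)
resp.raise_for_status()
for result in resp.json()["data"]["result"]:
    _ts, value = result["value"]  # instant vector: (timestamp, string value)
    print(f"5xx rate: {value} req/s")
```

A check like this is typically wired into alerting rules or a Grafana panel rather than run by hand; the raw API call is shown only to make the data flow explicit.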
Posted 1 day ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
RPA Developer (Automation Anywhere)

Experience: 6 to 10 years. Location: Pan India.

Design, code, test, and deploy automation workflows using Automation Anywhere (AA), verifying and leveraging the appropriate AA components. Provide solution designs to customers throughout deployment, during POCs and project implementation phases. Make changes to robot code during implementation as needed. Take responsibility for the overall testing cycles; deliver technical artifacts and demos and provide the necessary support for new and existing customers. Deploy RPA components including bots, robots, development tools, code repositories, and logging tools. Design and develop on the latest RPA (AA) versions, with policies and rules based on business requirements. Perform code reviews and assist developers in overcoming technical roadblocks. Support full life-cycle implementation of the RPA program, including RPA development, QA, integration, and production deployment. Develop knowledge, understanding, and experience managing application development, applying best-practice guidelines throughout the software development life cycle. Manage day-to-day system development, implementation, and configuration activities of RPA.

As part of career progression, you should eventually be able to do the following: Work with business owners and architects to identify automation opportunities. Be a highly driven, autonomous, resilient team player with a strong work ethic. Be strong in requirement gathering and analysis (a structured and methodical approach combined with an inquiring mind). Develop RPA prototypes and proofs of concept. Prepare PDDs/SDDs (Process/Solution Design Documents) for identified business processes. Take responsibility for the technical design, build, and deployment of end-to-end automation of business processes. Build RPA bots on the platform per the applicable standards, aiming for robust bots that handle errors, exceptions, and success-path scenarios. Ensure the estimation tracker is created and adhere to the stated standards. Publish day-to-day progress reports to the manager. Conduct peer reviews and code reviews, and mentor ("buddy") new developers.

Requirements: A strong automation focus with sound technical knowledge of Automation Anywhere. Degree in Computer Science or relevant experience. Proven experience as a developer in Automation Anywhere (1 to 3 years). Advanced and Master Developer certification in Automation Anywhere is preferred. At least one year of Automation Anywhere experience is mandatory. Very good knowledge of Automation Anywhere products, their architecture, and the ecosystem (Discovery Bot, Control Room, Runner, Bot Store, Bot Creator, IQ Bot, etc.). Good working experience with AA automations such as Web, Email, PDF, API, MS Office, and IQ Bot (mandatory). Experience with analysis, development, deployment, and system testing, including UAT and bug fixes. Strong problem-solving and analytical skills.
Posted 1 day ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Service Writer. Same role and description as the full Service Writer listing above. This job is provided by Shine.com
Posted 1 day ago
0 years
0 Lacs
Tikamgarh, Madhya Pradesh, India
On-site
Service Writer. Same role and description as the full Service Writer listing above. This job is provided by Shine.com
Posted 1 day ago
0 years
0 Lacs
Delhi, India
On-site
Service Writer. Same role and description as the full Service Writer listing above. This job is provided by Shine.com
Posted 1 day ago
The logging job market in India is vibrant and offers a wide range of opportunities for job seekers interested in this field. Logging professionals are in demand across various industries such as IT, construction, forestry, and environmental management. If you are considering a career in logging, this article will provide you with valuable insights into the job market, salary range, career progression, related skills, and common interview questions.
The major hiring hubs are cities known for their thriving industries, where logging professionals are actively recruited.
The average salary range for logging professionals in India varies based on experience and expertise. Entry-level positions typically start at INR 3-5 lakhs per annum, while experienced professionals can earn upwards of INR 10-15 lakhs per annum.
A typical career path in logging may include roles such as Logging Engineer, Logging Supervisor, Logging Manager, and Logging Director. Professionals may progress from entry-level positions to more senior roles such as Lead Logging Engineer or Logging Consultant.
In addition to logging expertise, employers often look for professionals with skills such as data analysis, problem-solving, project management, and communication skills. Knowledge of industry-specific software and tools may also be beneficial.
As you embark on your journey to explore logging jobs in India, remember to prepare thoroughly for interviews by honing your technical skills and understanding industry best practices. With the right preparation and confidence, you can land a rewarding career in logging that aligns with your professional goals. Good luck!