7.0 years
40 Lacs
Noida, Uttar Pradesh, India
Remote
Experience: 7.00+ years
Salary: INR 4,000,000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must-have skills required: AWS Q, CodeWhisperer, Gen AI, CI/CD, containerization, Go, microservices, RESTful API, MySQL, PHP, PostgreSQL

MatchMove is looking for:
As a Technical Lead (Backend), you will play a pivotal role in shaping the engineering foundation for a robust, real-time, cross-border payment platform. You will write clean, secure, and scalable Go services powering billions in financial flows, while championing engineering excellence and thoughtful platform design.

You will contribute to:
Developing and scaling distributed payment transaction systems for cross-border and domestic remittance use cases.
Designing resilient microservices in Go for high-volume, low-latency transaction flows with regional compliance and localization.
Owning service-level metrics such as SLA adherence, latency (p95/p99), throughput, and availability.
Building API-first products with strong documentation, mocks, and observability from day one.
Enabling faster, safer development by leveraging Generative AI for test generation, documentation, and repetitive coding tasks, while maintaining engineering hygiene.
Mentoring a high-performing, globally distributed engineering team and contributing to code reviews, design sessions, and cross-team collaboration.

Responsibilities:
Lead design and development of backend services in Go with concurrency, memory safety, and observability in mind.
Manage service uptime and reliability across multi-region deployments via dashboards, tracing, and alerting.
Maintain strict SLAs for mission-critical payment operations and support incident response during SLA violations.
Profile and optimize Go services using tools like pprof, benchstat, and the Go race detector.
Drive code quality through test-driven development, code reviews, and API-first workflows (OpenAPI / Swagger).
Collaborate cross-functionally with Product, QA, DevOps, Compliance, and Business to ensure production readiness.
Maintain well-documented service boundaries and internal libraries for scalable engineering velocity.
Encourage strategic use of Generative AI for API mocking, test data generation, schema validation, and static analysis.
Advocate for clean architecture, technical debt remediation, and security best practices (e.g., rate limiting, mTLS, context timeouts).

Requirements:
At least 7 years of engineering experience with deep expertise in Go (Golang).
Expert-level understanding of concurrency, goroutines, channels, synchronization primitives, and distributed coordination patterns.
Strong grasp of profiling and debugging Go applications, memory management, and performance tuning.
Proven experience instrumenting production systems for SLAs/SLIs with tools like Prometheus, Grafana, or OpenTelemetry.
Solid experience with PostgreSQL / MySQL, schema design for high-consistency systems, and the transaction lifecycle in financial services.
Experience building, documenting, and scaling RESTful APIs in an API-first platform environment.
Comfort with cloud-native tooling, containerization, and DevOps workflows (CI/CD, blue-green deployment, rollback strategies).
Demonstrated understanding of observability practices: structured logging, distributed tracing, and alerting workflows.

Brownie Points:
Experience in payments, card issuance, or remittance infrastructure.
Working knowledge of PHP (for legacy systems).
Contributions to Go open-source projects or public technical content.
Experience with GenAI development tools like AWS Q and CodeWhisperer in a team setting.
Track record of delivering high-quality services in regulated environments with audit, compliance, and security mandates.

Engagement Model:
Direct placement with client.
This is a remote role.
Shift timings: 10 AM to 7 PM.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 day ago
7.0 years
40 Lacs
Noida, Uttar Pradesh, India
Remote
Experience: 7.00+ years
Salary: INR 4,000,000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must-have skills required: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for:
As the Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation (see the sketch after this listing).
Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
At least 7 years of experience in data engineering.
Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model:
Direct placement with client.
This is a remote role.
Shift timings: 10 AM to 7 PM.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
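For illustration only, a minimal PySpark sketch of the kind of Iceberg-on-S3 write this listing describes (partitioned table, zstd compression, registered in the AWS Glue catalog). It assumes the Iceberg Spark runtime and iceberg-aws bundle jars are on the classpath; the bucket, database, table, and column names are hypothetical, not the client's actual schema.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# Sketch: write a DMS-replicated dataset as a partitioned Iceberg table
# registered in the AWS Glue catalog. All names below are hypothetical.
spark = (
    SparkSession.builder.appName("iceberg-write-sketch")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.glue", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue.catalog-impl",
            "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue.io-impl", "org.apache.iceberg.aws.s3.S3FileIO")
    .config("spark.sql.catalog.glue.warehouse", "s3://example-lake/warehouse")
    .getOrCreate()
)

# Hypothetical raw zone populated by a DMS full load plus CDC.
transactions = spark.read.parquet("s3://example-lake/raw/transactions/")

# Partition by transaction date and compress with zstd to keep Athena scans cheap.
(
    transactions.writeTo("glue.analytics.transactions")
    .partitionedBy(col("txn_date"))
    .tableProperty("write.parquet.compression-codec", "zstd")
    .createOrReplace()
)
```

Downstream, a table laid out this way can be queried from Athena or read with Iceberg time-travel for the reconciliation and audit use cases mentioned above.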
Posted 1 day ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
About Company
SME is a platform that bridges subject-matter experts with AI projects, enabling them to contribute their knowledge to improve AI models. It offers flexible opportunities to work on tasks like data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role
We're hiring a Code Reviewer with deep C++ expertise to review evaluations completed by data annotators assessing AI-generated C++ code responses. Your role is to ensure that annotators follow strict quality guidelines related to instruction-following, factual correctness, and code functionality.

Responsibilities
Review and audit annotator evaluations of AI-generated C++ code.
Assess if the C++ code follows the prompt instructions, is functionally correct, and secure.
Validate code snippets using a proof-of-work methodology (see the sketch after this listing).
Identify inaccuracies in annotator ratings or explanations.
Provide constructive feedback to maintain high annotation standards.
Work within Project Atlas guidelines for evaluation integrity and consistency.

Required Qualifications
5–7+ years of experience in C++ development, QA, or code review.
Strong knowledge of C++ syntax, debugging, edge cases, and testing.
Comfortable using code execution environments and testing tools.
Excellent written communication and documentation skills.
Experience working with structured QA or annotation workflows.
English proficiency at B2, C1, C2, or Native level.

Preferred Qualifications
Experience in AI training, LLM evaluation, or model alignment.
Familiarity with annotation platforms.
Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation: $22 hourly

Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your C++ expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.
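As a rough illustration of proof-of-work validation, here is a minimal Python sketch that compiles and runs a submitted C++ snippet and checks its output. It assumes g++ is installed locally; the function name, flags, and example are hypothetical and are not Project Atlas's actual tooling.

```python
import subprocess
import tempfile
from pathlib import Path


def run_cpp_snippet(source: str, stdin: str = "", timeout: int = 10) -> str:
    """Compile and run a C++ snippet, returning its stdout.

    Illustrative only: assumes g++ is available; real review tooling may differ.
    """
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp) / "snippet.cpp"
        binary = Path(tmp) / "snippet"
        src.write_text(source)
        # Compile with warnings enabled so the reviewer also sees diagnostics.
        subprocess.run(
            ["g++", "-std=c++17", "-Wall", str(src), "-o", str(binary)],
            check=True, capture_output=True, text=True, timeout=timeout,
        )
        # Run the binary and capture its output for comparison.
        result = subprocess.run(
            [str(binary)], input=stdin,
            check=True, capture_output=True, text=True, timeout=timeout,
        )
        return result.stdout


# Example: verify a trivial snippet actually produces the output it claims.
code = "#include <iostream>\nint main() { std::cout << 2 + 2; }\n"
assert run_cpp_snippet(code).strip() == "4"
```

In practice a reviewer would compare the observed output against the behaviour the annotator claimed for the AI-generated response, rather than a hard-coded expectation as in this toy check.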
Posted 1 day ago
0.0 - 5.0 years
0 Lacs
Pimpri-Chinchwad, Maharashtra
On-site
Job Title: Senior Associate – Accounting & Taxation
Location: Pimpri-Chinchwad, Pune (In-office)
Experience Required: 3–5 Years (CA Firm Experience Preferred)
Job Type: Full-time

Roles and Responsibilities
We are seeking a skilled and detail-oriented professional to join our team, with a strong foundation in accounting, taxation, and compliance. The ideal candidate should have a thorough understanding of Indian accounting standards, audit processes, and taxation laws.

Key Responsibilities:
Accounting & Bookkeeping: Perform monthly accounting tasks and review books of accounts for accuracy and completeness.
Auditing: Independently handle statutory audits, tax audits, and GST audits, including the preparation of annual returns (GSTR 9 & 9C).
Financial Reporting: Prepare and review financial statements such as the Balance Sheet, Profit & Loss Account, and Cash Flow Statement.
Tax Computations: Accurately draft and review income tax computations.
Regulatory Compliance: Prepare and respond to notices from various government departments in a timely and professional manner.
Tax Filing: Manage GST, TDS, Professional Tax, and Income Tax return filings efficiently.
Payroll: Oversee and finalize monthly payroll processing for clients.
Reconciliations: Conduct reconciliations related to GST and Income Tax (TDS).
Accounting Standards: Apply a solid understanding of Indian Accounting Standards (Ind AS) and corporate taxation in daily tasks.

Key Requirements
3–5 years of hands-on experience in accounting and taxation, preferably in a Chartered Accountant firm.
Strong command of Indian tax laws, GST, TDS, and Income Tax regulations.
Proficiency in advanced Excel and accounting software.
Excellent communication (verbal and written) and documentation skills.
Strong organizational skills and attention to detail.
Ability to handle multiple projects, prioritize tasks, and meet deadlines.
Self-motivated with a proactive learning attitude in a fast-paced environment.

What We Offer
Time Off: Last Saturdays off each month.
Work Environment: Informal dress code and a friendly, growth-driven atmosphere.
Recognition: Certificate of employment and letter of recommendation upon successful completion.
Team Culture: Collaborative work environment that values creativity, innovation, and mutual respect.
Leadership: Supportive and approachable management with open-door communication.
Celebrations: Team celebrations for milestones, birthdays, and achievements.

Job Types: Full-time, Fresher, Internship
Contract length: 12 months
Pay: ₹30,000.00 - ₹35,000.00 per month
Benefits: Paid sick time
Schedule: Day shift, Monday to Friday
Application Question(s): How much experience do you have in a firm?
Location: Pimpri-Chinchwad, Maharashtra (Required)
Work Location: In person
Posted 1 day ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
About Company
SME is a platform that bridges subject-matter experts with AI projects, enabling them to contribute their knowledge to improve AI models. It offers flexible opportunities to work on tasks like data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role
We're hiring a Code Reviewer with deep C# expertise to review evaluations completed by data annotators assessing AI-generated C# code responses. Your role is to ensure that annotators follow strict quality guidelines related to instruction-following, factual correctness, and code functionality.

Responsibilities
Review and audit annotator evaluations of AI-generated C# code.
Assess if the C# code follows the prompt instructions, is functionally correct, and secure.
Validate code snippets using a proof-of-work methodology.
Identify inaccuracies in annotator ratings or explanations.
Provide constructive feedback to maintain high annotation standards.
Work within Project Atlas guidelines for evaluation integrity and consistency.

Required Qualifications
5–7+ years of experience in C# development, QA, or code review.
Strong knowledge of C# syntax, debugging, edge cases, and testing.
Comfortable using code execution environments and testing tools.
Excellent written communication and documentation skills.
Experience working with structured QA or annotation workflows.
English proficiency at B2, C1, C2, or Native level.

Preferred Qualifications
Experience in AI training, LLM evaluation, or model alignment.
Familiarity with annotation platforms.
Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation: $22 hourly

Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your C# expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.
Posted 1 day ago
7.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
About Company
SME is a platform that bridges subject-matter experts with AI projects, enabling them to contribute their knowledge to improve AI models. It offers flexible opportunities to work on tasks like data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role
We're hiring a Code Reviewer with deep Java expertise to review evaluations completed by data annotators assessing AI-generated Java code responses. Your role is to ensure that annotators follow strict quality guidelines related to instruction-following, factual correctness, and code functionality.

Responsibilities
Review and audit annotator evaluations of AI-generated Java code.
Assess if the Java code follows the prompt instructions, is functionally correct, and secure.
Validate code snippets using a proof-of-work methodology.
Identify inaccuracies in annotator ratings or explanations.
Provide constructive feedback to maintain high annotation standards.
Work within Project Atlas guidelines for evaluation integrity and consistency.

Required Qualifications
5–7+ years of experience in Java development, QA, or code review.
Strong knowledge of Java syntax, debugging, edge cases, and testing.
Comfortable using code execution environments and testing tools.
Excellent written communication and documentation skills.
Experience working with structured QA or annotation workflows.
English proficiency at B2, C1, C2, or Native level.

Preferred Qualifications
Experience in AI training, LLM evaluation, or model alignment.
Familiarity with annotation platforms.
Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation: $18 hourly

Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your Java expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.
Posted 1 day ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About Us:
At TELUS Digital, we enable customer experience innovation through spirited teamwork, agile thinking, and a caring culture that puts customers first. TELUS Digital is the global arm of TELUS Corporation, one of the largest telecommunications service providers in Canada. We deliver contact center and business process outsourcing (BPO) solutions to some of the world's largest corporations in the consumer electronics, finance, telecommunications and utilities sectors. With global call center delivery capabilities, our multi-shore, multi-language programs offer safe, secure infrastructure, value-based pricing, skills-based resources and exceptional customer service - all backed by TELUS, our multi-billion dollar telecommunications parent.

Description and Requirements:
To be successful, this person must possess a strong understanding of the wide array of AppSec and InfoSec tools, protocols, and best practices applicable to application platforms, including their infrastructure. This person must have experience maintaining team documentation, leading meetings, escalating issues, and driving teams to deliver work. The ideal person will have a minimum of 5+ years of experience in software engineering, cybersecurity, and/or cyber-audit, and will clearly demonstrate the following characteristics and competencies:
Clearly defining and developing new policies, processes, training documents, and best practices.
Collaborating with technical teams to improve observability.
Reviewing risk findings, assigning them to fixed teams, and reporting remediation efforts and related challenges.
Gathering key information for exception requests, including risk details, action plans, and remediation dependencies.
Partnering with security teams to improve data quality in security tools and external reports.
Hosting meetings with members of application, security, and leadership teams to communicate updates and changes to security postures.
Validating that submitted evidence meets requirements to resolve risks and compliance issues.
Educating application teams on security subject matter.

Preferred Skills & Experience:
Strong verbal communication skills; must be comfortable speaking in front of audiences including technical teams and senior leaders, including VPs.
Strong written communication skills with the ability to produce quality literature and technical documentation.
The ability to collaborate with technical teams to define, improve, and document procedures to meet compliance requirements.
Diligence in tracking and following up on action items and inquiries across multiple efforts and teams.
Strong knowledge of security standards and practices for both on-premises and AWS environments; CCSP, CISSP, or other cloud-focused application security certifications are a big plus.
Familiarity with data center and AWS infrastructure, including data center network architectures, virtualization, containerization, and AWS products/offerings.
Ability to perform analysis and tests to validate findings and remediation claims.
Strong knowledge of ITIL operations and agile development practices. Experience working in a DevSecOps culture is a plus.
The ability to quickly navigate matrixed environments is a must.
Experience in a software engineering, delivery manager, or project manager role is strongly desired.

Equal Opportunity Employer
At TELUS Digital, we are proud to be an equal opportunity employer and are committed to creating a diverse and inclusive workplace.
All aspects of employment, including the decision to hire and promote, are based on applicants' qualifications, merits, competence and performance without regard to any characteristic related to diversity.
Posted 1 day ago
7.0 years
40 Lacs
Chennai, Tamil Nadu, India
Remote
Experience : 7.00 + years Salary : INR 4000000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: MatchMove) (*Note: This is a requirement for one of Uplers' client - MatchMove) What do you need for this opportunity? Must have skills required: Gen AI, AWS data stack, Kinesis, open table format, Pyspark, stream processing, Kafka, MySQL, Python MatchMove is Looking for: Technical Lead - Data Platform Data, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight. You will contribute to Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services. Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark. Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services. Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases. Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment. Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM). Using Generative AI tools to enhance developer productivity — including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights. Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines. Responsibilities:: Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR. Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication. Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3, and cataloged with Glue and Lake Formation. Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards). Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations. Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership. Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform. Requirements At-least 7 years of experience in data engineering. Deep hands-on experience with AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs. 
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation. Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions. Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments. Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains — with strong engineering hygiene. Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders. Brownie Points:: Experience working in a PCI DSS or any other central bank regulated environment with audit logging and data retention requirements. Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection. Familiarity with data contracts, data mesh patterns, and data as a product principles. Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases. Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3. Experience building data platforms for ML/AI teams or integrating with model feature stores. Engagement Model: : Direct placement with client This is remote role Shift timings ::10 AM to 7 PM How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you! Show more Show less
Posted 1 day ago
7.0 years
40 Lacs
Chennai, Tamil Nadu, India
Remote
Experience: 7.00+ years
Salary: INR 4,000,000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(*Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must-have skills: AWS Q, CodeWhisperer, Gen AI, CI/CD, containerization, Go, microservices, RESTful APIs, MySQL, PHP, PostgreSQL

MatchMove is looking for:
As a Technical Lead (Backend), you will play a pivotal role in shaping the engineering foundation for a robust, real-time, cross-border payment platform. You'll be writing clean, secure, and scalable Go services powering billions in financial flows, while championing engineering excellence and thoughtful platform design.

You will contribute to:
Developing and scaling distributed payment transaction systems for cross-border and domestic remittance use cases.
Designing resilient microservices in Go for high-volume, low-latency transaction flows with regional compliance and localization.
Owning service-level metrics such as SLA adherence, latency (p95/p99), throughput, and availability.
Building API-first products with strong documentation, mocks, and observability from day one.
Enabling faster, safer development by leveraging Generative AI for test generation, documentation, and repetitive coding tasks, while maintaining engineering hygiene.
Mentoring a high-performing, globally distributed engineering team and contributing to code reviews, design sessions, and cross-team collaboration.

Responsibilities:
Lead design and development of backend services in Go with concurrency, memory safety, and observability in mind.
Manage service uptime and reliability across multi-region deployments via dashboards, tracing, and alerting.
Maintain strict SLAs for mission-critical payment operations and support incident response during SLA violations.
Profile and optimize Go services using tools like pprof, benchstat, and the Go race detector.
Drive code quality through test-driven development, code reviews, and API-first workflows (OpenAPI / Swagger).
Collaborate cross-functionally with Product, QA, DevOps, Compliance, and Business to ensure production-readiness.
Maintain well-documented service boundaries and internal libraries for scalable engineering velocity.
Encourage strategic use of Generative AI for API mocking, test data generation, schema validation, and static analysis.
Advocate for clean architecture, technical debt remediation, and security best practices (e.g., rate limiting, mTLS, context timeouts).

Requirements:
At least 7 years of engineering experience with deep expertise in Go (Golang).
Expert-level understanding of concurrency, goroutines, channels, synchronization primitives, and distributed coordination patterns.
Strong grasp of profiling and debugging Go applications, memory management, and performance tuning.
Proven experience in instrumenting production systems for SLAs/SLIs with tools like Prometheus, Grafana, or OpenTelemetry.
Solid experience with PostgreSQL / MySQL, schema design for high-consistency systems, and transaction lifecycle in financial services.
Experience building, documenting, and scaling RESTful APIs in an API-first platform environment.
Comfort with cloud-native tooling, containerization, and DevOps workflows (CI/CD, blue-green deployment, rollback strategies).
Demonstrated understanding of observability practices: structured logging, distributed tracing, and alerting workflows.

Brownie Points:
Experience in payments, card issuance, or remittance infrastructure.
Working knowledge of PHP (for legacy systems).
Contributions to Go open-source projects or public technical content.
Experience with GenAI development tools like AWS Q and CodeWhisperer in a team setting.
Track record of delivering high-quality services in regulated environments with audit, compliance, and security mandates.

Engagement Model: Direct placement with client. This is a remote role. Shift timings: 10 AM to 7 PM.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this one on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 day ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
About Company
SME is a platform that bridges subject-matter experts with AI projects, enabling them to contribute their knowledge to improve AI models. It offers flexible opportunities to work on tasks like data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role
We're hiring a Code Reviewer with deep Java expertise to review evaluations completed by data annotators assessing AI-generated Java code responses. Your role is to ensure that annotators follow strict quality guidelines related to instruction-following, factual correctness, and code functionality.

Responsibilities
Review and audit annotator evaluations of AI-generated Java code.
Assess whether the Java code follows the prompt instructions, is functionally correct, and is secure.
Validate code snippets using proof-of-work methodology.
Identify inaccuracies in annotator ratings or explanations.
Provide constructive feedback to maintain high annotation standards.
Work within Project Atlas guidelines for evaluation integrity and consistency.

Required Qualifications
5–7+ years of experience in Java development, QA, or code review.
Strong knowledge of Java syntax, debugging, edge cases, and testing.
Comfortable using code execution environments and testing tools.
Excellent written communication and documentation skills.
Experience working with structured QA or annotation workflows.
English proficiency at B2, C1, C2, or native level.

Preferred Qualifications
Experience in AI training, LLM evaluation, or model alignment.
Familiarity with annotation platforms.
Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Compensation: $18 hourly

Why Join Us?
Join a high-impact team working at the intersection of AI and software development. Your Java expertise will directly influence the accuracy, safety, and clarity of AI-generated code. This role offers remote flexibility, milestone-based delivery, and competitive compensation.
Posted 1 day ago
4.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
About the Role
We are looking for an experienced and highly motivated SEO Assistant Manager to lead our organic growth initiatives across web and mobile platforms. This role requires strategic thinking, strong analytical capabilities, and deep knowledge of SEO and ASO best practices. You will be responsible for driving visibility, traffic, installs, and user engagement through effective search and app store optimization.

Key Responsibilities

Strategy & Planning
Develop and execute comprehensive short-term and long-term SEO and ASO strategies.
Lead and manage a team of SEO specialists and ensure seamless execution of projects.

SEO Optimization
Conduct keyword research and guide content and product teams toward high-impact SEO content.
Optimize on-page elements such as metadata, internal linking, and content structure.
Lead off-page SEO efforts including link-building and authority development.
Audit and fix technical SEO issues affecting crawlability and indexation.

ASO Optimization
Optimize mobile apps on the Google Play Store and Apple App Store.
Improve app visibility through metadata, screenshots, reviews, and A/B testing.
Analyze installs, retention, and crash reports via tools like Firebase and AppsFlyer.
Track and enhance app ranking and user review performance.

Performance Monitoring & Reporting
Track keyword rankings, organic traffic, and conversion rates.
Prepare regular performance reports aligned with KPIs and ROI goals.
Stay updated on algorithm changes and industry trends to adjust strategies.

Collaboration
Work closely with PPC, content, social media, and design teams.
Manage localization, content creation, and design requirements for SEO/ASO.

Tools & Tech Proficiency
Use tools such as Google Search Console, Ahrefs, SEMrush, Screaming Frog, App Annie, Sensor Tower, Mobile Action, and others to drive actionable insights.

Requirements
3–4 years of proven experience in SEO and ASO.
Solid understanding of Google algorithms, technical SEO, and app store guidelines.
Experience working with large websites (10,000+ indexed pages).
Strong analytical skills and a data-driven mindset.
Proficiency with tools like Google Analytics, Firebase, AppsFlyer, App Store Connect, and Play Console.
Basic knowledge of HTML/CSS.
Bachelor's degree in Marketing, Computer Science, or a related field.
Excellent communication and leadership skills.
Posted 1 day ago
10.0 - 12.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
This is a Data Center role, on the rolls of JCI (company payroll).

As a Project Manager, your responsibilities and expectations will include the following:

HOTO Review & Approval: You will review the project scope and tender in collaboration with the Manager, highlighting risks and challenges. You are expected to review cost estimates in line with the project scope and technical specifications, ensuring a complete understanding of the solution offered.

Preparation of Project Schedule: You will prepare the project schedule in Microsoft Project (MSP), clearly defining the critical path and milestones, and highlighting any clearances required from the customer.

System Knowledge: You must have 10-12 years of hands-on experience in Data Center projects, specifically in the installation, testing, and commissioning of CCTV, access control, BMS, and fire alarm systems. Certifications for commissioning security and fire detection systems are required.

Project Management: You will bring hands-on experience in Data Center project management, including vendor management, testing, and commissioning processes.

Resource and Subcontractor Deployment: It is essential to ensure that competent resources are deployed on-site to handle the project effectively. Deploy efficient and skilled subcontractors with adequate manpower to meet the project timeline.

Monitoring Site Progress (Planned vs. Actual): You will review the design and construction progress with the design and project team on a weekly basis, or daily depending on the volume and complexity of the project. Conduct site walks with the project engineer to monitor site progress in line with the schedule.

Quality Check and Audits: During site walks, you will check the quality of installations and ensure that audits are conducted periodically. Address any findings immediately and ensure that the project engineer does not repeat audit findings.

VO Management: Create VO (variation order) opportunities, including tender, non-tender, and time-extension cost escalation, targeting a VO of 10-15% of the project value.

Site Meetings: Participate in site meetings to raise alerts for dependencies or clearances that may impact project deliveries. Escalate issues to the next level of PMC or the customer if dependencies are not cleared.

Coordination with Cross-Functional Teams: Coordinate with internal stakeholders, including design, supply chain management, learning and development, quality, and finance, to ensure project deliveries are met. You must highlight to the next level in the organization if any support is required to prevent delays in project timelines.

Maintain Project Cash Flow (UBR/Collection/VO): Ensure timely invoicing, accounts receivable collection, and VO management. Push for VOs with the site team and maintain an account statement for each project.

Project Completion and Hand Over: Conduct pre-commissioning checks before testing and commissioning, and request any technical or resource support from the manager in advance. Begin preparing operation and maintenance manuals and as-built documentation during the pre-commissioning stage.
Posted 1 day ago
7.0 years
40 Lacs
Coimbatore, Tamil Nadu, India
Remote
Experience: 7.00+ years
Salary: INR 4,000,000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(*Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must-have skills: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for:
As Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services (a brief point-in-time read sketch follows this listing).
Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards).
Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
At least 7 years of experience in data engineering.
Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with client. This is a remote role. Shift timings: 10 AM to 7 PM.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this one on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
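The data-platform listing above (repeated here for the Coimbatore location) calls out time-travel queries over Iceberg tables. As a hedged illustration only, this sketch reads a table as of an earlier point in time using Iceberg's Spark read option; the catalog and table names reuse the placeholders from the earlier sketch, and the timestamp is invented for the example.

from pyspark.sql import SparkSession

# Assumes the session is configured with the Iceberg "lake" catalog
# as in the earlier sketch; all names here are placeholders.
spark = SparkSession.builder.appName("time-travel-sketch").getOrCreate()

# Point-in-time read: Iceberg exposes this via the "as-of-timestamp"
# read option, which takes milliseconds since the epoch.
as_of_millis = 1717200000000  # illustrative timestamp only

historical = (
    spark.read
    .format("iceberg")
    .option("as-of-timestamp", str(as_of_millis))
    .load("lake.payments.transactions")
)

# A typical reconciliation-style check: compare row counts then vs. now.
current = spark.table("lake.payments.transactions")
print("rows as of snapshot:", historical.count(), "| rows now:", current.count())

This pattern is what makes audit and reconciliation views practical on an open table format: the query runs against an immutable snapshot rather than whatever the table looks like mid-ingest.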
Posted 1 day ago
7.0 years
40 Lacs
Vellore, Tamil Nadu, India
Remote
Experience: 7.00+ years
Salary: INR 4,000,000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(*Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must-have skills: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for:
As Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards).
Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
At least 7 years of experience in data engineering.
Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with client. This is a remote role. Shift timings: 10 AM to 7 PM.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this one on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 day ago
7.0 years
40 Lacs
Madurai, Tamil Nadu, India
Remote
Experience: 7.00+ years
Salary: INR 4,000,000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(*Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must-have skills: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for:
As Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards).
Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
At least 7 years of experience in data engineering.
Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3 (a brief Athena sketch follows this listing).
Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with client. This is a remote role. Shift timings: 10 AM to 7 PM.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this one on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
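This listing (repeated here for the Madurai location) also mentions cost optimization in Athena and S3. One of the simplest levers is ensuring queries filter on partition columns so Athena scans only the relevant slices of the lake; below is a hedged boto3 sketch of submitting such a query, with the database, table, result bucket, and region all invented for the example.

import boto3

athena = boto3.client("athena", region_name="ap-southeast-1")  # region is illustrative

# Filtering on the partition column (txn_date) lets Athena prune partitions,
# which directly reduces the bytes scanned and therefore the query cost.
query = """
    SELECT txn_id, amount, status
    FROM transactions
    WHERE txn_date = DATE '2024-06-01'
"""

response = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "payments_lake"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print("query execution id:", response["QueryExecutionId"])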
Posted 1 day ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Make an impact with NTT DATA
Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion; it's a place where you can grow, belong, and thrive.

Your day at NTT DATA
The Networking Managed Services Engineer (L2) is a developing engineering role, responsible for providing a managed service to clients to ensure that their IT infrastructure and systems remain operational by proactively monitoring, identifying, investigating, and resolving technical incidents and problems and restoring service to clients. The primary objective of this role is to proactively review client requests or tickets and apply technical and process knowledge to resolve them without breaching the service level agreement (SLA); the role focuses on second-line support for incidents and requests of medium complexity. The Networking Managed Services Engineer (L2) may also contribute to or support project work as and when required.

What You'll Be Doing
Key Responsibilities:
Monitoring, technical and troubleshooting support, and administration of the firewall (FortiGate SD-WAN).
Ensure daily backup of management servers and the firewall.
Troubleshoot access-related issues caused by firewall and IPS policies.
Prepare daily/weekly/monthly/half-yearly/yearly compliance reports as per HSL requirements.
Review monitoring alerts for firewall availability and performance using the in-house NMS tool.
Configure firewall/IPS/AV security policies on the firewall; modify, delete, or add rules, routes, and policies as per requirements from HSL.
Provide audit evidence as and when required.
Assist the OEM/HSL project team in product upgrade and maintenance activities.
Log analysis and reporting using the native tool.
Capacity management, incident management, UAM, and firewall rule-base review.
Follow the change management process. The service window for this engagement is 16/6 (two shifts). In the absence of the onsite resource, there should be an immediate replacement.
Provision and configure FortiGate devices for SD-WAN functionality, including defining WAN links, VPN tunnels, and traffic-shaping policies.
Deploy and manage SD-WAN overlays to optimize network performance and reliability.
Define and enforce traffic policies based on application types, quality of service (QoS) requirements, and security policies.
Implement dynamic path selection and traffic-steering rules to ensure efficient utilization of WAN links.
Monitor the performance and health of SD-WAN links and devices using Fortinet management tools.
Troubleshoot network connectivity issues, latency, and packet loss in the SD-WAN environment.
Analyze traffic patterns and utilization statistics to identify potential bottlenecks and optimize network performance.
Integrate security features such as firewall, intrusion prevention system (IPS), and web filtering with SD-WAN policies to ensure secure access to applications and data.
Configure security policies to inspect and filter traffic at the WAN edge to protect against threats and vulnerabilities.
Configure QoS policies to prioritize critical applications and traffic types over less important ones.
Implement traffic shaping and bandwidth management techniques to ensure optimal performance for real-time applications like voice and video conferencing.
Monitor network utilization and capacity trends to forecast future bandwidth requirements.
Scale SD-WAN infrastructure to accommodate growing traffic demands and business needs.
Maintain up-to-date documentation of SD-WAN configurations, policies, and procedures.
Generate regular reports on network performance, uptime, and security events for management and compliance purposes.
Implement changes to SD-WAN configurations following best practices and change management procedures.
Coordinate with other IT teams to ensure seamless integration of SD-WAN changes with existing network infrastructure.
Provide end-user support for any issues caused by firewall policies.
Support the DC/DR headend device for change management, daily operations, hardware/software upgrades, modification, maintenance activities, and incidents.
Perform upgrade activities (hardware/software) as per OEM recommendations for headend and branch devices.
Close audit and VA (vulnerability assessment) points for headend and branch devices.
Support the existing inventory of FortiGate appliances (firewall, controller, AP, Analyzer) across DC, DR, and branches.
Coordinate and raise cases with ISPs (MPLS/P2P/Internet) for link-down, link-flapping, and high-latency issues (branch links and their hub DC/DR links).
Coordinate with ISPs for link configuration in the event of new link commissioning, link shifting, link bandwidth upgrades, or change of service provider (branch links and their hub DC/DR links).
Configure links on BGP/EIGRP/IGP and OSPF protocols.
Prepare daily/monthly/quarterly link utilization reports and publish them to seniors (a brief sketch follows this listing).
Follow the change management process and generate the change ID before executing any change.

Academic Qualifications and Certifications:
Bachelor's degree or equivalent qualification in IT/Computing (or demonstrated equivalent work experience).
Fortinet SD-WAN certification or equivalent certification.
Certifications relevant to the services provided (certifications carry additional weightage in a candidate's qualification for the role).

Workplace type: On-site Working

About NTT DATA
NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize, and transform for long-term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo.

Equal Opportunity Employer
NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.
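The responsibilities above include preparing periodic link utilization reports for the SD-WAN branch links. As a rough illustration of the arithmetic behind such a report, here is a small Python sketch; the link names, counters, intervals, capacities, and the 80% review threshold are all hypothetical and are not taken from the listing.

from dataclasses import dataclass

@dataclass
class LinkSample:
    link: str
    bits_transferred: int   # bits moved during the polling interval
    interval_seconds: int   # length of the polling interval
    capacity_bps: int       # provisioned bandwidth of the link

def utilization_percent(sample: LinkSample) -> float:
    """Average utilization of one link over one polling interval."""
    average_bps = sample.bits_transferred / sample.interval_seconds
    return 100.0 * average_bps / sample.capacity_bps

# Hypothetical 5-minute samples for two branch links.
samples = [
    LinkSample("branch-01-mpls", bits_transferred=9_000_000_000, interval_seconds=300, capacity_bps=50_000_000),
    LinkSample("branch-01-internet", bits_transferred=27_000_000_000, interval_seconds=300, capacity_bps=100_000_000),
]

for s in samples:
    pct = utilization_percent(s)
    flag = "REVIEW" if pct > 80 else "ok"   # 80% threshold is illustrative
    print(f"{s.link}: {pct:.1f}% average utilization [{flag}]")

In practice the counters would come from the NMS or Fortinet management tooling rather than hard-coded values, and the report would aggregate to daily and monthly figures, but the utilization formula itself is the same.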
Posted 1 day ago
7.0 years
40 Lacs
Surat, Gujarat, India
Remote
Experience: 7.00+ years
Salary: INR 4,000,000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(*Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must-have skills: AWS Q, CodeWhisperer, Gen AI, CI/CD, containerization, Go, microservices, RESTful APIs, MySQL, PHP, PostgreSQL

MatchMove is looking for:
As a Technical Lead (Backend), you will play a pivotal role in shaping the engineering foundation for a robust, real-time, cross-border payment platform. You'll be writing clean, secure, and scalable Go services powering billions in financial flows, while championing engineering excellence and thoughtful platform design.

You will contribute to:
Developing and scaling distributed payment transaction systems for cross-border and domestic remittance use cases.
Designing resilient microservices in Go for high-volume, low-latency transaction flows with regional compliance and localization.
Owning service-level metrics such as SLA adherence, latency (p95/p99), throughput, and availability.
Building API-first products with strong documentation, mocks, and observability from day one.
Enabling faster, safer development by leveraging Generative AI for test generation, documentation, and repetitive coding tasks, while maintaining engineering hygiene.
Mentoring a high-performing, globally distributed engineering team and contributing to code reviews, design sessions, and cross-team collaboration.

Responsibilities:
Lead design and development of backend services in Go with concurrency, memory safety, and observability in mind.
Manage service uptime and reliability across multi-region deployments via dashboards, tracing, and alerting.
Maintain strict SLAs for mission-critical payment operations and support incident response during SLA violations.
Profile and optimize Go services using tools like pprof, benchstat, and the Go race detector.
Drive code quality through test-driven development, code reviews, and API-first workflows (OpenAPI / Swagger).
Collaborate cross-functionally with Product, QA, DevOps, Compliance, and Business to ensure production-readiness.
Maintain well-documented service boundaries and internal libraries for scalable engineering velocity.
Encourage strategic use of Generative AI for API mocking, test data generation, schema validation, and static analysis.
Advocate for clean architecture, technical debt remediation, and security best practices (e.g., rate limiting, mTLS, context timeouts).

Requirements:
At least 7 years of engineering experience with deep expertise in Go (Golang).
Expert-level understanding of concurrency, goroutines, channels, synchronization primitives, and distributed coordination patterns.
Strong grasp of profiling and debugging Go applications, memory management, and performance tuning.
Proven experience in instrumenting production systems for SLAs/SLIs with tools like Prometheus, Grafana, or OpenTelemetry.
Solid experience with PostgreSQL / MySQL, schema design for high-consistency systems, and transaction lifecycle in financial services.
Experience building, documenting, and scaling RESTful APIs in an API-first platform environment.
Comfort with cloud-native tooling, containerization, and DevOps workflows (CI/CD, blue-green deployment, rollback strategies).
Demonstrated understanding of observability practices: structured logging, distributed tracing, and alerting workflows.

Brownie Points:
Experience in payments, card issuance, or remittance infrastructure.
Working knowledge of PHP (for legacy systems).
Contributions to Go open-source projects or public technical content.
Experience with GenAI development tools like AWS Q and CodeWhisperer in a team setting.
Track record of delivering high-quality services in regulated environments with audit, compliance, and security mandates.

Engagement Model: Direct placement with client. This is a remote role. Shift timings: 10 AM to 7 PM.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this one on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 day ago
7.0 years
40 Lacs
Surat, Gujarat, India
Remote
Experience: 7.00+ years
Salary: INR 4,000,000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(*Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must-have skills: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for:
As Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards).
Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
At least 7 years of experience in data engineering.
Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with client. This is a remote role. Shift timings: 10 AM to 7 PM.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this one on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 1 day ago
2.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Job Title: Network Security Analyst – IT Audit & ISO 27001
Location: Coimbatore (Work from Office)
Experience: 2+ years
Availability: Immediate joiners preferred

Job Description:
We are seeking a motivated and skilled Network Security Analyst with experience in IT audit and ISO 27001 implementation to join our team in Coimbatore. The ideal candidate will play a key role in assessing and strengthening our network security infrastructure while ensuring compliance with information security standards.

Key Responsibilities:
Perform regular network security assessments and vulnerability reviews.
Monitor and manage firewalls, IDS/IPS, VPNs, and endpoint security controls.
Conduct IT audits focusing on infrastructure, access control, and change management.
Assist in implementing and maintaining ISO 27001 standards, including risk assessments, controls mapping, and documentation.
Coordinate with internal teams to remediate audit findings and ensure continuous compliance.
Maintain and update security policies, procedures, and incident response plans.
Support security awareness initiatives and training programs.

Requirements:
Minimum 2 years of experience in network security and IT audits.
Solid understanding of TCP/IP, network protocols, and security controls.
Working knowledge of the ISO 27001 framework, including internal audits and documentation.
Experience with firewalls, IDS/IPS, antivirus, and SIEM tools.
Strong analytical, communication, and documentation skills.
Preferred certifications: ISO 27001 LA, CEH, CompTIA Security+.
Posted 1 day ago
0 years
0 Lacs
Mahesana, Gujarat, India
On-site
We are hiring on behalf of our esteemed client, a well-established company in the share broking industry.

Job Summary:
The Terminal Operator plays a critical role in supporting trading operations by managing and operating stock market trading terminals such as NSE NOW, BSE BOLT, ODIN, NEAT, Bloomberg, or Refinitiv. The role involves executing trades on behalf of clients or the firm, ensuring compliance with market regulations, and maintaining trading system integrity and uptime. The Terminal Operator acts as a bridge between dealers, clients, and back-office operations.

Location: Mehsana, Gujarat

Key Responsibilities:
Operate equity and derivatives trading terminals (NSE, BSE, MCX, etc.).
Execute trades accurately and swiftly on behalf of clients or dealers.
Monitor market movements and terminal alerts in real time.
Maintain client order books and ensure trade confirmations are sent.
Ensure all trades are within regulatory and risk limits.
Troubleshoot terminal issues and coordinate with software vendors or IT support.
Assist in daily market opening/closing activities and system readiness checks.
Coordinate with risk, compliance, and back-office teams for smooth trade settlements.
Keep records of trades, margin reports, and audit trails as per SEBI and exchange regulations.
Provide support to relationship managers and dealers for order routing, price discovery, and research tools.
Maintain confidentiality and integrity of trading data and client information.

Education: Bachelor's degree in Commerce, Finance, Business Administration, or a related field.
Job Type: Full-time
Posted 1 day ago