7.0 years
40 Lacs
Bhubaneswar, Odisha, India
Remote
Experience: 7.00+ years
Salary: INR 4,000,000.00/year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients, MatchMove.)

What do you need for this opportunity?
Must-have skills: GenAI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for: Technical Lead - Data Platform

As Technical Lead - Data, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation (a minimal sketch of this kind of pipeline step appears after this listing).
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs and Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model: Direct placement with client
This is a remote role.
Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
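To make the Iceberg and Glue responsibilities above concrete, here is a minimal, illustrative PySpark sketch of the kind of pipeline step the role describes. It is not MatchMove's actual code: it assumes the Iceberg Spark runtime and AWS Glue catalog integration are already on the classpath, and the catalog, database, table, and S3 paths are hypothetical placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Spark session with an Iceberg catalog ("glue_catalog") backed by AWS Glue.
# Assumes the Iceberg Spark runtime and AWS bundle are already available.
spark = (
    SparkSession.builder.appName("iceberg-ingest-sketch")
    .config("spark.sql.catalog.glue_catalog", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue_catalog.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue_catalog.warehouse", "s3://example-data-lake/warehouse/")
    .getOrCreate()
)

# Create the table once, partitioned by day so time-range queries prune files.
spark.sql(
    """
    CREATE TABLE IF NOT EXISTS glue_catalog.finance.transactions (
        txn_id        STRING,
        account_id    STRING,
        amount        DECIMAL(18, 2),
        txn_timestamp TIMESTAMP
    )
    USING iceberg
    PARTITIONED BY (days(txn_timestamp))
    """
)

# Read a raw batch that DMS has landed in S3 (Parquet assumed here),
# align it to the table schema, and append it.
raw = spark.read.parquet("s3://example-data-lake/raw/transactions/")
batch = raw.select(
    "txn_id",
    "account_id",
    F.col("amount").cast("decimal(18,2)").alias("amount"),
    "txn_timestamp",
)

# Each append creates an Iceberg snapshot, which enables time-travel queries downstream.
batch.writeTo("glue_catalog.finance.transactions").append()
```

Partitioning by days(txn_timestamp) is a common default for transaction data, since most downstream fraud and reconciliation queries filter on a time window.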
Posted 1 month ago
7.0 years
40 Lacs
Cuttack, Odisha, India
Remote
Technical Lead - Data Platform (MatchMove, via Uplers). Same role and full description as the Bhubaneswar listing above.
Posted 1 month ago
7.0 years
40 Lacs
Guwahati, Assam, India
Remote
Technical Lead - Data Platform (MatchMove, via Uplers). Same role and full description as the Bhubaneswar listing above.
Posted 1 month ago
7.0 years
40 Lacs
Jamshedpur, Jharkhand, India
Remote
Technical Lead - Data Platform (MatchMove, via Uplers). Same role and full description as the Bhubaneswar listing above.
Posted 1 month ago
7.0 years
40 Lacs
Raipur, Chhattisgarh, India
Remote
Technical Lead - Data Platform (MatchMove, via Uplers). Same role and full description as the Bhubaneswar listing above.
Posted 1 month ago
7.0 years
40 Lacs
Ranchi, Jharkhand, India
Remote
Technical Lead - Data Platform (MatchMove, via Uplers). Same role and full description as the Bhubaneswar listing above.
Posted 1 month ago
7.0 years
40 Lacs
Amritsar, Punjab, India
Remote
Technical Lead - Data Platform (MatchMove, via Uplers). Same role and full description as the Bhubaneswar listing above.
Posted 1 month ago
40.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
The threat to data has never been greater. Oracle Database helps reduce the risk of a data breach and simplifies regulatory compliance with security solutions for encryption and key management, granular access controls, flexible data masking, comprehensive activity monitoring, and sophisticated auditing capabilities. Oracle Data Safe is a cloud service built on OCI that offers database security posture management for all your Oracle Databases, on-premises or in the cloud. Assess security, detect configuration and user drift, find and mask sensitive data, and collect audit data for analysis, alerting, and reporting. With the new in-database SQL Firewall policy management, it provides intelligence to detect malicious activities and SQL injection attacks. You will lead a team of dedicated security professionals building a state-of-the-art security cloud solution to secure the sensitive data in mission-critical databases. We will apply GenAI technologies to solve challenging security problems.

Career Level: M4

Responsibilities
As a director of the software engineering division, you will apply your extensive knowledge of software architecture to manage software development tasks associated with developing, debugging, or designing software applications, operating systems, and databases according to provided design specifications. Build enhancements within an existing software architecture and envision future improvements to the architecture.

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.
Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.
Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veteran status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 1 month ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Candidates ready to join immediately can share their details via email for quick processing. 📌 CCTC | ECTC | Notice Period | Location Preference to nitin.patil@ust.com. Act fast for immediate attention! ⏳📩

Roles and Responsibilities:

Architecture & Infrastructure Design
- Architect scalable, resilient, and secure AI/ML infrastructure on AWS using services such as EC2, SageMaker, Bedrock, VPC, RDS, DynamoDB, and CloudWatch.
- Develop Infrastructure as Code (IaC) using Terraform and automate deployments with CI/CD pipelines.
- Optimize cost and performance of cloud resources used for AI workloads.

AI Project Leadership
- Translate business objectives into actionable AI strategies and solutions.
- Oversee the entire AI lifecycle, from data ingestion, model training, and evaluation to deployment and monitoring.
- Drive roadmap planning, delivery timelines, and project success metrics.

Model Development & Deployment
- Lead selection and development of AI/ML models, particularly for NLP, GenAI, and AIOps use cases.
- Implement frameworks for bias detection, explainability, and responsible AI.
- Enhance model performance through tuning and efficient resource utilization.

Security & Compliance
- Ensure data privacy, security best practices, and compliance with IAM policies, encryption standards, and regulatory frameworks.
- Perform regular audits and vulnerability assessments to ensure system integrity.

Team Leadership & Collaboration
- Lead and mentor a team of cloud engineers, ML practitioners, software developers, and data analysts.
- Promote cross-functional collaboration with business and technical stakeholders.
- Conduct technical reviews and ensure delivery of production-grade solutions.

Monitoring & Maintenance
- Establish robust model monitoring, alerting, and feedback loops to detect drift and maintain model reliability.
- Ensure ongoing optimization of infrastructure and ML pipelines.

Must-Have Skills:
- 10+ years of experience in IT, with 4+ years in AI/ML leadership roles.
- Strong hands-on experience with AWS services: EC2, SageMaker, Bedrock, RDS, VPC, DynamoDB, CloudWatch (a minimal SageMaker sketch follows this listing).
- Expertise in Python for ML development and automation.
- Solid understanding of Terraform, Docker, Git, and CI/CD pipelines.
- Proven track record of delivering AI/ML projects into production environments.
- Deep understanding of MLOps, model versioning, monitoring, and retraining pipelines.
- Experience implementing Responsible AI practices, including fairness, explainability, and bias mitigation.
- Knowledge of cloud security best practices and IAM role configuration.
- Excellent leadership, communication, and stakeholder management skills.

Good-to-Have Skills:
- AWS certifications such as AWS Certified Machine Learning – Specialty or AWS Certified Solutions Architect.
- Familiarity with data privacy laws and frameworks (GDPR, HIPAA).
- Experience with AI governance and ethical AI frameworks.
- Expertise in cost optimization and performance tuning for AI on the cloud.
- Exposure to LangChain, LLMs, Kubeflow, or GCP-based AI services.

Skills: Enterprise Architecture, Enterprise Architect, AWS, Python
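As a rough illustration of the hands-on SageMaker and Python skills the posting asks for, the sketch below launches a training job with the SageMaker Python SDK. It is illustrative only: the role ARN, ECR image URI, S3 paths, and hyperparameters are hypothetical placeholders, and a real setup would pin the container image and wire this step into a CI/CD pipeline.

```python
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

# Hypothetical training container and IAM role; replace with real values in practice.
estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/example-training:latest",
    role="arn:aws:iam::123456789012:role/ExampleSageMakerRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://example-bucket/model-artifacts/",
    sagemaker_session=session,
    hyperparameters={"epochs": "10", "learning_rate": "0.001"},
)

# Start training against data already staged in S3; SageMaker provisions the
# instance, runs the container, and writes model artifacts to output_path.
estimator.fit({"train": "s3://example-bucket/training-data/"})
```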
Posted 1 month ago
10.0 years
0 Lacs
Thiruvananthapuram, Kerala, India
On-site
Same AI/ML infrastructure and leadership role as the Chennai listing above (UST; candidates ready to join immediately can email CCTC | ECTC | Notice Period | Location Preference to nitin.patil@ust.com).
Posted 1 month ago
6.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Description

Job Summary
We are seeking a Senior IaC Engineer to architect, develop, and automate D&A GCP PaaS services and Databricks platform provisioning using Terraform, Spacelift, and GitHub. This role combines the depth of platform engineering with the principles of reliability engineering, enabling resilient, secure, and scalable cloud environments. The ideal candidate has 6+ years of hands-on experience with IaC, CI/CD, infrastructure automation, and driving cloud infrastructure reliability.

Key Responsibilities

Infrastructure & Automation
- Design, implement, and manage modular, reusable Terraform modules to provision GCP resources (BigQuery, GCS, VPC, IAM, Pub/Sub, Composer, etc.).
- Automate provisioning of Databricks workspaces, clusters, jobs, service principals, and permissions using Terraform.
- Build and maintain CI/CD pipelines for infrastructure deployment and compliance using GitHub Actions and Spacelift.
- Standardize and enforce GitOps workflows for infrastructure changes, including code reviews and testing.
- Integrate infrastructure cost control, policy-as-code, and secrets management into automation pipelines.

Architecture & Reliability
- Lead the design of scalable and highly reliable infrastructure patterns across GCP and Databricks.
- Implement resiliency and fault-tolerant designs, backup/recovery mechanisms, and automated alerting around infrastructure components.
- Partner with SRE and DevOps teams to enable observability, performance monitoring, and automated incident-response tooling.
- Develop proactive monitoring and drift detection for Terraform-managed resources (a minimal drift-check sketch follows this listing).
- Contribute to reliability reviews, runbooks, and disaster recovery strategies for cloud resources.

Collaboration & Governance
- Work closely with security, networking, FinOps, and platform teams to ensure compliance, cost-efficiency, and best practices.
- Define Terraform standards, module registries, and access patterns for scalable infrastructure usage.
- Provide mentorship, peer code reviews, and knowledge sharing across engineering teams.

Required Skills & Experience
- 6+ years of experience with Terraform and Infrastructure as Code (IaC), with deep expertise in GCP provisioning.
- Experience automating Databricks (clusters, jobs, users, ACLs) using Terraform.
- Strong hands-on experience with Spacelift (or similar tools such as Terraform Cloud or Atlantis) and GitHub CI/CD workflows.
- Deep understanding of infrastructure reliability principles: HA, fault tolerance, rollback strategies, and zero-downtime deployments.
- Familiarity with monitoring/logging frameworks (Cloud Monitoring, Stackdriver, Datadog, etc.).
- Strong scripting and debugging skills to troubleshoot infrastructure or CI/CD failures.
- Proficiency with GCP networking, IAM policies, folder/project structure, and Org Policy configuration.

Nice to Have
- HashiCorp Certified: Terraform Associate or Architect.
- Familiarity with SRE principles (SLOs, error budgets, alerting).
- Exposure to FinOps strategies: cost controls, tagging policies, budget alerts.
- Experience with container orchestration (GKE/Kubernetes); Cloud Composer is a plus.

No relocation support available.

Business Unit Summary
At Mondelēz International, our purpose is to empower people to snack right by offering the right snack, for the right moment, made the right way. That means delivering a broad range of delicious, high-quality snacks that nourish life's moments, made with sustainable ingredients and packaging that consumers can feel good about. We have a rich portfolio of strong brands, both global and local, including many household names such as Oreo, belVita and LU biscuits; Cadbury Dairy Milk, Milka and Toblerone chocolate; Sour Patch Kids candy and Trident gum. We are proud to hold the top position globally in biscuits, chocolate and candy, and the second top position in gum. Our 80,000 makers and bakers are located in more than 80 countries and we sell our products in over 150 countries around the world. Our people are energized for growth and critical to us living our purpose and values. We are a diverse community that can make things happen, and happen fast.

Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.

Job Type: Regular
Analytics & Modelling
Analytics & Data Science
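The drift-detection responsibility above can be illustrated with a small Python wrapper around the Terraform CLI. This is a minimal sketch, not Mondelēz tooling: it assumes Terraform is installed and authenticated for the target project, and the module path is a hypothetical placeholder. `terraform plan -detailed-exitcode` returns 0 when there are no changes, 2 when changes (i.e., drift) are pending, and 1 on error.

```python
import subprocess
import sys

TERRAFORM_DIR = "infra/gcp-data-platform"  # hypothetical module path


def check_drift(workdir: str) -> bool:
    """Return True if live infrastructure has drifted from the Terraform configuration."""
    subprocess.run(["terraform", "init", "-input=false"], cwd=workdir, check=True)
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false", "-no-color"],
        cwd=workdir,
        capture_output=True,
        text=True,
    )
    if result.returncode == 1:
        raise RuntimeError(f"terraform plan failed:\n{result.stderr}")
    return result.returncode == 2


if __name__ == "__main__":
    if check_drift(TERRAFORM_DIR):
        # In a real pipeline this would page on-call or open a ticket instead of printing.
        print("Drift detected: plan shows pending changes.")
        sys.exit(2)
    print("No drift detected.")
```

In practice a scheduled Spacelift or GitHub Actions job would run this check per stack and route the exit code into alerting.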
Posted 1 month ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Hiring for top Unicorns & Soonicorns of India!

We’re looking for a Machine Learning Engineer who thrives at the intersection of data, technology, and impact. You’ll be part of a fast-paced team that leverages ML/AI to personalize learning journeys, optimize admissions, and drive better student outcomes. This role is ideal for someone who enjoys building scalable models and deploying them in production to solve real-world problems.

What You’ll Do
- Build and deploy ML models to power intelligent features across the Masai platform, from admissions intelligence to student performance prediction (a minimal training sketch follows this listing).
- Collaborate with product, engineering, and data teams to identify opportunities for ML-driven improvements.
- Clean, process, and analyze large-scale datasets to derive insights and train models.
- Design A/B tests and evaluate model performance using robust statistical methods.
- Continuously iterate on models based on feedback, model drift, and changing business needs.
- Maintain and scale the ML infrastructure to ensure smooth production deployments and monitoring.

What We’re Looking For
- 2–4 years of experience as a Machine Learning Engineer or Data Scientist.
- Strong grasp of supervised, unsupervised, and deep learning techniques.
- Proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch, etc.).
- Experience with data wrangling tools like Pandas, NumPy, and SQL.
- Familiarity with model deployment tools like Flask, FastAPI, or MLflow.
- Experience working with cloud platforms (AWS/GCP/Azure) and containerization (Docker/Kubernetes) is a plus.
- Ability to translate business problems into machine learning problems and communicate solutions clearly.

Bonus If You Have
- Experience working in EdTech or with personalized learning systems.
- Prior exposure to NLP, recommendation systems, or predictive modeling in a consumer-facing product.
- Contributions to open-source ML projects or publications in the space.
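For illustration, the sketch below shows the kind of student-outcome classifier the posting mentions, trained with scikit-learn. The dataset path, feature names, and label are invented placeholders, not Masai's actual data or methodology.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical dataset: one row per student with engagement features and a
# binary label indicating successful course completion.
df = pd.read_csv("students.csv")
features = ["attendance_rate", "assignments_submitted", "avg_quiz_score", "forum_posts"]
X, y = df[features], df["completed_course"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

# Evaluate with AUC, which tolerates class imbalance in completion rates.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"hold-out AUC: {auc:.3f}")
```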
Posted 1 month ago
4.0 - 6.0 years
6 - 8 Lacs
Chennai
On-site
Designation: Senior Analyst – Data Science Level: L2 Experience: 4 to 6 years Location: Chennai Job Description: We are seeking an experienced MLOps Engineer with 4-6 years of experience to join our dynamic team. In this role, you will build and maintain robust machine learning infrastructure that enables our data science team to deploy and scale models for credit risk assessment, fraud detection, and revenue forecasting. The ideal candidate has extensive experience with MLOps tools, production deployment, and scaling ML systems in financial services environments. Responsibilities: Design, build, and maintain scalable ML infrastructure for deploying credit risk models, fraud detection systems, and revenue forecasting models to production Implement and manage ML pipelines using Metaflow for model development, training, validation, and deployment Develop CI/CD pipelines for machine learning models ensuring reliable and automated deployment processes Monitor model performance in production and implement automated retraining and rollback mechanisms Collaborate with data scientists to productionize models and optimize them for performance and scalability Implement model versioning, experiment tracking, and metadata management systems Build monitoring and alerting systems for model drift, data quality, and system performance Manage containerization and orchestration of ML workloads using Docker and Kubernetes Optimize model serving infrastructure for low-latency predictions and high throughput Ensure compliance with financial regulations and implement proper model governance frameworks Skills: 4-6 years of professional experience in MLOps, DevOps, or ML engineering, preferably in fintech or financial services Strong expertise in deploying and scaling machine learning models in production environments Extensive experience with Metaflow for ML pipeline orchestration and workflow management Advanced proficiency with Git and version control systems, including branching strategies and collaborative workflows Experience with containerization technologies (Docker) and orchestration platforms (Kubernetes) Strong programming skills in Python with experience in ML libraries (pandas, numpy, scikit-learn) Experience with CI/CD tools and practices for ML workflows Knowledge of distributed computing and cloud-based ML infrastructure Understanding of model monitoring, A/B testing, and feature store management. Additional Skillsets: Experience with Hex or similar data analytics platforms Knowledge of credit risk modeling, fraud detection, or revenue forecasting systems Experience with real-time model serving and streaming data processing Familiarity with MLFlow, Kubeflow, or other ML lifecycle management tools Understanding of financial regulations and model governance requirements Job Snapshot Updated Date 13-06-2025 Job ID J_3745 Location Chennai, Tamil Nadu, India Experience 4 - 6 Years Employee Type Permanent
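Since Metaflow is called out explicitly above, a minimal sketch of how such a training flow could be laid out is shown below; the synthetic dataset and logistic-regression model are placeholders, not the team's actual credit-risk pipeline.

```python
from metaflow import FlowSpec, step

class CreditRiskTrainingFlow(FlowSpec):
    """Toy Metaflow pipeline: load data, train, validate, then end.
    Each @step runs as its own task, so the flow can be scheduled,
    resumed, and versioned by the Metaflow service."""

    @step
    def start(self):
        from sklearn.datasets import make_classification
        self.X, self.y = make_classification(n_samples=2000, n_features=20, random_state=0)
        self.next(self.train)

    @step
    def train(self):
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        self.X_tr, self.X_val, self.y_tr, self.y_val = train_test_split(
            self.X, self.y, test_size=0.25, random_state=0)
        self.model = LogisticRegression(max_iter=1000).fit(self.X_tr, self.y_tr)
        self.next(self.validate)

    @step
    def validate(self):
        from sklearn.metrics import roc_auc_score
        self.auc = roc_auc_score(self.y_val, self.model.predict_proba(self.X_val)[:, 1])
        print(f"Validation AUC: {self.auc:.3f}")
        self.next(self.end)

    @step
    def end(self):
        print("Flow finished; artifacts (model, auc) are stored by Metaflow.")

if __name__ == "__main__":
    CreditRiskTrainingFlow()
```

Running `python credit_risk_flow.py run` executes the steps in order, and every artifact assigned to `self` is versioned per run, which is what makes experiment tracking and rollback practical.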
Posted 1 month ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description RESPONSIBILITIES Design and implement CI/CD pipelines for AI and ML model training, evaluation, and RAG system deployment (including LLMs, vector DBs, embedding and reranking models, governance and observability systems, and guardrails). Provision and manage AI infrastructure across cloud hyperscalers (AWS/GCP), using infrastructure-as-code tools (strong preference for Terraform). Maintain containerized environments (Docker, Kubernetes) optimized for GPU workloads and distributed compute. Support vector database, feature store, and embedding store deployments (e.g., pgVector, Pinecone, Redis, Featureform, MongoDB Atlas, etc.). Monitor and optimize performance, availability, and cost of AI workloads, using observability tools (e.g., Prometheus, Grafana, Datadog, or managed cloud offerings). Collaborate with data scientists, AI/ML engineers, and other members of the platform team to ensure smooth transitions from experimentation to production. Implement security best practices including secrets management, model access control, data encryption, and audit logging for AI pipelines. Help support the deployment and orchestration of agentic AI systems (LangChain, LangGraph, CrewAI, Copilot Studio, AgentSpace, etc.). Must Haves: 4+ years of DevOps, MLOps, or infrastructure engineering experience, preferably with 2+ years in AI/ML environments. Hands-on experience with cloud-native services (AWS Bedrock/SageMaker, GCP Vertex AI, or Azure ML) and GPU infrastructure management. Strong skills in CI/CD tools (GitHub Actions, ArgoCD, Jenkins) and configuration management (Ansible, Helm, etc.). Proficient in scripting languages like Python and Bash (Go or similar is a nice plus). Experience with monitoring, logging, and alerting systems for AI/ML workloads. Deep understanding of Kubernetes and container lifecycle management. Bonus Attributes: Exposure to MLOps tooling such as MLflow, Kubeflow, SageMaker Pipelines, or Vertex Pipelines. Familiarity with prompt engineering, model fine-tuning, and inference serving. Experience with secure AI deployment and compliance frameworks. Knowledge of model versioning, drift detection, and scalable rollback strategies. Abilities: Ability to work with a high level of initiative, accuracy, and attention to detail. Ability to prioritize multiple assignments effectively. Ability to meet established deadlines. Ability to successfully, efficiently, and professionally interact with staff and customers. Excellent organization skills. Critical thinking ability on moderately to highly complex problems. Flexibility in meeting the business needs of the customer and the company. Ability to work creatively and independently with latitude and minimal supervision. Ability to utilize experience and judgment in accomplishing assigned goals. Experience in navigating organizational structure.
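As one concrete example of the vector-database work above, the sketch below provisions and queries a pgvector table from Python. The connection string, table name, and three-dimensional embeddings are hypothetical; a real deployment would use managed credentials and the embedding model's true dimensionality.

```python
import psycopg2

# Hypothetical connection details; in practice these come from a secrets manager.
conn = psycopg2.connect("dbname=ai_platform user=app host=localhost")
cur = conn.cursor()

# One-time setup: enable pgvector and create an embeddings table.
cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS doc_embeddings (
        id        bigserial PRIMARY KEY,
        content   text,
        embedding vector(3)   -- tiny dimension just for illustration
    );
""")
cur.execute(
    "INSERT INTO doc_embeddings (content, embedding) VALUES (%s, %s::vector), (%s, %s::vector);",
    ("doc A", "[0.1, 0.2, 0.3]", "doc B", "[0.9, 0.1, 0.4]"),
)
conn.commit()

# Nearest-neighbour lookup: '<->' is pgvector's L2 distance operator.
cur.execute(
    "SELECT content FROM doc_embeddings ORDER BY embedding <-> %s::vector LIMIT 1;",
    ("[0.1, 0.25, 0.29]",),
)
print("closest document:", cur.fetchone()[0])
cur.close()
conn.close()
```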
Posted 1 month ago
4.0 years
0 Lacs
West Bengal, India
On-site
Hiring for top Unicorns & Soonicorns of India! We’re looking for a Machine Learning Engineer who thrives at the intersection of data, technology, and impact. You’ll be part of a fast-paced team that leverages ML/AI to personalize learning journeys, optimize admissions, and drive better student outcomes. This role is ideal for someone who enjoys building scalable models and deploying them in production to solve real-world problems. What You’ll Do Build and deploy ML models to power intelligent features across the Masai platform — from admissions intelligence to student performance prediction. Collaborate with product, engineering, and data teams to identify opportunities for ML-driven improvements. Clean, process, and analyze large-scale datasets to derive insights and train models. Design A/B tests and evaluate model performance using robust statistical methods. Continuously iterate on models based on feedback, model drift, and changing business needs. Maintain and scale the ML infrastructure to ensure smooth production deployments and monitoring. What We’re Looking For 2–4 years of experience as a Machine Learning Engineer or Data Scientist. Strong grasp of supervised, unsupervised, and deep learning techniques. Proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch, etc.). Experience with data wrangling tools like Pandas, NumPy, and SQL. Familiarity with model deployment tools like Flask, FastAPI, or MLflow. Experience working with cloud platforms (AWS/GCP/Azure) and containerization (Docker/Kubernetes) is a plus. Ability to translate business problems into machine learning problems and communicate solutions clearly. Bonus If You Have Experience working in EdTech or with personalized learning systems. Prior exposure to NLP, recommendation systems, or predictive modeling in a consumer-facing product. Contributions to open-source ML projects or publications in the space. Skills: machine learning,flask,python,scikit-learn,tensorflow,pytorch,azure,sql,pandas,models,docker,mlflow,data analysis,aws,kubernetes,ml,artificial intelligence,fastapi,gcp,numpy Show more Show less
Posted 1 month ago
4.0 years
0 Lacs
Karnataka, India
On-site
Hiring for top Unicorns & Soonicorns of India! We’re looking for a Machine Learning Engineer who thrives at the intersection of data, technology, and impact. You’ll be part of a fast-paced team that leverages ML/AI to personalize learning journeys, optimize admissions, and drive better student outcomes. This role is ideal for someone who enjoys building scalable models and deploying them in production to solve real-world problems. What You’ll Do Build and deploy ML models to power intelligent features across the Masai platform — from admissions intelligence to student performance prediction. Collaborate with product, engineering, and data teams to identify opportunities for ML-driven improvements. Clean, process, and analyze large-scale datasets to derive insights and train models. Design A/B tests and evaluate model performance using robust statistical methods. Continuously iterate on models based on feedback, model drift, and changing business needs. Maintain and scale the ML infrastructure to ensure smooth production deployments and monitoring. What We’re Looking For 2–4 years of experience as a Machine Learning Engineer or Data Scientist. Strong grasp of supervised, unsupervised, and deep learning techniques. Proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch, etc.). Experience with data wrangling tools like Pandas, NumPy, and SQL. Familiarity with model deployment tools like Flask, FastAPI, or MLflow. Experience working with cloud platforms (AWS/GCP/Azure) and containerization (Docker/Kubernetes) is a plus. Ability to translate business problems into machine learning problems and communicate solutions clearly. Bonus If You Have Experience working in EdTech or with personalized learning systems. Prior exposure to NLP, recommendation systems, or predictive modeling in a consumer-facing product. Contributions to open-source ML projects or publications in the space. Skills: machine learning,flask,python,scikit-learn,tensorflow,pytorch,azure,sql,pandas,models,docker,mlflow,data analysis,aws,kubernetes,ml,artificial intelligence,fastapi,gcp,numpy Show more Show less
Posted 1 month ago
4.0 years
0 Lacs
Delhi, India
On-site
Hiring for top Unicorns & Soonicorns of India! We’re looking for a Machine Learning Engineer who thrives at the intersection of data, technology, and impact. You’ll be part of a fast-paced team that leverages ML/AI to personalize learning journeys, optimize admissions, and drive better student outcomes. This role is ideal for someone who enjoys building scalable models and deploying them in production to solve real-world problems. What You’ll Do Build and deploy ML models to power intelligent features across the Masai platform — from admissions intelligence to student performance prediction. Collaborate with product, engineering, and data teams to identify opportunities for ML-driven improvements. Clean, process, and analyze large-scale datasets to derive insights and train models. Design A/B tests and evaluate model performance using robust statistical methods. Continuously iterate on models based on feedback, model drift, and changing business needs. Maintain and scale the ML infrastructure to ensure smooth production deployments and monitoring. What We’re Looking For 2–4 years of experience as a Machine Learning Engineer or Data Scientist. Strong grasp of supervised, unsupervised, and deep learning techniques. Proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch, etc.). Experience with data wrangling tools like Pandas, NumPy, and SQL. Familiarity with model deployment tools like Flask, FastAPI, or MLflow. Experience working with cloud platforms (AWS/GCP/Azure) and containerization (Docker/Kubernetes) is a plus. Ability to translate business problems into machine learning problems and communicate solutions clearly. Bonus If You Have Experience working in EdTech or with personalized learning systems. Prior exposure to NLP, recommendation systems, or predictive modeling in a consumer-facing product. Contributions to open-source ML projects or publications in the space. Skills: machine learning,flask,python,scikit-learn,tensorflow,pytorch,azure,sql,pandas,models,docker,mlflow,data analysis,aws,kubernetes,ml,artificial intelligence,fastapi,gcp,numpy Show more Show less
Posted 1 month ago
4.0 years
0 Lacs
Maharashtra, India
On-site
Hiring for top Unicorns & Soonicorns of India! We’re looking for a Machine Learning Engineer who thrives at the intersection of data, technology, and impact. You’ll be part of a fast-paced team that leverages ML/AI to personalize learning journeys, optimize admissions, and drive better student outcomes. This role is ideal for someone who enjoys building scalable models and deploying them in production to solve real-world problems. What You’ll Do Build and deploy ML models to power intelligent features across the Masai platform — from admissions intelligence to student performance prediction. Collaborate with product, engineering, and data teams to identify opportunities for ML-driven improvements. Clean, process, and analyze large-scale datasets to derive insights and train models. Design A/B tests and evaluate model performance using robust statistical methods. Continuously iterate on models based on feedback, model drift, and changing business needs. Maintain and scale the ML infrastructure to ensure smooth production deployments and monitoring. What We’re Looking For 2–4 years of experience as a Machine Learning Engineer or Data Scientist. Strong grasp of supervised, unsupervised, and deep learning techniques. Proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch, etc.). Experience with data wrangling tools like Pandas, NumPy, and SQL. Familiarity with model deployment tools like Flask, FastAPI, or MLflow. Experience working with cloud platforms (AWS/GCP/Azure) and containerization (Docker/Kubernetes) is a plus. Ability to translate business problems into machine learning problems and communicate solutions clearly. Bonus If You Have Experience working in EdTech or with personalized learning systems. Prior exposure to NLP, recommendation systems, or predictive modeling in a consumer-facing product. Contributions to open-source ML projects or publications in the space. Skills: machine learning,flask,python,scikit-learn,tensorflow,pytorch,azure,sql,pandas,models,docker,mlflow,data analysis,aws,kubernetes,ml,artificial intelligence,fastapi,gcp,numpy Show more Show less
Posted 1 month ago
3.0 - 5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Responsibilities Testing of various vendors' metering-related software (BCS & CMRI) to ensure it is in line with business requirements. Providing support to internal stakeholders (i.e., Vigilance, I&C, LTP2) on software updates. Evaluation and testing of new electricity meters onsite at vendor locations and/or in the laboratory. Identifying and reinstating IMTE instruments for proper functioning, thereby bringing down the overall repair cost of instruments. Coordinating implementation of AMR-based billing reads for HT, LTP2, EA & streetlight meters, thereby reducing billing time, increasing billing efficiency, and enabling early/timely realization of revenue. Calculating drift of reference instruments by monitoring calibration trends and history, thus ensuring instruments operate within specified limits. Calculating uncertainty for the accredited scope. Performing the mandatory quality assurance programme, thus assuring trust in the reliability of generated laboratory results. Developing innovative ideas in the context of various stakeholders to enhance efficiency and brand image. Qualifications Qualification: BE / BTech – Electrical / Electronics Experience - 3 - 5 years
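The drift calculation mentioned above is usually a trend fitted over successive calibration results. The sketch below is only an illustration with made-up calibration history and an assumed acceptance limit; actual limits and test points come from the instrument's specification and the lab's accredited scope.

```python
import numpy as np

# Hypothetical calibration history of a reference meter: years since the
# first calibration vs. observed error (%) at a fixed test point.
years = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
error_pct = np.array([0.010, 0.014, 0.019, 0.022, 0.027])

# Least-squares linear trend: the slope is the drift per year.
slope, intercept = np.polyfit(years, error_pct, 1)
print(f"Estimated drift: {slope:+.4f} % per year")

# Simple check against a hypothetical acceptance limit for the instrument.
limit_pct = 0.05
years_to_limit = (limit_pct - error_pct[-1]) / slope if slope > 0 else float("inf")
print(f"At this trend the {limit_pct}% limit is reached in ~{years_to_limit:.1f} years")
```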
Posted 1 month ago
6.0 years
0 Lacs
Kochi, Kerala, India
On-site
Job Title: Flutter Developer Experience: 6+ Years Location: Kochi & Pollachi Notice Period: Immediate to Maximum 10 Days Key Responsibilities: Design, develop, and maintain high-quality mobile applications using Flutter. Translate functional requirements into responsive and efficient solutions. Follow Android-recommended design principles, interface guidelines, and coding best practices. Deploy and manage applications on the Google Play Store, Apple App Store, and enterprise app stores. Collaborate with backend developers to define, integrate, and ship new features. Ensure optimal application performance, quality, and responsiveness. Take end-to-end ownership of the mobile solution lifecycle, from design through deployment and support. Technical Skills & Requirements: Deep expertise in Flutter and the Dart programming language. Proficiency with Flutter state management solutions like Provider, Riverpod, and Flutter Hooks. Experience working with Flutter tools and libraries such as Freezed, Dio, Shared Preferences, GoRouter, and Drift (Moor). Strong understanding of RESTful APIs and integration of third-party SDKs/APIs. Experience with version control systems such as Git. Strong problem-solving abilities and keen attention to detail. Ability to work independently and deliver results in a fast-paced environment.
Posted 1 month ago
12.0 years
0 Lacs
Goa, India
On-site
If you are wondering what makes TRIMEDX different, it's that all of our associates share in a common purpose of serving clients, patients, communities, and each other with equal measures of care and performance. Everyone is focused on serving the customer and we do that by collaborating and supporting each other Associates look forward to coming to work each day Every associate matters and makes a difference It is truly a culture like no other — We hope you will join our team! Find out more about our company and culture here. TRIMEDX is an industry-leading, independent clinical asset management company delivering comprehensive clinical engineering services, clinical asset informatics, and medical device cybersecurity solutions to many of the largest health systems in the US. We help healthcare providers transform their clinical assets into strategic tools, driving reductions in operational expenses, optimizing clinical asset capital purchasing decisions and usage, improving caregiver satisfaction and productivity, maximizing resources for patient care, and delivering improved patient safety & protection. Health systems in the US spend, on average, 30% of their annual capital budget dollars on clinical assets, representing more than $200 billion of annual US sales, forecasted to grow at a CAGR of more than 5% in the next decade. Industry data solutions to assist providers in rationalizing their clinical asset utilization and purchasing decisions are limited, forcing providers to rely heavily on equipment manufacturers for advice and insight. TRIMEDX was built by providers, for providers, and leverages a history of expert clinical engineering with data on 90-95% of in-use medical equipment in the United States and an industry-leading data set of more than 6 million medical device records. A recent study by Fortune states that global healthcare asset management market is estimated to be $215B by 2032 with a CAGR of 25.3%. The United States is the largest single market for medical devices and accounts for about 40 percent of worldwide sales (BMI Research 2015). We are looking for the Chief Data Scientist who can accelerate our growth as a thought leader in Data Science, Research, and application of models to solve complex business problems. This person will lead the Company’s efforts to create a data science practice dedicated to harnessing this proprietary data set to support commercialization of novel new market solutions to enable providers to make informed decisions regarding their clinical asset investments and utilization. This role will be one of the most prominent voices in the global medical device industry. As the Chief Data Scientist, you will be responsible for leading TRIMEDX data and AI architecture and design to help our customers increase their clinical asset utilization and reduce cost to operate. You will design our enterprise-wide AI and Machine Learning initiatives. This new role will be instrumental in shaping and implementing our AI roadmap, driving innovation through advanced data modeling, and applying automation to optimize operations across diverse data ecosystems. It involves orchestrating Agentic AI across multiple SaaS infrastructures, including, but not limited to, Snowflake, Azure, ServiceNow, and Looker. In this role, you will work with senior executives and customers to arrive at solutions that significantly lift their business and accelerate the growth. 
Key Responsibilities You are a strategic leader who has defined enterprise AI/ML platforms with a strong focus on LLMs, generative AI, and predictive modeling. You will implement model-driven features from initial concept to production, covering all stages including model creation, evaluation, performance metrics, A/B testing, drift monitoring and self-correction with feedback loops. You will coach and train talented engineers in their career growth and set an example for the data and AI organization. Experience building pricing models, forecasting, and smart work assignment is preferred. You stay at the forefront of AI/ML research and emerging technologies; evaluate and integrate cutting-edge tools and frameworks. You will foster a culture of experimentation and continuous learning within the data science team. You will create high levels of engagement across teams in partnership with other key leaders within broader teams. Skills And Experience Minimum of 12 years of experience in computer science, data science, statistics or a related field. Experience in machine learning, data analytics or related disciplines with a focus on algorithmic product development is preferred. Deep expertise in statistics, econometrics, predictive analytics, and related disciplines. Working experience with modeling tools and languages such as Python, PyTorch, JAX, TensorFlow, and SQL, and experience deploying production models on cloud platforms (AWS, Azure, GCP). Must have demonstrated experience leading data-driven initiatives. Experience leading people is preferred. Demonstrated experience with multi-modal LLMs. Experience with A/B testing and co-creating an MLOps practice for systematic application and maintenance of models. Hands-on experience deploying high-impact, high-throughput, highly scalable, multi-modal ML models in Azure. Experience in data analysis using different types of datasets with statistics and predictive modeling foundations, including PC and foundation models. Experience creating patents and publications and/or speaking at top conferences such as CVPR (Computer Vision and Pattern Recognition Conference), IEEE (The Institute of Electrical and Electronics Engineers), SIGKDD (Special Interest Group on Knowledge Discovery and Data Mining), AAAI (Association for the Advancement of Artificial Intelligence), and NeurIPS. Proven experience collaborating and delivering results in a fast-paced, multifaceted, matrix environment. Proven success at working with abstract ideas and solving complex problems while driving collaboration across various teams. Ability to be hands-on when necessary as well as strategic is required. Excellent public speaking and presentation skills; strong written and verbal communication skills. Ability to travel up to 50%. Education And Qualifications Bachelor's degree in Statistics, Economics or a related field is required, or equivalent experience. Master's degree or Ph.D. in Statistics or Economics is highly preferred. At TRIMEDX, we support and protect a culture where diversity, equity and inclusion are the foundation. We know it is our uniqueness and experiences that make a difference, drive innovation and create shared success. We create an inclusive workplace by actively seeking diversity, creating inclusion and driving equity and engagement.
We embrace people's differences which include age, race, color, ethnicity, gender, gender identity, sexual orientation, national origin, education, genetics, veteran status, disability, religion, beliefs, opinions and life experiences. Visit our website to view our full Diversity, Equity and Inclusion statement, along with our social channels to see what our team is up to: Facebook, LinkedIn, Twitter. TRIMEDX is an Equal Opportunity Employer. Drug-Free Workplace. Because we are committed to providing a safe and productive work environment, TRIMEDX is a drug-free workplace. Accordingly, Associates are prohibited from engaging in the unlawful manufacture, sale, distribution, dispensation, possession, or use of any controlled substance or marijuana, or otherwise being under the influence thereof, on all TRIMEDX and Customer property or during working/on-call hours.
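As a small illustration of the A/B-testing responsibility mentioned in the role above, the following sketch compares conversion rates between two variants with a two-proportion z-test; the counts and the success metric are made up for the example.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical A/B test of a model-driven feature: "successes" stand in
# for whatever business metric the experiment actually tracks.
successes = np.array([430, 478])   # variant A, variant B
trials = np.array([5000, 5000])

z_stat, p_value = proportions_ztest(count=successes, nobs=trials)
lift = successes[1] / trials[1] - successes[0] / trials[0]

print(f"observed lift: {lift:.2%}, z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("difference is statistically significant at the 5% level")
else:
    print("not enough evidence to call a winner; keep collecting data")
```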
Posted 1 month ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description: Machine Learning Engineer - US Healthcare Claims Management Position: MLOps Engineer - US Healthcare Claims Management Location: Gurgaon, Delhi NCR, India Company: Neolytix About the Role: We are seeking an experienced MLOps Engineer to build, deploy, and maintain AI/ML systems for our healthcare Revenue Cycle Management (RCM) platform. This role will focus on operationalizing machine learning models that analyze claims, prioritize denials, and optimize revenue recovery through automated resolution pathways. Key Tech Stack: Models & ML Components: Fine-tuned healthcare LLMs (GPT-4, Claude) for complex claim analysis Knowledge of Supervised/Unsupervised Models, Optimization & Simulation techniques Domain-specific SLMs for denial code classification and prediction Vector embedding models for similar claim identification NER models for extracting critical claim information Seq2seq models (automated appeal letter generation) Languages & Frameworks: Strong proficiency in Python with OOP principles. Experience developing APIs using Flask or FastAPI frameworks. Integration knowledge with front-end applications. Expertise in version control systems (e.g., GitHub, GitLab, Azure DevOps). Proficiency in databases, including SQL, NoSQL, and vector databases. Experience with Azure. Libraries: PyTorch/TensorFlow/Hugging Face Transformers Key Responsibilities: ML Pipeline Architecture: Design and implement end-to-end ML pipelines for claims processing, incorporating automated training, testing, and deployment workflows Model Deployment & Scaling: Deploy and orchestrate LLMs and SLMs in production using containerization (Docker/Kubernetes) and Azure cloud services Monitoring & Observability: Implement comprehensive monitoring systems to track model performance, drift detection, and operational health metrics CI/CD for ML Systems: Establish CI/CD pipelines specifically for ML model training, validation, and deployment Data Pipeline Engineering: Create robust data preprocessing pipelines for healthcare claims data, ensuring compliance with HIPAA standards Model Optimization: Tune and optimize models for both performance and cost-efficiency in production environments Infrastructure as Code: Implement IaC practices for reproducible ML environments and deployments Document technical solutions & create best practices for scalable AI-driven claims management What Sets You Apart: Experience operationalizing LLMs for domain-specific enterprise applications Background in healthcare technology or revenue cycle operations Track record of improving model performance metrics in production systems What We Offer: Competitive salary and benefits package Opportunity to contribute to innovative AI solutions in the healthcare industry Dynamic and collaborative work environment Opportunities for continuous learning and professional growth To Apply: Submit your resume and a cover letter detailing your relevant experience and interest in the role to vidya@neolytix.com
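As an illustration of how a denial-code classifier from the stack above might be exposed as an API, here is a minimal FastAPI sketch. The model path, request fields, and feature handling are hypothetical placeholders, not Neolytix's actual service.

```python
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI(title="Denial code classifier (illustrative)")

# Hypothetical artifact produced by the training pipeline; the path and
# label set are placeholders, not the real model.
model = joblib.load("models/denial_code_clf.joblib")

class Claim(BaseModel):
    payer: str
    cpt_code: str
    billed_amount: float
    days_since_submission: int

@app.post("/predict-denial-code")
def predict_denial_code(claim: Claim):
    # A real pipeline would share feature engineering with training;
    # here we simply pass the numeric fields through for illustration.
    features = [[claim.billed_amount, claim.days_since_submission]]
    pred = model.predict(features)[0]
    return {"predicted_denial_code": str(pred)}

# Run locally with:  uvicorn denial_api:app --reload
```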
Posted 1 month ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Overview Cvent's Global Demand Center seeks a dynamic and experienced Assistant Team Lead for our Marketing Technology team. This role is pivotal in optimizing our "Go-to-Market" technology, account-based marketing, and personalization efforts. The successful candidate will specialize in advanced marketing technologies, ensuring alignment with business goals and enhancing the experience for prospects and customers through innovative solutions. At Cvent, you'll be part of a dynamic team that values innovation and creativity. You'll work with cutting-edge technology and help drive our go-to-market efforts to new heights. We want to hear from you if you are passionate about marketing technology and have a track record of driving success through innovative solutions. In This Role, You Will Manage our Go-to-Market Tech Stack: Elevate our "Go To Market" technology stack, including revenue marketing tech, ABM, and personalization tools. Implement and manage advanced marketing technologies such as 6sense and chat solutions. Own the technical implementation and ongoing management of new Go-To-Market tools. Integration and Implementation: Lead the charge in overseeing technical integrations across various marketing and sales platforms. Transform the chat experience for prospects and customers by ensuring seamless integration of chat solutions with other marketing tools. Optimize sales-facing systems like Reachdesk to align with business goals. Campaign Attribution and Reporting: Support and enhance campaign attribution strategies for better tracking and analysis. Develop and manage comprehensive reporting frameworks to measure the effectiveness of technology-driven marketing efforts. Create and maintain ABM dashboards, providing clear visibility into performance metrics. Performance Analysis and Improvement: Analyze chatbot performance and make data-driven improvements to enhance customer engagement. Lead efforts to improve the functionality and effectiveness of our marketing and sales enablement technologies. Leverage data-driven insights to inform decision-making and drive continuous improvement. Training and Support: Deliver impactful training on go-to-market tools and processes, ensuring the marketing team fully utilizes the capabilities of our tools. Support campaign attribution and reporting strategies, providing accurate and actionable data to stakeholders for informed decisions. Technical Expertise and Leadership: Serve as a technical expert, onboarding new technologies and optimizing the use of existing tools in our marketing technology stack. Guide the team in harnessing the full potential of our tech resources. Gap Identification and Requirement Development: Identify gaps and develop requirements for the automation of manual tasks to enhance marketing efficiency and effectiveness. Innovate solutions to streamline processes and drive productivity. Evaluation of New Technologies: Evaluate new marketing technologies, ensuring alignment with business objectives and staying ahead of industry trends. Here's What You Need Bachelor's/Master's degree in Marketing, Business, or a related field. Exceptional project management skills, including attention to detail, stakeholder engagement, project plan development, and deadline management with diverse teams. Deep experience with go-to-market tools such as ABM (6sense, DemandBase), Chat (Drift, Qualified, Avaamo), Gifting (Reachdesk, Sendoso), AI (ChatGPT, Microsoft Azure, Claude, Google Gemini, Glean, etc.),
Web (CHEQ, OneTrust), and iPaaS (Zapier, Tray.io, Informatica). Skilled in crafting technical documentation and simplifying complex procedures. A minimum of 5 years of hands-on technical experience with marketing technologies like marketing automation platforms, CRM, and database platforms. Strong capacity for understanding and fulfilling project requirements and expectations. Excellent communication and collaboration skills, with a strong command of the English language. Self-motivated, analytical, eager to learn, and able to thrive in a team environment.
Posted 1 month ago
12.0 - 18.0 years
0 Lacs
Tamil Nadu, India
Remote
Join us as we work to create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all. This position requires expertise in designing, developing, debugging, and maintaining AI-powered applications and data engineering workflows for both local and cloud environments. The role involves working on large-scale projects, optimizing AI/ML pipelines, and ensuring scalable data infrastructure. As a PMTS, you will be responsible for integrating Generative AI (GenAI) capabilities, building data pipelines for AI model training, and deploying scalable AI-powered microservices. You will collaborate with AI/ML, Data Engineering, DevOps, and Product teams to deliver impactful solutions that enhance our products and services. Additionally, it would be desirable if the candidate has experience in retrieval-augmented generation (RAG), fine-tuning pre-trained LLMs, AI model evaluation, data pipeline automation, and optimizing cloud-based AI deployments. Responsibilities AI-Powered Software Development & API Integration Develop AI-driven applications, microservices, and automation workflows using FastAPI, Flask, or Django, ensuring cloud-native deployment and performance optimization. Integrate OpenAI APIs (GPT models, Embeddings, Function Calling) and Retrieval-Augmented Generation (RAG) techniques to enhance AI-powered document retrieval, classification, and decision-making. Data Engineering & AI Model Performance Optimization Design, build, and optimize scalable data pipelines for AI/ML workflows using Pandas, PySpark, and Dask, integrating data sources such as Kafka, AWS S3, Azure Data Lake, and Snowflake. Enhance AI model inference efficiency by implementing vector retrieval using FAISS, Pinecone, or ChromaDB, and optimize API latency with tuning techniques (temperature, top-k sampling, max tokens settings). Microservices, APIs & Security Develop scalable RESTful APIs for AI models and data services, ensuring integration with internal and external systems while securing API endpoints using OAuth, JWT, and API Key Authentication. Implement AI-powered logging, observability, and monitoring to track data pipelines, model drift, and inference accuracy, ensuring compliance with AI governance and security best practices. AI & Data Engineering Collaboration Work with AI/ML, Data Engineering, and DevOps teams to optimize AI model deployments, data pipelines, and real-time/batch processing for AI-driven solutions. Engage in Agile ceremonies, backlog refinement, and collaborative problem-solving to scale AI-powered workflows in areas like fraud detection, claims processing, and intelligent automation. Cross-Functional Coordination and Communication Collaborate with Product, UX, and Compliance teams to align AI-powered features with user needs, security policies, and regulatory frameworks (HIPAA, GDPR, SOC2). Ensure seamless integration of structured and unstructured data sources (SQL, NoSQL, vector databases) to improve AI model accuracy and retrieval efficiency. Mentorship & Knowledge Sharing Mentor junior engineers on AI model integration, API development, and scalable data engineering best practices, and conduct knowledge-sharing sessions. Education & Experience Required 12-18 years of experience in software engineering or AI/ML development, preferably in AI-driven solutions. Hands-on experience with Agile development, SDLC, CI/CD pipelines, and AI model deployment lifecycles. 
Bachelor’s Degree or equivalent in Computer Science, Engineering, Data Science, or a related field. Proficiency in full-stack development with expertise in Python (preferred for AI), Java Experience with structured & unstructured data: SQL (PostgreSQL, MySQL, SQL Server) NoSQL (OpenSearch, Redis, Elasticsearch) Vector Databases (FAISS, Pinecone, ChromaDB) Cloud & AI Infrastructure AWS: Lambda, SageMaker, ECS, S3 Azure: Azure OpenAI, ML Studio GenAI Frameworks & Tools: OpenAI API, Hugging Face Transformers, LangChain, LlamaIndex, AutoGPT, CrewAI. Experience in LLM deployment, retrieval-augmented generation (RAG), and AI search optimization. Proficiency in AI model evaluation (BLEU, ROUGE, BERT Score, cosine similarity) and responsible AI deployment. Strong problem-solving skills, AI ethics awareness, and the ability to collaborate across AI, DevOps, and data engineering teams. Curiosity and eagerness to explore new AI models, tools, and best practices for scalable GenAI adoption. About Athenahealth Here’s our vision: To create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all. What’s unique about our locations? From an historic, 19th century arsenal to a converted, landmark power plant, all of athenahealth’s offices were carefully chosen to represent our innovative spirit and promote the most positive and productive work environment for our teams. Our 10 offices across the United States and India — plus numerous remote employees — all work to modernize the healthcare experience, together. Our Company Culture Might Be Our Best Feature. We don't take ourselves too seriously. But our work? That’s another story. athenahealth develops and implements products and services that support US healthcare: It’s our chance to create healthier futures for ourselves, for our family and friends, for everyone. Our vibrant and talented employees — or athenistas, as we call ourselves — spark the innovation and passion needed to accomplish our goal. We continue to expand our workforce with amazing people who bring diverse backgrounds, experiences, and perspectives at every level, and foster an environment where every athenista feels comfortable bringing their best selves to work. Our size makes a difference, too: We are small enough that your individual contributions will stand out — but large enough to grow your career with our resources and established business stability. Giving back is integral to our culture. Our athenaGives platform strives to support food security, expand access to high-quality healthcare for all, and support STEM education to develop providers and technologists who will provide access to high-quality healthcare for all in the future. As part of the evolution of athenahealth’s Corporate Social Responsibility (CSR) program, we’ve selected nonprofit partners that align with our purpose and let us foster long-term partnerships for charitable giving, employee volunteerism, insight sharing, collaboration, and cross-team engagement. What can we do for you? Along with health and financial benefits, athenistas enjoy perks specific to each location, including commuter support, employee assistance programs, tuition assistance, employee resource groups, and collaborative workspaces — some offices even welcome dogs. In addition to our traditional benefits and perks, we sponsor events throughout the year, including book clubs, external speakers, and hackathons. 
And we provide athenistas with a company culture based on learning, the support of an engaged team, and an inclusive environment where all employees are valued. We also encourage a better work-life balance for athenistas with our flexibility. While we know in-office collaboration is critical to our vision, we recognize that not all work needs to be done within an office environment, full-time. With consistent communication and digital collaboration tools, athenahealth enables employees to find a balance that feels fulfilling and productive for each individual situation. Show more Show less
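One concrete slice of the RAG stack described in the role above is the vector-retrieval step. The sketch below uses FAISS with random vectors standing in for real document embeddings; the dimension, index type, and corpus size are assumptions chosen only to keep the example self-contained.

```python
import numpy as np
import faiss

dim = 384                      # typical sentence-embedding size (assumption)
rng = np.random.default_rng(0)

# Random vectors stand in for document embeddings produced by a real
# embedding model (OpenAI, Hugging Face, etc.).
doc_vectors = rng.random((1000, dim)).astype("float32")

index = faiss.IndexFlatL2(dim)  # exact L2 search; IVF/HNSW would scale better
index.add(doc_vectors)

query = rng.random((1, dim)).astype("float32")
distances, ids = index.search(query, k=5)
print("Top-5 candidate documents:", ids[0].tolist())
```

The retrieved ids would then be mapped back to document chunks and passed to the LLM prompt; swapping the exact index for an approximate one (or for Pinecone/ChromaDB) changes the latency and recall trade-off but not the overall flow.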
Posted 1 month ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: MLOps Engineer Location - Chennai - CKC Mode of Interview - In Person Date - 7th June 2025 (Saturday) Key Words - Skillset: AWS SageMaker, Azure ML Studio, GCP Vertex AI, PySpark, Azure Databricks, MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline, Kubernetes, AKS, Terraform, FastAPI Responsibilities Model Deployment, Model Monitoring, Model Retraining Deployment pipeline, Inference pipeline, Monitoring pipeline, Retraining pipeline Drift Detection, Data Drift, Model Drift Experiment Tracking MLOps Architecture REST API publishing Job Responsibilities Research and implement MLOps tools, frameworks and platforms for our Data Science projects. Work on a backlog of activities to raise MLOps maturity in the organization. Proactively introduce a modern, agile and automated approach to Data Science. Conduct internal training and presentations about MLOps tools' benefits and usage. Required Experience And Qualifications Extensive experience with Kubernetes. Experience in operationalization of Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g. Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube). Good understanding of ML and AI concepts. Hands-on experience in ML model development. Proficiency in Python used both for ML and automation tasks. Good knowledge of Bash and the Unix command-line toolkit. Experience implementing CI/CD/CT pipelines. Experience with cloud platforms - preferably AWS - would be an advantage.
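For the experiment-tracking and model-versioning responsibilities above, a minimal MLflow sketch looks like the following; the experiment name, dataset, and model choice are placeholders used only to show the logging calls.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("demo-drift-retraining")   # hypothetical experiment name

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=0).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))

    mlflow.log_params(params)                    # hyperparameters for the run
    mlflow.log_metric("accuracy", acc)           # evaluation metric
    mlflow.sklearn.log_model(model, "model")     # versioned artifact for later serving
    print(f"logged run with accuracy={acc:.3f}")
```

Each run recorded this way can later be compared in the MLflow UI or promoted through a model registry, which is what makes retraining and rollback pipelines auditable.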
Posted 1 month ago