7.0 years
40 Lacs
Multiple locations: Vijayawada, Mysore, Patna, Pune, Noida (India)
Remote
Experience: 7.00+ years
Salary: INR 4,000,000 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients, MatchMove.)

What do you need for this opportunity?

Must-have skills: Gen AI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for: Technical Lead - Data Platform

As Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema-drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation (see the Iceberg sketch after this listing).
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations (see the freshness-metric sketch after this listing).
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model:
- Direct placement with the client
- This is a remote role
- Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
- Step 1: Click on Apply and register or log in on our portal.
- Step 2: Complete the screening form and upload your updated resume.
- Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
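For context on the partitioning and cataloging work described above, here is a minimal PySpark sketch of writing a day-partitioned Iceberg table through a Glue-backed catalog. The catalog, bucket, database, and table names are illustrative assumptions rather than the client's actual setup, and the Iceberg AWS bundle is assumed to be on the Spark classpath.

```python
# Minimal sketch: a day-partitioned Iceberg table on S3, cataloged in AWS Glue.
# All names here are hypothetical; the Iceberg/Glue jars are assumed available.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("payments-ingest")
    # Register an Iceberg catalog ("lake") backed by the Glue Data Catalog.
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.lake.warehouse", "s3://example-data-lake/warehouse")
    .getOrCreate()
)

# Day partitioning keeps Athena scans narrow and enables snapshot time travel.
spark.sql("""
    CREATE TABLE IF NOT EXISTS lake.payments.transactions (
        txn_id STRING, account_id STRING, amount DECIMAL(18, 2), txn_ts TIMESTAMP
    ) USING iceberg PARTITIONED BY (days(txn_ts))
""")

incoming = spark.read.parquet("s3://example-landing-zone/transactions/")
incoming.writeTo("lake.payments.transactions").append()

# Time-travel read (epoch millis), e.g. to reconcile against yesterday's snapshot.
yesterday = spark.read.option("as-of-timestamp", "1718000000000") \
    .table("lake.payments.transactions")
```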
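As one way to make the SLA/SLO tracking mentioned above concrete, this sketch emits a pipeline-freshness metric to CloudWatch so an alarm can page when replication lags. The namespace, metric name, and pipeline name are hypothetical.

```python
# Minimal sketch: report pipeline freshness as a custom CloudWatch metric so a
# CloudWatch alarm can enforce the SLA. Names are illustrative assumptions.
import time

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

def report_freshness(pipeline: str, last_success_epoch: float) -> None:
    # Minutes since the pipeline last completed successfully.
    lag_minutes = (time.time() - last_success_epoch) / 60.0
    cloudwatch.put_metric_data(
        Namespace="DataPlatform/Pipelines",
        MetricData=[{
            "MetricName": "FreshnessLagMinutes",
            "Dimensions": [{"Name": "Pipeline", "Value": pipeline}],
            "Value": lag_minutes,
            "Unit": "None",
        }],
    )

# An alarm on FreshnessLagMinutes > 30 for this dimension would page on-call.
report_freshness("dms-mysql-replication", last_success_epoch=time.time() - 600)
```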
Posted 2 days ago
6.0 years
60 - 65 Lacs
Multiple locations: Patna, Mysore, Vijayawada, Ghaziabad (India)
Remote
Experience: 6.00+ years
Salary: INR 6,000,000–6,500,000 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Crop.Photo)
(Note: This is a requirement for one of Uplers' clients, Crop.Photo.)

What do you need for this opportunity?

Must-have skills: ML, Python

Crop.Photo is looking for:

We're looking for a hands-on engineering lead to own the delivery of our GenAI-centric product from the backend up to the UI, while integrating visual AI pipelines built by ML engineers. You'll be both a builder and a leader: writing clean Python, Java, and TypeScript, scaling AWS-based systems, mentoring engineers, and making architectural decisions that stand the test of scale. You won't be working in a silo: this is a role for someone who thrives in fast-paced, high-context environments with product, design, and AI deeply intertwined. (Note: This role requires both technical mastery and leadership skills; we're looking for someone who can write production code, make architectural decisions, and lead a team to success.)

What You'll Do
- Lead development of our Java, Python (FastAPI), and Node.js backend services on AWS.
- Deploy ML pipelines (built by the ML team) into containerized inference workflows using FastAPI, Docker, and GPU-enabled ECS on EC2 (see the sketch after this listing).
- Deploy and manage services on AWS ECS/Fargate, Lambda, API Gateway, and GPU-powered EC2.
- Contribute to the React/TypeScript frontend when needed to accelerate product delivery.
- Work closely with the founder, product, and UX teams to translate business needs into working product.
- Make architecture and infrastructure decisions, from media processing to task queues to storage.
- Own the performance, reliability, and cost-efficiency of our core services.
- Hire and mentor junior/mid-level engineers over time.
- Drive technical planning, sprint prioritization, and trade-off decisions.

What we value:
- A customer-centric approach: you think about how your work affects end users and the product experience, not just model performance.
- A quest for high-quality deliverables: you write clean, tested code and debug edge cases until they're truly fixed.
- The ability to frame problems from scratch and work without strict handoffs: you build from a goal, not a ticket.

Skills & Experience We Expect

Core Engineering Experience
- 6–8 years of professional software engineering experience in production environments.
- 2–3 years of experience leading engineering teams of 5+ engineers.

Cloud Infrastructure & AWS Expertise (5+ years)
- Deep experience with AWS Lambda, ECS, and container orchestration tools.
- Familiarity with API Gateway and microservices architecture best practices.
- Proficiency with S3, DynamoDB, and other AWS-native data services.
- CloudWatch, X-Ray, or similar tools for monitoring and debugging distributed systems.
- Strong grasp of IAM, roles, and security best practices in cloud environments.

Backend Development (5–7 years)
- Java: advanced concurrency, scalability, and microservice design.
- Python: experience with FastAPI and building production-grade MLOps pipelines.
- Node.js & TypeScript: strong backend engineering and API development.
- Deep understanding of RESTful API design and implementation.
- Docker: 3+ years of containerization experience for building and deploying services.
- Hands-on experience deploying ML inference pipelines (built by the ML team) using Docker, FastAPI, and GPU-based AWS infrastructure (e.g., ECS, EC2): 2+ years.

System Optimization & Middleware (3–5 years)
- Application performance optimization and AWS cloud cost optimization.
- Use of background job frameworks (e.g., Celery, BullMQ, AWS Step Functions).
- Media/image processing using tools like Sharp, PIL, ImageMagick, or OpenCV.
- Database design and optimization for low-latency, high-availability systems.

Frontend Development (2–3 years)
- Hands-on experience with React and TypeScript in modern web apps.
- Familiarity with Redux, the Context API, and modern state management patterns.
- Comfort with modern build tools, CI/CD, and frontend deployment practices.

System Design & Architecture (4–6 years)
- Designing and implementing microservices-based systems.
- Experience with event-driven architectures using queues or pub/sub.
- Implementing caching strategies (e.g., Redis, CDN edge caching).
- Architecting high-performance image/media pipelines.

Leadership & Communication (2–3 years)
- Proven ability to lead engineering teams and drive project delivery.
- Skill at writing clear, concise technical documentation.
- Experience mentoring engineers, conducting code reviews, and fostering growth.
- A track record of shipping high-impact products in fast-paced environments.
- A strong customer-centric, growth-oriented mindset, especially in startup settings: able to take high-level goals and independently drive toward outcomes without requiring constant handoffs or back-and-forth with the founder.
- Proactive use of tools like ChatGPT, GitHub Copilot, or similar AI copilots to improve personal and team efficiency, remove blockers, and iterate faster.

How to apply for this opportunity?
- Step 1: Click on Apply and register or log in on our portal.
- Step 2: Complete the screening form and upload your updated resume.
- Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talent find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal apart from this one; depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
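As a rough illustration of the containerized inference workflow this role owns, here is a minimal FastAPI sketch wrapping a model handed over by an ML team. The endpoint, request schema, artifact path, and loader are assumptions for illustration; a real service would load the team's actual model artifact onto the GPU at startup.

```python
# Minimal sketch: a FastAPI service wrapping an ML team's model for containerized
# inference (e.g., behind API Gateway on GPU-backed ECS). Names are hypothetical.
from contextlib import asynccontextmanager

from fastapi import FastAPI
from pydantic import BaseModel

class CropRequest(BaseModel):
    image_url: str

class CropBox(BaseModel):
    x: int
    y: int
    width: int
    height: int

def load_model(path: str):
    # Stand-in for the ML team's real loader (e.g., torch.jit.load); returns a
    # dummy predictor here so the sketch runs on its own.
    class _Dummy:
        def predict(self, image_url: str):
            return (0, 0, 224, 224)
    return _Dummy()

model = None

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Load weights once per container, not per request.
    global model
    model = load_model("/models/crop-v1.pt")  # hypothetical artifact path
    yield

app = FastAPI(lifespan=lifespan)

@app.post("/v1/crop", response_model=CropBox)
async def crop(req: CropRequest) -> CropBox:
    x, y, w, h = model.predict(req.image_url)  # assumed predictor interface
    return CropBox(x=x, y=y, width=w, height=h)

# Run locally with: uvicorn app:app --port 8080 (assuming this file is app.py)
```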
Posted 2 days ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Test Lead – Computer Vision (Ground Truth Validation)

Location: Onsite in Bavdhan, Pune

Description:
We are looking for an experienced Test Lead to manage QA efforts for our computer vision projects, with a strong focus on Ground Truth validation. The ideal candidate will be hands-on, detail-oriented, and able to lead testing for ML model outputs, annotated datasets, and visual recognition systems.

Responsibilities:
- Lead test planning, execution, and reporting for CV/ML pipelines.
- Validate annotations and Ground Truth data for accuracy and consistency (see the sketch after this listing).
- Coordinate with data labeling teams and ML engineers.
- Develop and manage test cases for model accuracy, precision, and edge cases.
- Ensure delivery of high-quality datasets and outputs.

Requirements:
- 4+ years in QA/testing, with 1–2 years in computer vision or ML projects.
- Experience with Ground Truth validation workflows.
- Familiarity with image annotation tools (e.g., Labelbox, CVAT, VGG).
- Strong attention to detail and analytical thinking.
- Excellent communication and leadership skills.
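One common Ground Truth consistency check the role implies is comparing a labeled bounding box against a reviewed reference using intersection-over-union (IoU). This is a generic sketch; the 0.9 threshold and the (x_min, y_min, x_max, y_max) box format are assumptions, not this team's actual workflow.

```python
# Minimal sketch: flag annotations that drift too far from a reviewed reference
# box using IoU. Boxes are (x_min, y_min, x_max, y_max); threshold is assumed.
def iou(box_a: tuple, box_b: tuple) -> float:
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle; zero area if the boxes do not overlap.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def annotation_consistent(gt_box, reviewed_box, threshold: float = 0.9) -> bool:
    # Annotations below the threshold go back to the labeling team for rework.
    return iou(gt_box, reviewed_box) >= threshold

# Near-identical boxes pass; a shifted or resized box would fail the check.
assert annotation_consistent((10, 10, 110, 110), (12, 11, 111, 112))
```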
Posted 2 days ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Sr. Business Analyst @ SARJAK Container Lines
https://sarjak.com/

About us:
Sarjak Container Lines Private Limited, founded in 2003 in Mumbai, is a leading Indian NVOCC specializing in over-dimensional (ODC) and out-of-gauge (OOG) cargo logistics. What began with just five flat racks has grown into a powerful fleet of over 9,000 specialized containers (flat racks, open tops, super racks, and more), enabling efficient, secure project cargo movement across global trade routes. With a team of 300+ professionals ("SARJAKites") and a presence across 84 countries and 275+ ports, the company leverages cutting-edge technology and a customer-centric approach to offer tailor-made, end-to-end logistics solutions, including breakbulk, heavy lift, and chartering services. Their ethos, captured in the motto "Not just project cargo, we move economies", emphasizes innovation, reliability, and sustainability, supported by accolades such as the MALA NVOCC of the Year awards and RINA ACEP certification. In essence, Sarjak is more than a logistics provider: it is a global powerhouse pioneering special-equipment shipping to redefine project logistics. The group is self-sufficient in every aspect, including an in-house IT powerhouse. We are looking for the right fits for our customized product lines, helping our business scale sustainably with speed.

Requirements:
- 5+ years of industry experience as a Business Analyst, preferably with an understanding of the techno-functional aspects of product development.
- Preferably a B.Tech/BE in CSE/IT/ECE, a B.Sc./M.Sc. in computers or a related field, or 5+ years of proven experience in business analysis.
- Strong logical thinking and an understanding of core business processes.
- Overall SDLC knowledge, i.e., "how stuff works".
- Well versed in creating SRS/FRD, BRD, workflows, and change requests.
- The acumen to get things done through the team assigned to them.
- Good at managing a small team and owning processes.
- Preferably, an understanding of UAT and the ability to play a pivotal role during the last leg of implementation.
- Proven skill at negotiating feature lists and prioritizing features against each phase's timeline.
- Someone who wants to work in a digital start-up and make it big; someone who likes taking on challenges and wants to learn and evolve.
- Understanding of supply chain management, logistics, containers, and the maritime industry is good to have.
- Well versed in SQL; able to write medium-to-high-complexity queries.
- Practical experience with AI/ML project implementation.
- Experience with financial accounting applications, including AR/AP and bookkeeping, would be a plus.

Job description:
- Work with the Tech Head and CEO to understand high-level requirements, research the subject matter, elicit the requirements, and make them implementation-ready in the current product.
- Analyze the requirement, discuss solutions with the Tech Head, and create mock-ups, prototypes, and workflows. Be very clear on the step-by-step solution of a complex problem.
- Document the requirements so they are easily understood by a team of developers.
- Conduct requirement planning sessions and accommodate feedback.
- Be the link between the tech and functional teams.
- Manage a small team of developers and QA.
- Participate in user trainings.
- Create user manuals for feature releases.
- Maintain tasks and boards in Azure DevOps.
- Conduct a stand-up meeting of 2–4 people and manage their task statuses.
- Perform UAT and take sign-offs.
- Monitor quality assurance.
- Implement AI/ML projects, including AI agents and predictive analysis on custom models.
Posted 2 days ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Company background

We at Neysa believe that a good software experience is one where you don't have to (necessarily) read the complete manual. Good software is intuitive, inviting, and accommodating. Most importantly, good software should make life easy. Neysa was founded by people who believe a software experience should not need constant interaction; it should work while you do something or nothing. In today's hyper-connected and online world, we are looking for a few good people who believe that there is a life outside of screens and are not afraid of saying so!

About the role

As an Enterprise Sales Specialist, you will be responsible for positioning Neysa's on-demand AI acceleration cloud solutions within large enterprise organisations. You will be the subject-matter expert in AI and Generative AI, with expertise in AI use cases, outcomes, and the technology stack (hardware and software), as well as differentiation from hyperscalers and traditional competitors. You will engage with senior executives and technical decision-makers to understand their needs, craft tailored solutions, and close high-value, complex deals. This role requires deep technical understanding, exceptional communication skills, and a proven track record of selling complex IT infrastructure solutions.

Open to: Mumbai, Delhi, Bangalore, and Hyderabad

Key Responsibilities
- Customer requirement alignment: Engage with customers to understand their unique needs and expectations and, most importantly, offer them the most intuitive and resilient experience in setting up their ML/AI workloads, with solutions that align with their business objectives.
- Solution development: Develop and present customized solutions (tools, environments, and robust on-demand GPU infrastructure) to create and deploy AI use cases faster than ever. Position our solutions with all AI personas: technical/software personas (AI/ML engineers, MLOps, data scientists), business decision-makers (CEO, CTO), and IT decision-makers (CIO, VP Infrastructure).
- Sales strategy execution: Implement sales strategies that effectively communicate the value of our customised AI infrastructure and platform solutions, emphasizing on-demand access to scale capacity.
- Client engagement: Foster relationships with key stakeholders within enterprise accounts to build trust and long-term partnerships.
- Market analysis: Analyze market trends and customer feedback to refine product offerings and sales tactics.
- Collaboration with technical teams: Work closely with technical teams to ensure that proposed solutions are feasible and meet the required specifications.

Qualifications
- Experience: Proven experience in enterprise sales, particularly in hybrid cloud, AI hardware, and software. Strong understanding of cloud computing concepts, virtualization technologies, and infrastructure management.
- Sales skills: Demonstrated ability to close complex deals involving multiple decision-makers.
- Communication skills: Excellent interpersonal skills to effectively engage with clients and present solutions.

Skills
- Proficiency in CRM software for managing customer interactions and tracking sales performance.
- Ability to create compelling presentations that articulate the benefits of proposed solutions.
- Strong negotiation skills to secure favorable terms for both the company and the client.

Preferred Qualifications
- Bachelor's degree in business, computer science, or a related field.
- Experience working with enterprise-level clients in sectors requiring high-performance computing solutions.
- Familiarity with emerging technologies in AI, virtualization services, and infrastructure management.

This role is critical for driving growth within the enterprise segment by ensuring that customer requirements are met through innovative solutioning in a rapidly evolving technological landscape.
Posted 2 days ago
5.0 - 7.0 years
8 - 12 Lacs
Pune
Hybrid
So, what's the role all about?

We are looking for a highly skilled and motivated Senior Developer to join our team, with strong expertise in Python and deep experience building intelligent agentic systems using AWS Bedrock Agents and AWS Q workflows. This role focuses on building end-to-end agentic task-assistance solutions that execute complex workflows and enable seamless orchestration across systems. You will play a key role in creating smart automation that bridges front-office interactions (customer-facing systems) with mid- and back-office operations (e.g., finance, fulfillment, compliance), empowering enterprise-grade digital transformation.

How will you make an impact?
- Design, develop, and maintain scalable full-stack applications using Python.
- Build intelligent task agents leveraging AWS Bedrock Agents to manage and automate multi-step workflows (see the sketch after this listing).
- Integrate and orchestrate AWS Q workflows to handle complex, enterprise-level task execution and decision-making processes.
- Enable contextual task handoff between front-office and mid/back-office systems, ensuring smooth operational continuity.
- Collaborate closely with cross-functional teams, including product, DevOps, and AI/ML engineers, to deliver secure, efficient, and intelligent systems.
- Write clean, maintainable code and contribute to architecture and design decisions for highly available agentic systems.
- Monitor, debug, and optimize live systems and workflows to ensure robust performance at scale.

Have you got what it takes?
- 6+ years of full-stack development experience with strong hands-on skills in Python.
- Proven expertise in designing and deploying intelligent agents using AWS Bedrock Agents.
- Solid experience with AWS Q workflows, including building and managing complex, automated workflow orchestration.
- Demonstrated ability to integrate AI-powered agents with enterprise systems and back-office applications.
- Experience building microservices and RESTful APIs within an AWS cloud-native architecture.
- Understanding of enterprise operations and workflow handoffs between business layers (front, mid, and back office).
- Familiarity with DevOps practices, CI/CD pipelines, and infrastructure-as-code (e.g., Terraform or CloudFormation).
- Strong problem-solving skills, systems thinking, and attention to detail.

What's in it for you?

Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!

Enjoy NICE-FLEX!

At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Reporting into: Tech Manager, Engineering, CX
Role Type: Individual Contributor
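For a sense of the building block behind these responsibilities, here is a minimal sketch of invoking a Bedrock agent with boto3's bedrock-agent-runtime client and assembling its streamed completion. The agent and alias IDs are placeholders, the session value is illustrative, and error handling is omitted.

```python
# Minimal sketch: call an AWS Bedrock Agent from a Python service and collect the
# streamed response. Agent/alias IDs are placeholders, not a real deployment.
import boto3

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

def run_agent_task(prompt: str, session_id: str) -> str:
    response = client.invoke_agent(
        agentId="EXAMPLEAGENTID",     # assumed placeholder
        agentAliasId="EXAMPLEALIAS",  # assumed placeholder
        sessionId=session_id,         # reuse the ID to keep multi-step context
        inputText=prompt,
    )
    # The agent streams its completion back as chunked events.
    parts = []
    for event in response["completion"]:
        chunk = event.get("chunk")
        if chunk:
            parts.append(chunk["bytes"].decode("utf-8"))
    return "".join(parts)

# Example (requires a deployed agent and alias):
# print(run_agent_task("Create a refund case for order 1042 and notify finance.",
#                      "demo-session-1"))
```

Reusing the same sessionId across calls is what lets the agent carry context through a multi-step workflow, e.g. a front-office request that later hands off to a back-office action.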
Posted 2 days ago
6.0 years
60 - 65 Lacs
Ghaziabad, Uttar Pradesh, India
Remote
Experience : 6.00+ years
Salary : INR 6000000-6500000 / year (based on experience)
Expected Notice Period : 15 Days
Shift : (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type : Remote
Placement Type : Full Time Permanent position (Payroll and Compliance to be managed by: Crop.Photo)
(*Note: This is a requirement for one of Uplers' clients - Crop.Photo)
What do you need for this opportunity?
Must have skills required: ML, Python
Crop.Photo is Looking for:
We're looking for a hands-on engineering lead to own the delivery of our GenAI-centric product from the backend up to the UI, while integrating visual AI pipelines built by ML engineers. You'll be both a builder and a leader: writing clean Python, Java, and TypeScript, scaling AWS-based systems, mentoring engineers, and making architectural decisions that stand the test of scale. You won't be working in a silo; this is a role for someone who thrives in fast-paced, high-context environments where product, design, and AI are deeply intertwined.
(Note: This role requires both technical mastery and leadership skills. We're looking for someone who can write production code, make architectural decisions, and lead a team to success.)
What You'll Do
Lead development of our Java, Python (FastAPI), and Node.js backend services on AWS
Deploy ML pipelines (built by the ML team) into containerized inference workflows using FastAPI, Docker, and GPU-enabled ECS on EC2 (see the sketch after this listing)
Deploy and manage services on AWS ECS/Fargate, Lambda, API Gateway, and GPU-powered EC2
Contribute to the React/TypeScript frontend when needed to accelerate product delivery
Work closely with the founder, product, and UX teams to translate business needs into working product
Make architecture and infrastructure decisions, from media processing to task queues to storage
Own the performance, reliability, and cost-efficiency of our core services
Hire and mentor junior/mid-level engineers over time
Drive technical planning, sprint prioritization, and trade-off decisions
What We Look For
A customer-centric approach: you think about how your work affects end users and the product experience, not just model performance
A quest for high-quality deliverables: you write clean, tested code and debug edge cases until they're truly fixed
The ability to frame problems from scratch and work without strict handoffs: you build from a goal, not a ticket
Skills & Experience We Expect
Core Engineering Experience
6–8 years of professional software engineering experience in production environments
2–3 years of experience leading engineering teams of 5+ engineers
Cloud Infrastructure & AWS Expertise (5+ years)
Deep experience with AWS Lambda, ECS, and container orchestration tools
Familiarity with API Gateway and microservices architecture best practices
Proficiency with S3, DynamoDB, and other AWS-native data services
CloudWatch, X-Ray, or similar tools for monitoring and debugging distributed systems
Strong grasp of IAM, roles, and security best practices in cloud environments
Backend Development (5–7 years)
Java: advanced concurrency, scalability, and microservice design
Python: experience with FastAPI and building production-grade MLOps pipelines
Node.js & TypeScript: strong backend engineering and API development
Deep understanding of RESTful API design and implementation
Docker: 3+ years of containerization experience for building and deploying services
2+ years of hands-on experience deploying ML inference pipelines (built by the ML team) using Docker, FastAPI, and GPU-based AWS infrastructure (e.g., ECS, EC2)
System Optimization & Middleware (3–5 years)
Application performance optimization and AWS cloud cost optimization
Use of background job frameworks (e.g., Celery, BullMQ, AWS Step Functions)
Media/image processing using tools like Sharp, PIL, ImageMagick, or OpenCV
Database design and optimization for low-latency, high-availability systems
Frontend Development (2–3 years)
Hands-on experience with React and TypeScript in modern web apps
Familiarity with Redux, the Context API, and modern state management patterns
Comfort with modern build tools, CI/CD, and frontend deployment practices
System Design & Architecture (4–6 years)
Designing and implementing microservices-based systems
Experience with event-driven architectures using queues or pub/sub
Implementing caching strategies (e.g., Redis, CDN edge caching)
Architecting high-performance image/media pipelines
Leadership & Communication (2–3 years)
Proven ability to lead engineering teams and drive project delivery
Skilled at writing clear, concise technical documentation
Experience mentoring engineers, conducting code reviews, and fostering growth
Track record of shipping high-impact products in fast-paced environments
Strong customer-centric, growth-oriented mindset, especially in startup settings: able to take high-level goals and independently drive toward outcomes without constant handoffs or back-and-forth with the founder
Proactive in using tools like ChatGPT, GitHub Copilot, or similar AI copilots to improve personal and team efficiency, remove blockers, and iterate faster
How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!
About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
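For context on what a containerized inference workflow of this shape can look like, here is a minimal, illustrative FastAPI sketch. The /predict endpoint, ImageRequest schema, and load_model() placeholder are invented for illustration and are not Crop.Photo's actual service.

    # Minimal illustrative sketch: a FastAPI service wrapping an ML model
    # behind an HTTP endpoint, ready to be containerized with Docker.
    # load_model() and its predict() method are hypothetical placeholders.
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class ImageRequest(BaseModel):
        image_url: str

    def load_model():
        # Placeholder: a real service might load ONNX/PyTorch weights once at startup.
        class DummyModel:
            def predict(self, url: str) -> dict:
                return {"label": "placeholder", "score": 1.0}
        return DummyModel()

    model = load_model()  # loaded once per container, not per request

    @app.post("/predict")
    def predict(req: ImageRequest):
        # One request = one inference call; batching or queueing would sit in front of this.
        return model.predict(req.image_url)

In a setup like the one described, a Dockerfile would typically install fastapi and uvicorn and start the server with uvicorn main:app; GPU placement would then be handled by the ECS task definition rather than by application code.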
Posted 2 days ago
7.0 years
40 Lacs
Noida, Uttar Pradesh, India
Remote
Experience : 7.00+ years
Salary : INR 4000000.00 / year (based on experience)
Expected Notice Period : 15 Days
Shift : (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type : Remote
Placement Type : Full Time Permanent position (Payroll and Compliance to be managed by: MatchMove)
(*Note: This is a requirement for one of Uplers' clients - MatchMove)
What do you need for this opportunity?
Must have skills required: Gen AI, AWS data stack, Kinesis, open table format, PySpark, stream processing, Kafka, MySQL, Python
MatchMove is Looking for:
Technical Lead - Data Platform
As Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.
You will contribute to:
Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.
Responsibilities:
Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation (see the sketch after this listing).
Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.
Requirements
At least 7 years of experience in data engineering.
Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.
Brownie Points:
Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
Experience building data platforms for ML/AI teams or integrating with model feature stores.
Engagement Model:
Direct placement with client
This is a remote role
Shift timings: 10 AM to 7 PM
How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!
About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
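To give a flavor of the partitioned batch work the listing describes, here is a minimal, illustrative PySpark sketch. The S3 paths, column names, and partitioning scheme are invented for illustration; an Iceberg or Hudi table write would go through a configured catalog rather than a raw Parquet path.

    # Minimal illustrative sketch: read raw transaction events, derive a date
    # partition, and write partitioned Parquet to a curated S3 location.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("transactions-batch").getOrCreate()

    # Hypothetical raw zone; real pipelines would land here via DMS or Kinesis.
    raw = spark.read.json("s3://example-bucket/raw/transactions/")

    cleaned = (
        raw
        .withColumn("event_date", F.to_date("event_ts"))  # derive the partition key
        .dropDuplicates(["transaction_id"])               # keep re-runs idempotent
    )

    (cleaned.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-bucket/curated/transactions/"))

On AWS, a job of this shape would typically run under Glue or EMR, with the Glue Data Catalog supplying the table metadata that Athena and Lake Formation consume downstream.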
Posted 2 days ago
6.0 years
60 - 65 Lacs
Gurugram, Haryana, India
Remote
Posted 2 days ago
7.0 years
40 Lacs
Ghaziabad, Uttar Pradesh, India
Remote
Posted 2 days ago
6.0 years
60 - 65 Lacs
Noida, Uttar Pradesh, India
Remote
Posted 2 days ago
7.0 years
40 Lacs
Agra, Uttar Pradesh, India
Remote
Posted 2 days ago
3.0 - 8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Company: Indian / Global Engineering & Manufacturing Organization
Key Skills: ETL/ELT, RDF, OWL, SPARQL, Neo4j, AWS Neptune, ArangoDB, Python, SQL, Cypher, Semantic Modeling, Cloud Data Pipelines, Data Quality, Knowledge Graphs, Graph Query Optimization, Semantic Search
Roles and Responsibilities:
Design and build advanced data pipelines for integrating structured and unstructured data into graph models.
Develop and maintain semantic models using RDF, OWL, and SPARQL (see the sketch after this listing).
Implement and optimize data pipelines on cloud platforms such as AWS, Azure, or GCP.
Model real-world relationships through ontologies and hierarchical graph data structures.
Work with graph databases such as Neo4j, AWS Neptune, and ArangoDB for knowledge graph development.
Collaborate with cross-functional teams, including AI/ML engineers and business analysts, to support semantic search and analytics.
Ensure data quality, security, and compliance throughout the pipeline lifecycle.
Monitor, debug, and enhance the performance of graph queries and data transformation workflows.
Create clear documentation and communicate technical concepts to non-technical stakeholders.
Participate in global team meetings and knowledge-sharing sessions to align on data standards and architectural practices.
Experience Requirement:
3-8 years of hands-on experience in ETL/ELT engineering and data integration.
Experience working with graph databases such as Neo4j, AWS Neptune, or ArangoDB.
Proven experience implementing knowledge graphs, including semantic modeling with RDF, OWL, and SPARQL.
Strong Python and SQL programming skills, with proficiency in Cypher or other graph query languages.
Experience designing and deploying pipelines on cloud platforms (AWS preferred).
Track record of resolving complex data quality issues and optimizing pipeline performance.
Previous collaboration with data scientists and product teams to implement graph-based analytics or semantic search features.
Education: Any graduate (bachelor's degree in any discipline).
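As a flavor of the semantic-modeling work described above, here is a minimal, illustrative Python sketch using rdflib. The example namespace, the equipment/plant triples, and the query are invented for illustration, not taken from the employer's actual ontology.

    # Minimal illustrative sketch: build a tiny RDF graph and query it with SPARQL.
    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/")  # invented example namespace
    g = Graph()

    # Model a real-world relationship: a pump is equipment installed at a plant.
    g.add((EX.Pump42, RDF.type, EX.Equipment))
    g.add((EX.Pump42, EX.installedAt, EX.PlantA))
    g.add((EX.Pump42, EX.serialNumber, Literal("SN-0042")))

    # SPARQL: find all equipment installed at PlantA.
    results = g.query("""
        PREFIX ex: <http://example.org/>
        SELECT ?item WHERE {
            ?item a ex:Equipment ;
                  ex:installedAt ex:PlantA .
        }
    """)
    for row in results:
        print(row.item)

The same shape of model carries over to property graphs: in Cypher, under an analogous invented schema, the query might read MATCH (e:Equipment)-[:INSTALLED_AT]->(:Plant {name: "PlantA"}) RETURN e.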
Posted 2 days ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About TrueFan AI
TrueFan AI is India's leading AI-led platform at the intersection of celebrities, brands, and consumers. For over 4 years, we have been pioneering generative AI, creating 6.4+ million minutes of content in 175+ languages. We are the only AI platform to partner with 52+ blue-chip brands (Hero MotoCorp, Bajaj Finserv, Zomato, Cipla, ICICI Prudential, Adani Wilmar, Hamleys, Dainik Bhaskar, and more) and 100+ A-list celebrities, including Aamir Khan, Kareena Kapoor, and Ranveer Singh. We were awarded 'Gen AI/ML Disruptor of the Year' at the Amazon AWS Conclave 2025.
Location: Gurugram
About the Role:
We're looking for a driven and enthusiastic Sales Development Representative (SDR) to join our growing sales team. As an SDR, you'll be the first point of contact for potential customers. Your main responsibility is to generate qualified leads through outbound prospecting and help fill the pipeline for our Account Executives.
Key Responsibilities:
Proactively reach out to potential customers via cold calls, emails, and social media
Qualify outbound leads to identify potential sales opportunities
Set up discovery calls or meetings between qualified prospects and Account Executives
Maintain accurate records of all prospecting activities in the CRM (e.g., Salesforce, HubSpot)
Collaborate with the marketing and sales teams to refine outreach strategies
Stay up to date with industry trends and product knowledge
Meet or exceed monthly quotas for qualified leads and meetings booked
Qualifications:
Bachelor's degree in Business, Marketing, Communications, or a related field (or equivalent experience)
2+ years of experience in a sales or customer-facing role
Strong communication and interpersonal skills
Highly motivated, self-starter attitude with a results-driven mindset
Comfortable with high-volume outreach and rejection
Familiarity with CRM software and sales prospecting tools (Salesforce, Outreach, LinkedIn Sales Navigator, etc.)
Preferred:
Experience in B2B sales or the SaaS industry
Understanding of the sales cycle and pipeline management
Previous use of sales enablement tools like Apollo, ZoomInfo, or Gong
Posted 2 days ago
0 years
0 Lacs
Ghaziabad, Uttar Pradesh, India
On-site
We're excited to invite you to be part of the Summer AI Internship 2025 by Mirai School of Technology, a 6-week hybrid program designed to future-proof your skills and jumpstart your journey into the world of Artificial Intelligence.
Internship Highlights
Duration: 6 Weeks | Mode: Hybrid (Live Online + Self-paced)
Weekly Commitment: Just 6–8 hours/week
Open to: All B.Tech students, 1st to 4th year, across all branches
By the end of this program, you'll be able to:
Launch an AI-powered web app in just 10 minutes using tools like Streamlit or Gradio (see the sketch after this listing)
Use 10+ industry-grade AI tools like Cursor, Lovable, Gamma, and more
Build real-world AI projects like facial recognition and image classification, even if you're new to AI
Train and deploy your own ML model using real-world datasets, without needing a PhD
Create your own AI assistant chatbot for productivity, personal use, or college life
Make 50+ slide decks in 30 minutes using AI tools like Canva AI and Gamma
Code smarter with AI co-pilots like GitHub Copilot, Replit, and more
Top Performers Will Receive:
Letter of Recommendation + Certificate from Mirai School of Technology
Access to exclusive placement opportunities
A chance to earn ₹1,00,000 seed funding for your AI startup idea
Apply now at: https://forms.gle/25r4NURaVorme3Gq7
For more details visit: https://msot.org/internship
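For a sense of how quickly a tool like Streamlit lets you stand up an AI-style web app, here is a minimal, illustrative sketch. The classify() function is a placeholder stand-in, since the actual internship projects and models are not specified.

    # Minimal illustrative Streamlit app (run with: streamlit run app.py).
    # classify() is a placeholder heuristic; a real project would call a model here.
    import streamlit as st

    def classify(text: str) -> str:
        # Placeholder logic, not a real model.
        return "positive" if "good" in text.lower() else "neutral"

    st.title("Tiny AI demo")
    user_text = st.text_input("Type a sentence:")
    if user_text:
        st.write("Prediction:", classify(user_text))

Gradio offers an equivalent pattern via gr.Interface(fn=classify, inputs="text", outputs="text").launch().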
Posted 2 days ago
6.0 years
60 - 65 Lacs
Agra, Uttar Pradesh, India
Remote
Posted 2 days ago
3.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Data Scientist Job Description
Position: Data Scientist
Experience Level: [Junior/Mid-level] - 3-5 years
Job Summary
We are seeking a Data Scientist to join our team to analyze complex datasets, develop machine learning models, and provide data-driven insights that support business decisions.
Key Responsibilities
Data Analysis & Exploration
Perform comprehensive exploratory data analysis (EDA) on large datasets
Identify patterns, trends, and anomalies in data
Conduct statistical analysis to validate business hypotheses
Create data visualizations to communicate findings effectively
Assess and ensure data quality and integrity
Write complex SQL queries to extract and manipulate data
Machine Learning & Modeling
Design and develop machine learning models for business problems, using algorithms such as XGBoost, logistic regression, DNNs, and RNNs
Implement supervised and unsupervised learning algorithms
Perform feature engineering and selection
Evaluate model performance using appropriate metrics (see the sketch after this listing)
Deploy and monitor machine learning models in production
Programming & Development
Develop data analysis scripts and automation tools using Python
Build data pipelines and ETL processes
Create reusable code libraries and functions
Maintain version control and documentation standards
Required Qualifications
Technical Skills
SQL: Advanced proficiency in writing complex queries, joins, and subqueries, and in database optimization
Python: Strong programming skills for data analysis and machine learning
Exploratory Data Analysis: Expertise in EDA techniques, statistical analysis, and data visualization
Machine Learning: Solid understanding of ML algorithms, model evaluation, and validation techniques
Statistics: Knowledge of statistical methods, hypothesis testing, and experimental design
Knowledge of a major cloud platform (AWS, GCP, or Azure) is good to have
Familiarity with version control systems
Experience with containerization and deployment tools
Good to Have:
Experience on GenAI-based projects
Using GenAI to drive productivity in your work
Knowledge of PySpark is a plus
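To make the train-and-evaluate loop the posting alludes to concrete, here is a minimal, illustrative scikit-learn sketch. The synthetic dataset and the choice of gradient boosting are assumptions for illustration, not the employer's actual workflow.

    # Minimal illustrative sketch: train a gradient-boosting classifier on
    # synthetic data and evaluate it with ROC-AUC. The dataset is synthetic,
    # standing in for the real business data the posting describes.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=42
    )

    model = GradientBoostingClassifier(random_state=42)
    model.fit(X_train, y_train)

    # Choose a metric appropriate to the problem; ROC-AUC suits binary classification.
    scores = model.predict_proba(X_test)[:, 1]
    print(f"ROC-AUC: {roc_auc_score(y_test, scores):.3f}")

Swapping in xgboost.XGBClassifier, which exposes the same fit/predict_proba interface, would line up with the posting's XGBoost mention.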
Posted 2 days ago
7.0 years
40 Lacs
Chennai, Tamil Nadu, India
Remote
Posted 2 days ago
6.0 years
60 - 65 Lacs
Coimbatore, Tamil Nadu, India
Remote
Posted 2 days ago
6.0 years
60 - 65 Lacs
Chennai, Tamil Nadu, India
Remote
Experience : 6.00 + years
Salary : INR 6000000-6500000 / year (based on experience)
Expected Notice Period : 15 Days
Shift : (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type : Remote
Placement Type : Full Time Permanent position (Payroll and Compliance to be managed by: Crop.Photo)
(*Note: This is a requirement for one of Uplers' clients - Crop.Photo)
What do you need for this opportunity?
Must have skills required: ML, Python
Crop.Photo is Looking for:
We’re looking for a hands-on engineering lead to own the delivery of our GenAI-centric product from the backend up to the UI — while integrating visual AI pipelines built by ML engineers. You’ll be both a builder and a leader: writing clean Python, Java, and TypeScript, scaling AWS-based systems, mentoring engineers, and making architectural decisions that stand the test of scale. You won’t be working in a silo — this is a role for someone who thrives in fast-paced, high-context environments where product, design, and AI are deeply intertwined.
(Note: This role requires both technical mastery and leadership skills - we’re looking for someone who can write production code, make architectural decisions, and lead a team to success.)
What You’ll Do
Lead development of our Java, Python (FastAPI), and Node.js backend services on AWS
Deploy ML pipelines (built by the ML team) into containerized inference workflows using FastAPI, Docker, and GPU-enabled ECS on EC2 (a minimal sketch of this pattern appears after the skills list below)
Deploy and manage services on AWS ECS/Fargate, Lambda, API Gateway, and GPU-powered EC2
Contribute to the React/TypeScript frontend when needed to accelerate product delivery
Work closely with the founder, product, and UX teams to translate business needs into working product
Make architecture and infrastructure decisions — from media processing to task queues to storage
Own the performance, reliability, and cost-efficiency of our core services
Hire and mentor junior/mid-level engineers over time
Drive technical planning, sprint prioritization, and trade-off decisions
What You’ll Bring
A customer-centric approach — you think about how your work affects end users and the product experience, not just model performance
A quest for high-quality deliverables — you write clean, tested code and debug edge cases until they’re truly fixed
The ability to frame problems from scratch and work without strict handoffs — you build from a goal, not a ticket
Skills & Experience We Expect
Core Engineering Experience
6–8 years of professional software engineering experience in production environments
2–3 years of experience leading engineering teams of 5+ engineers
Cloud Infrastructure & AWS Expertise (5+ years)
Deep experience with AWS Lambda, ECS, and container orchestration tools
Familiarity with API Gateway and microservices architecture best practices
Proficiency with S3, DynamoDB, and other AWS-native data services
CloudWatch, X-Ray, or similar tools for monitoring and debugging distributed systems
Strong grasp of IAM, roles, and security best practices in cloud environments
Backend Development (5–7 years)
Java: advanced concurrency, scalability, and microservice design
Python: experience with FastAPI and building production-grade MLOps pipelines
Node.js & TypeScript: strong backend engineering and API development
Deep understanding of RESTful API design and implementation
Docker: 3+ years of containerization experience for building and deploying services
Hands-on experience deploying ML inference pipelines (built by the ML team) using Docker, FastAPI, and GPU-based AWS infrastructure (e.g., ECS, EC2) — 2+ years
System Optimization & Middleware (3–5 years)
Application performance optimization and AWS cloud cost optimization
Use of background job frameworks (e.g., Celery, BullMQ, AWS Step Functions)
Media/image processing using tools like Sharp, PIL, ImageMagick, or OpenCV
Database design and optimization for low-latency, high-availability systems
Frontend Development (2–3 years)
Hands-on experience with React and TypeScript in modern web apps
Familiarity with Redux, the Context API, and modern state-management patterns
Comfortable with modern build tools, CI/CD, and frontend deployment practices
System Design & Architecture (4–6 years)
Designing and implementing microservices-based systems
Experience with event-driven architectures using queues or pub/sub
Implementing caching strategies (e.g., Redis, CDN edge caching); a minimal cache-aside sketch appears at the end of this posting
Architecting high-performance image/media pipelines
Leadership & Communication (2–3 years)
Proven ability to lead engineering teams and drive project delivery
Skilled at writing clear and concise technical documentation
Experience mentoring engineers, conducting code reviews, and fostering growth
Track record of shipping high-impact products in fast-paced environments
Strong customer-centric and growth-oriented mindset, especially in startup settings — able to take high-level goals and independently drive toward outcomes without constant handoffs or back-and-forth with the founder
Proactive in using tools like ChatGPT, GitHub Copilot, or similar AI copilots to improve personal and team efficiency, remove blockers, and iterate faster
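For concreteness, here is a minimal, hypothetical sketch of the containerized inference pattern referenced above: a FastAPI app that loads a model once at startup and serves predictions over HTTP, ready to be packaged with Docker for GPU-backed ECS/EC2. The model artifact path, input schema, and endpoint name are illustrative assumptions, not Crop.Photo's actual stack.

```python
# Hypothetical sketch of a containerized ML inference service (FastAPI).
# The model artifact "model.pt" and the request schema are placeholders.
from contextlib import asynccontextmanager

import torch
from fastapi import FastAPI
from pydantic import BaseModel

MODEL = {}

@asynccontextmanager
async def lifespan(app: FastAPI):
    # Load the model once at startup so every request reuses warm weights.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    MODEL["net"] = torch.jit.load("model.pt", map_location=device).eval()
    MODEL["device"] = device
    yield
    MODEL.clear()

app = FastAPI(lifespan=lifespan)

class PredictRequest(BaseModel):
    features: list[float]  # illustrative input shape

@app.post("/predict")
def predict(req: PredictRequest):
    x = torch.tensor([req.features], device=MODEL["device"])
    with torch.no_grad():
        y = MODEL["net"](x)
    return {"prediction": y.squeeze(0).tolist()}
```

In practice an app like this would be baked into a CUDA-capable Docker image and registered as an ECS task on GPU instances; those wiring details are intentionally omitted here.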
How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!
About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement.
(Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
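As a footnote to the caching bullet in the skills list above, a minimal cache-aside pattern with Redis might look like the following sketch; the key scheme, TTL, and the fetch_product_from_db helper are hypothetical stand-ins, not part of any stack described in this listing.

```python
# Hypothetical cache-aside sketch using Redis (redis-py).
import json

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def get_product(product_id: str) -> dict:
    """Return a product record, serving from Redis when possible."""
    cache_key = f"product:{product_id}"
    cached = r.get(cache_key)
    if cached is not None:
        return json.loads(cached)  # cache hit: skip the database entirely
    record = fetch_product_from_db(product_id)  # cache miss: load from source
    r.setex(cache_key, 300, json.dumps(record))  # keep for 5 minutes
    return record

def fetch_product_from_db(product_id: str) -> dict:
    # Placeholder standing in for a real database query.
    return {"id": product_id, "name": "example"}
```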
Posted 2 days ago
India has seen a significant rise in the demand for Machine Learning (ML) professionals in recent years. With the growth of technology companies and increasing adoption of AI-driven solutions, the job market for ML roles in India is thriving. Job seekers with expertise in ML have a wide range of opportunities to explore in various industries such as IT, healthcare, finance, e-commerce, and more.
The average salary range for Machine Learning professionals in India varies based on experience levels. Entry-level positions such as ML Engineers or Data Scientists can expect salaries starting from INR 6-8 lakhs per annum. With experience, Senior ML Engineers or ML Architects can earn upwards of INR 15-20 lakhs per annum.
The career progression in Machine Learning typically follows a path from Junior Data Scientist or ML Engineer to Senior Data Scientist, ML Architect, and eventually to a Tech Lead or Chief Data Scientist role.
In addition to proficiency in Machine Learning, professionals in this field are often expected to have supporting skills such as Python programming, statistics and probability, data processing with libraries like pandas and NumPy, and familiarity with cloud platforms.
As you explore job opportunities in Machine Learning in India, remember to hone your skills, stay updated with the latest trends in the field, and approach interviews with confidence. With the right preparation and mindset, you can land your dream ML job and contribute to the exciting world of AI and data science. Good luck!