1.0 years
25 Lacs
Guwahati, Assam, India
Remote
Experience: 1+ years
Salary: INR 2,500,000 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Yugen AI)
(Note: This is a requirement for one of Uplers' clients, Yugen AI. The same opening appears below for additional locations.)

What do you need for this opportunity?
Must-have skills: Rust, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain

Yugen AI is looking for:
We are looking for backend engineers with 1-3 years of production experience shipping and supporting backend code. You will join the team that owns the real-time ingestion and analytics layer powering customer-facing dashboards, trading tools, and research.

Responsibilities
- Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets.
- Guarantee end-to-end reliability by owning performance, fault tolerance, and cost efficiency from source to sink.
- Instrument every job with tracing, structured logs, and Prometheus metrics, so each job reports its own health (a minimal instrumentation sketch follows this listing).
- Publish Grafana dashboards and alerts for latency, throughput, and failure rates; act on them before users notice.
- Partner with DevOps to containerize workloads and automate deployments.
- Collaborate with stakeholders to verify data completeness, automate reconciliation checks, and replay late or corrected records to keep datasets pristine.

Skills
- Proficient in Rust: comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation.
- Stream processing: have built or maintained high-throughput pipelines on NATS (ideal) or Kafka.
- Deep systems engineering: you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation, and can instrument code with tracing, metrics, and logs.
- ClickHouse (or a similar OLAP store): able to design MergeTree tables, reason about partition and ORDER BY keys, and optimise bulk inserts.
- Cloud: have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups.
- Nice to have: exposure to blockchain or high-volume financial data streams.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of being shortlisted and meeting the client for an interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help talent find and apply for relevant opportunities and progress in their careers, and we will support any grievances or challenges faced during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
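To make the instrumentation expectation above concrete, here is a minimal sketch of a job that reports its own health. The role itself is Rust-centric; purely for illustration, the same pattern is shown in Python with the prometheus_client library, and the metric names, port, and fake workload are assumptions rather than anything from the posting.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical metric names; a real pipeline would label by stream/stage.
EVENTS = Counter("events_processed_total", "Events processed by this job")
FAILURES = Counter("events_failed_total", "Events that failed processing")
LATENCY = Histogram("event_processing_seconds", "Per-event processing time")

def process(event):
    time.sleep(random.uniform(0.001, 0.01))  # stand-in for real work

def run(events):
    for event in events:
        with LATENCY.time():        # records the duration into the histogram
            try:
                process(event)
                EVENTS.inc()
            except Exception:
                FAILURES.inc()      # alert on rate(events_failed_total[5m])
                raise

if __name__ == "__main__":
    start_http_server(8000)         # exposes /metrics for Prometheus to scrape
    run({"id": i} for i in range(1_000))
```

A Grafana dashboard then charts the failure rate and latency quantiles, which is the "act on them before users notice" loop the responsibilities describe.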
Posted 2 months ago
1.0 years
25 Lacs
Ranchi, Jharkhand, India
Remote
Posted 2 months ago
1.0 years
25 Lacs
Raipur, Chhattisgarh, India
Remote
Posted 2 months ago
1.0 years
25 Lacs
Jamshedpur, Jharkhand, India
Remote
Posted 2 months ago
1.0 years
25 Lacs
Amritsar, Punjab, India
Remote
Posted 2 months ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
We are looking for an experienced LLM Engineer with 3 to 6 years of software development experience and at least 1-2 years of hands-on expertise in building LLM-based solutions. The ideal candidate will have strong skills in Python, LLM development tools (RAG, vector databases, agentic workflows, LoRA, etc.), cloud platforms (AWS, Azure, GCP), DevOps, and full-stack development.

Responsibilities:
- Design and develop scalable, distributed software systems using modern architecture patterns.
- Lead the development of LLM-based applications using RAG, vector DBs, agentic workflows, LoRA, QLoRA, and related technologies (a toy retrieval sketch follows this listing).
- Translate business requirements into technical solutions and drive technical execution.
- Build, deploy, and maintain LLM pipelines integrated with APIs and cloud platforms.
- Implement DevOps practices using Docker and Kubernetes, and automate CI/CD pipelines.
- Work with cloud services such as AWS, Azure, or GCP for deploying scalable applications.
- Collaborate with cross-functional teams, perform code reviews, and follow best engineering practices.
- Develop APIs and backend services using Python (FastAPI, Django) with secure authentication (JWT, Azure AD, IDM).
- Contribute to front-end development using ReactJS, NextJS, and Tailwind CSS (preferred).
- Utilize LLM APIs (OpenAI, Anthropic, AWS Bedrock) and SDKs (LangChain, DSPy) for application development.
- Ensure application security, performance, scalability, and compliance with privacy standards.
- Follow Agile methodology for continuous development and delivery.

Skills & Qualifications:
- Bachelor's degree in Computer Science, IT, or a related field (or equivalent experience).
- 5+ years of software development experience, including at least 1-2 years building LLM solutions.
- Proficient in Python and JavaScript.
- Strong experience with LLM patterns such as RAG, vector databases, hybrid search, agent development, prompt engineering, and agentic workflows.
- Knowledge of API development using FastAPI, Django, WebSockets, and gRPC.
- Familiarity with access management using JWT, Azure AD, and IDM.
- Hands-on experience with LLM APIs (OpenAI, Anthropic, AWS Bedrock) and SDKs (LangChain, DSPy).
- Experience with cloud platforms (AWS, Azure, GCP), including IAM, monitoring, load balancing, autoscaling, networking, databases, storage, ECR, AKS, and ACR.
- Experience with DevOps tools: Docker, Kubernetes, CI/CD pipelines, automation scripts.
- Exposure to front-end frameworks like ReactJS, NextJS, and Tailwind CSS (preferred).
- Experience deploying production-grade LLM applications for large user bases.
- Strong knowledge of software engineering practices (Git, version control, Agile/DevOps).
- Excellent communication skills with the ability to explain complex concepts clearly.
- Strong understanding of scalable system design, security best practices, and compliance standards.
- Familiarity with SDLC processes and Agile product development cycles.
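As a toy illustration of the RAG retrieval step named above: rank documents by cosine similarity to the query, then feed the winners into the prompt as context. This self-contained sketch uses a bag-of-words stand-in for embeddings; a production system would use a learned embedding model and a vector database such as Pinecone or Weaviate, and the documents here are invented examples.

```python
import numpy as np

# Toy retrieval step of a RAG pipeline. Only the retrieve-then-generate
# shape matters; the vocabulary and documents are illustrative.
DOCS = [
    "Kubernetes autoscaling with HPA and KEDA",
    "Fine-tuning LLMs with LoRA adapters",
    "Retrieval grounds LLM generation in source documents",
]

def embed(text, vocab):
    """Bag-of-words vector; a real system would call an embedding model."""
    vec = np.zeros(len(vocab))
    for word in text.lower().split():
        vec[vocab[word]] += 1.0
    return vec

def retrieve(query, docs, k=1):
    words = sorted({w for t in docs + [query] for w in t.lower().split()})
    vocab = {w: i for i, w in enumerate(words)}
    q = embed(query, vocab)
    def score(doc):
        d = embed(doc, vocab)
        denom = np.linalg.norm(q) * np.linalg.norm(d)
        return (q @ d) / denom if denom else 0.0
    return sorted(docs, key=score, reverse=True)[:k]

# The retrieved text would be inserted into the LLM prompt as grounding context.
print(retrieve("how does retrieval help llm generation", DOCS))
```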
Posted 2 months ago
1.0 years
25 Lacs
Jaipur, Rajasthan, India
Remote
Posted 2 months ago
1.0 years
25 Lacs
Greater Lucknow Area
Remote
Posted 2 months ago
1.0 years
25 Lacs
Thane, Maharashtra, India
Remote
Posted 2 months ago
1.0 years
25 Lacs
India
Remote
Posted 2 months ago
1.0 years
25 Lacs
Nashik, Maharashtra, India
Remote
Posted 2 months ago
1.0 years
25 Lacs
Kanpur, Uttar Pradesh, India
Remote
Posted 2 months ago
1.0 years
25 Lacs
Nagpur, Maharashtra, India
Remote
Posted 2 months ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Posted 2 months ago
8.0 years
10 - 24 Lacs
Bhubaneswar, Odisha, India
On-site
We are hiring a seasoned Site Reliability Engineer with strong experience in building and operating scalable systems on Google Cloud Platform (GCP). You will be responsible for ensuring system availability, performance, and security in a complex microservices ecosystem, while collaborating cross-functionally to improve infrastructure reliability and developer velocity.

Key Responsibilities
- Design and maintain highly available, fault-tolerant systems on GCP using SRE best practices.
- Implement SLIs/SLOs, monitor error budgets, and lead post-incident reviews with RCA documentation (an error-budget sketch follows this listing).
- Automate infrastructure provisioning (Terraform/Deployment Manager) and CI/CD workflows.
- Operate and optimize Kubernetes (GKE) clusters, including autoscaling, resource tuning, and HPA policies.
- Integrate observability across microservices using Prometheus, Grafana, Stackdriver, and OpenTelemetry.
- Manage and fine-tune databases (MySQL/Postgres/BigQuery/Firestore) for performance and cost.
- Improve API reliability and performance through Apigee (proxy tuning, quota/policy handling, caching).
- Drive container best practices, including image optimization, vulnerability scanning, and registry hygiene.
- Participate in on-call rotations, capacity planning, and infrastructure cost reviews.

Must-Have Skills
- Minimum 8 years of total experience, with at least 3 years in SRE, DevOps, or Platform Engineering roles.
- Strong expertise in GCP services (GKE, IAM, Cloud Run, Cloud Functions, Pub/Sub, VPC, Monitoring).
- Advanced Kubernetes knowledge: pod orchestration, secrets management, liveness/readiness probes.
- Experience writing automation tools/scripts in Python, Bash, or Go.
- Solid understanding of incident response frameworks and runbook development.
- CI/CD expertise with GitHub Actions, Cloud Build, or similar tools.

Good to Have
- Apigee hands-on experience: API proxy lifecycle, policies, debugging, and analytics.
- Database optimization: index tuning, slow-query analysis, horizontal/vertical sharding.
- Distributed monitoring and tracing: familiarity with Jaeger, Zipkin, or GCP Trace.
- Service mesh (Istio/Linkerd) and secure workload identity configurations.
- Exposure to BCP/DR planning, infrastructure threat modeling, and compliance (ISO/SOC 2).

Educational & Certification Requirements
- B.Tech / M.Tech / MCA in Computer Science or equivalent.
- GCP Professional Cloud DevOps Engineer / Kubernetes Administrator certification (preferred).

Skills: Postgres, Prometheus, databases, BigQuery, MySQL, OpenTelemetry, DevOps, Kubernetes, CI/CD, Stackdriver, Google Cloud Platform (GCP), Ansible, Go, Bash, Grafana, Firestore, GitHub Actions, Terraform, cloud, Python, Cloud Build, Kubernetes (GKE), Apigee
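Since the listing asks for SLI/SLO and error-budget work and names Python as an accepted scripting language, here is a minimal sketch of the arithmetic behind an error-budget report. The 99.9% target and the request counts are hypothetical examples, not values from the posting.

```python
# Error-budget arithmetic for a request-based availability SLI:
# with an SLO of 99.9%, the budget is 0.1% of requests in the window.

def error_budget_report(total_requests, failed_requests, slo_target=0.999):
    """Return the observed SLI and the fraction of error budget consumed."""
    if total_requests == 0:
        return {"sli": 1.0, "budget_consumed": 0.0}
    sli = 1.0 - failed_requests / total_requests
    allowed_failures = (1.0 - slo_target) * total_requests  # the error budget
    return {"sli": sli, "budget_consumed": failed_requests / allowed_failures}

# 10M requests with 4,000 failures against a 99.9% SLO: the budget is
# 10,000 failed requests, so 40% of it has been burned.
report = error_budget_report(10_000_000, 4_000)
print(f"SLI: {report['sli']:.4%}, budget consumed: {report['budget_consumed']:.0%}")
```

Burn-rate alerts in Prometheus/Grafana are typically built on exactly this ratio, evaluated over multiple windows.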
Posted 2 months ago
12.0 years
0 Lacs
India
Remote
Azure Infrastructure Engineer
Remote, India

Role Overview:
We're seeking a senior Azure Infrastructure Engineer with 8-12 years of deep hands-on experience in building, deploying, and operating cloud-native infrastructure. You'll be responsible for core components like AKS, Terraform, Docker, Helm, KEDA, HPA, Istio/service mesh, CI/CD pipelines, Azure networking, and disaster recovery.

Key Responsibilities:
- Operate and troubleshoot production AKS clusters.
- Build and deploy workloads using Docker and Helm.
- Automate infrastructure provisioning with Terraform.
- Configure autoscaling using KEDA and HPA (the HPA replica formula is sketched after this listing).
- Manage Istio or an equivalent service mesh (ingress, routing, mTLS).
- Maintain robust CI/CD pipelines (Azure DevOps/GitHub Actions).
- Handle complex Azure networking (VNet, NSG, DNS, load balancers, peering).
- Support and execute disaster recovery procedures.

Thanks & Regards,
Prakash Pandey
Sr. Technical Recruiter, ITMC Systems, Inc
Cell: +1 (973) 348-6836 / +91 8294988910
Email: Prakash@itmcsystems.com
https://www.linkedin.com/in/prakash-pandey-7b827524a
181 New Road, Suite 304, Parsippany, NJ 07054
www.itmcsystems.com // USA. CANADA. INDIA
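As context for the KEDA/HPA bullet above, the replica count the Kubernetes Horizontal Pod Autoscaler converges on follows its documented core formula, sketched below in Python. The pod counts and metric values are hypothetical, and the sketch omits HPA's tolerance band and stabilization windows.

```python
import math

def hpa_desired_replicas(current_replicas, current_metric, target_metric,
                         min_replicas=1, max_replicas=10):
    """Core HPA formula: desired = ceil(current * current_metric / target_metric),
    clamped to [min_replicas, max_replicas]."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# Hypothetical: 4 pods averaging 90% CPU against a 45% target scale to 8.
print(hpa_desired_replicas(4, current_metric=90.0, target_metric=45.0))  # -> 8
```

KEDA extends the same mechanism to event-driven signals (queue depth, consumer lag), feeding external scaler values into this calculation.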
Posted 2 months ago
6.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Fynd is India's largest omnichannel platform and a multi-platform tech company specialising in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries.

We are looking for an SDE 3 - Fullstack responsible for leading and mentoring a team of full-stack developers to build scalable, high-performance applications. Your primary focus will be on developing robust, maintainable, and efficient software solutions that power our platform. You will be responsible for designing and optimizing frontend and backend systems, ensuring seamless integration and high availability of services.

What will you do at Fynd?
- Build scalable and loosely coupled services to extend our platform.
- Full-spectrum ownership: act as both solution architect and hands-on developer, driving the design, build, and deployment of scalable systems.
- Stakeholder engagement: collaborate closely with stakeholders from Reliance Industries (RIL), including Product, Business, Operations, and legacy-system owners, to align on requirements, timelines, and delivery strategy.
- System integration leadership: design solutions that bridge modern platforms with existing legacy systems, ensuring continuity and scalability.
- Project leadership: take complete ownership of the project lifecycle, from discovery and architecture through execution and release.
- Build bulletproof integrations with third-party APIs for various use cases.
- Evolve our infrastructure and enhance availability and performance.
- Have full autonomy to own your code, decide on technologies, and operate large-scale applications on AWS.
- Mentor and lead a team of full-stack engineers, fostering a culture of innovation and collaboration.
- Optimize frontend and backend performance, caching mechanisms, and API integrations.
- Implement and enforce security best practices to safeguard applications and user data.
- Stay up to date with emerging full-stack technologies and evaluate their potential impact on our tech stack.
- Contribute to the open-source community through code contributions and blog posts.

Some Specific Requirements
- 6+ years of full-stack development experience, with expertise in JavaScript, React.js, Node.js, and Python.
- Proven experience building and scaling systems at an SDE2/SDE3 level or beyond.
- Strong communication and alignment skills with non-technical stakeholders.
- Ability to architect and execute complex solutions independently.
- Experience in multi-system integration, particularly within large enterprise contexts.
- Prior experience developing and working on consumer-facing web/app products.
- Expertise in backend frameworks such as Express.js, Koa.js, or Socket.io.
- Strong understanding of async programming using callbacks, promises, and async/await.
- Hands-on experience with frontend technologies: HTML, CSS, AJAX, and React.js.
- Working knowledge of MongoDB, Redis, and MySQL.
- Solid understanding of data structures, algorithms, and operating systems.
- Experience with AWS services such as EC2, ELB, Auto Scaling, CloudFront, and S3.
- Experience with CI/CD pipelines, containerization (Docker, Kubernetes), and DevOps practices.
- Ability to troubleshoot complex full-stack issues and drive continuous improvements.
- Good understanding of GraphQL, WebSockets, and real-time applications is a plus.
- Experience with Vue.js would be an added advantage.

What do we offer?

Growth
Growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets, and brilliant people to grow even further. We teach, groom, and nurture our people to become leaders. You get to grow with a company that is growing exponentially.

Flex University
We help you upskill by organising in-house courses on important subjects.
Learning Wallet: you can also take an external course to upskill; we reimburse it for you.

Culture
Community and team-building activities. We host weekly, quarterly, and annual events/parties.

Wellness
Mediclaim policy for you + parents + spouse + kids. An experienced therapist for better mental health, improved productivity, and work-life balance.

We work from the office 5 days a week to promote collaboration and teamwork. Join us to make an impact in an engaging, in-person environment!
Posted 2 months ago
5.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Job Summary
We’re seeking a hands-on GenAI & Computer Vision Engineer with 3–5 years of experience delivering production-grade AI solutions. You must be fluent in the core libraries, tools, and cloud services listed below, and be able to own end-to-end model development—from research and fine-tuning through deployment, monitoring, and iteration. In this role, you’ll tackle domain-specific challenges such as LLM hallucinations, vector search scalability, real-time inference constraints, and concept drift in vision models.

Key Responsibilities

Generative AI & LLM Engineering
Fine-tune and evaluate LLMs (Hugging Face Transformers, Ollama, LLaMA) for specialized tasks
Deploy high-throughput inference pipelines using vLLM or Triton Inference Server
Design agent-based workflows with LangChain or LangGraph, integrating vector databases (Pinecone, Weaviate) for retrieval-augmented generation
Build scalable inference APIs with FastAPI or Flask, managing batching, concurrency, and rate-limiting (see the batching sketch after this listing)

Computer Vision Development
Develop and optimize CV models (YOLOv8, Mask R-CNN, ResNet, EfficientNet, ByteTrack) for detection, segmentation, classification, and tracking
Implement real-time pipelines using NVIDIA DeepStream or OpenCV (cv2); optimize with TensorRT or ONNX Runtime for edge and cloud deployments
Handle data challenges—augmentation, domain adaptation, semi-supervised learning—and mitigate model drift in production

MLOps & Deployment
Containerize models and services with Docker; orchestrate with Kubernetes (KServe) or AWS SageMaker Pipelines
Implement CI/CD for model/version management (MLflow, DVC), automated testing, and performance monitoring (Prometheus + Grafana)
Manage scalability and cost by leveraging cloud autoscaling on AWS (EC2/EKS), GCP (Vertex AI), or Azure ML (AKS)

Cross-Functional Collaboration
Define SLAs for latency, accuracy, and throughput alongside product and DevOps teams
Evangelize best practices in prompt engineering, model governance, data privacy, and interpretability
Mentor junior engineers on reproducible research, code reviews, and end-to-end AI delivery

Required Qualifications
You must be proficient in at least one tool from each category below:
LLM Frameworks & Tooling: Hugging Face Transformers, Ollama, vLLM, or LLaMA
Agent & Retrieval Tools: LangChain or LangGraph; RAG with Pinecone, Weaviate, or Milvus
Inference Serving: Triton Inference Server; FastAPI or Flask
Computer Vision Frameworks & Libraries: PyTorch or TensorFlow; OpenCV (cv2) or NVIDIA DeepStream
Model Optimization: TensorRT; ONNX Runtime; Torch-TensorRT
MLOps & Versioning: Docker and Kubernetes (KServe, SageMaker); MLflow or DVC
Monitoring & Observability: Prometheus; Grafana
Cloud Platforms: AWS (SageMaker, EC2/EKS), GCP (Vertex AI, AI Platform), or Azure ML (AKS, ML Studio)
Programming Languages: Python (required); C++ or Go (preferred)

Additionally:
Bachelor’s or Master’s in Computer Science, Electrical Engineering, AI/ML, or a related field
3–5 years of professional experience shipping both generative and vision-based AI models in production
Strong problem-solving mindset; ability to debug issues like LLM drift, vector index staleness, and model degradation
Excellent verbal and written communication skills

Typical Domain Challenges You’ll Solve
LLM Hallucination & Safety: Implement grounding, filtering, and classifier layers to reduce false or unsafe outputs
Vector DB Scaling: Maintain low-latency, high-throughput similarity search as embeddings grow to millions
Inference Latency: Balance batch sizing and concurrency to meet real-time SLAs on cloud and edge hardware
Concept & Data Drift: Automate drift detection and retraining triggers in vision and language pipelines
Multi-Modal Coordination: Seamlessly orchestrate data flow between vision models and LLM agents in complex workflows

About Company
Hi there! We are Auriga IT. We power businesses across the globe through digital experiences, data, and insights. From the apps we design to the platforms we engineer, we’re driven by an ambition to create world-class digital solutions and make an impact. Our team has been part of building solutions for the likes of Zomato, Yes Bank, Tata Motors, Amazon, Snapdeal, Ola, Practo, Vodafone, Meesho, Volkswagen, Droom, and many more. We are a group of people who just could not leave our college life behind; Auriga was founded on the desire to keep working together with friends and enjoy an extended college life. Who hasn’t dreamt of working with friends for a lifetime? Come join in!
Our website: https://aurigait.com/
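Editor’s illustration (not part of the posting): a minimal sketch of the micro-batching pattern the inference-API responsibility above describes, trading a few milliseconds of wait for larger batches. FastAPI, Pydantic, and asyncio are real libraries used as documented; `embed_batch`, the queue sizes, and the timing constants are stand-ins.

```python
# Minimal micro-batching inference endpoint (illustrative sketch only).
# `embed_batch` is a hypothetical stand-in for a real model call.
import asyncio
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
queue: asyncio.Queue = asyncio.Queue(maxsize=1024)  # bounded queue = back-pressure
MAX_BATCH, MAX_WAIT_S = 32, 0.01

class EmbedRequest(BaseModel):
    text: str

def embed_batch(texts: list[str]) -> list[list[float]]:
    # Stand-in for a real model call (e.g., a Transformers pipeline).
    return [[float(len(t))] for t in texts]

async def batcher() -> None:
    while True:
        batch = [await queue.get()]                  # wait for the first request
        loop = asyncio.get_running_loop()
        deadline = loop.time() + MAX_WAIT_S
        while len(batch) < MAX_BATCH:
            timeout = deadline - loop.time()
            if timeout <= 0:
                break
            try:
                batch.append(await asyncio.wait_for(queue.get(), timeout))
            except asyncio.TimeoutError:
                break
        texts = [t for t, _ in batch]
        futures = [f for _, f in batch]
        for fut, vec in zip(futures, embed_batch(texts)):
            fut.set_result(vec)                      # unblock each caller

@app.on_event("startup")
async def start() -> None:
    asyncio.create_task(batcher())

@app.post("/embed")
async def embed(req: EmbedRequest) -> dict:
    fut = asyncio.get_running_loop().create_future()
    await queue.put((req.text, fut))                 # enqueue; await the batched result
    return {"embedding": await fut}
```

Tuning MAX_BATCH against MAX_WAIT_S is exactly the batch-size/latency trade-off named under “Inference Latency” above.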
Posted 2 months ago
3.0 years
3 - 6 Lacs
Jaipur
On-site
Job Summary
We’re seeking a hands-on GenAI & Computer Vision Engineer with 3–5 years of experience delivering production-grade AI solutions. You must be fluent in the core libraries, tools, and cloud services listed below, and be able to own end-to-end model development—from research and fine-tuning through deployment, monitoring, and iteration. In this role, you’ll tackle domain-specific challenges such as LLM hallucinations, vector search scalability, real-time inference constraints, and concept drift in vision models.

Key Responsibilities

Generative AI & LLM Engineering
Fine-tune and evaluate LLMs (Hugging Face Transformers, Ollama, LLaMA) for specialized tasks
Deploy high-throughput inference pipelines using vLLM or Triton Inference Server
Design agent-based workflows with LangChain or LangGraph, integrating vector databases (Pinecone, Weaviate) for retrieval-augmented generation
Build scalable inference APIs with FastAPI or Flask, managing batching, concurrency, and rate-limiting

Computer Vision Development
Develop and optimize CV models (YOLOv8, Mask R-CNN, ResNet, EfficientNet, ByteTrack) for detection, segmentation, classification, and tracking
Implement real-time pipelines using NVIDIA DeepStream or OpenCV (cv2); optimize with TensorRT or ONNX Runtime for edge and cloud deployments
Handle data challenges—augmentation, domain adaptation, semi-supervised learning—and mitigate model drift in production

MLOps & Deployment
Containerize models and services with Docker; orchestrate with Kubernetes (KServe) or AWS SageMaker Pipelines
Implement CI/CD for model/version management (MLflow, DVC), automated testing, and performance monitoring (Prometheus + Grafana)
Manage scalability and cost by leveraging cloud autoscaling on AWS (EC2/EKS), GCP (Vertex AI), or Azure ML (AKS)

Cross-Functional Collaboration
Define SLAs for latency, accuracy, and throughput alongside product and DevOps teams
Evangelize best practices in prompt engineering, model governance, data privacy, and interpretability
Mentor junior engineers on reproducible research, code reviews, and end-to-end AI delivery

Required Qualifications
You must be proficient in at least one tool from each category below:
LLM Frameworks & Tooling: Hugging Face Transformers, Ollama, vLLM, or LLaMA
Agent & Retrieval Tools: LangChain or LangGraph; RAG with Pinecone, Weaviate, or Milvus
Inference Serving: Triton Inference Server; FastAPI or Flask
Computer Vision Frameworks & Libraries: PyTorch or TensorFlow; OpenCV (cv2) or NVIDIA DeepStream
Model Optimization: TensorRT; ONNX Runtime; Torch-TensorRT
MLOps & Versioning: Docker and Kubernetes (KServe, SageMaker); MLflow or DVC
Monitoring & Observability: Prometheus; Grafana
Cloud Platforms: AWS (SageMaker, EC2/EKS), GCP (Vertex AI, AI Platform), or Azure ML (AKS, ML Studio)
Programming Languages: Python (required); C++ or Go (preferred)

Additionally:
Bachelor’s or Master’s in Computer Science, Electrical Engineering, AI/ML, or a related field
3–5 years of professional experience shipping both generative and vision-based AI models in production
Strong problem-solving mindset; ability to debug issues like LLM drift, vector index staleness, and model degradation
Excellent verbal and written communication skills

Typical Domain Challenges You’ll Solve
LLM Hallucination & Safety: Implement grounding, filtering, and classifier layers to reduce false or unsafe outputs
Vector DB Scaling: Maintain low-latency, high-throughput similarity search as embeddings grow to millions
Inference Latency: Balance batch sizing and concurrency to meet real-time SLAs on cloud and edge hardware
Concept & Data Drift: Automate drift detection and retraining triggers in vision and language pipelines (see the drift-check sketch after this listing)
Multi-Modal Coordination: Seamlessly orchestrate data flow between vision models and LLM agents in complex workflows

About Company
Hi there! We are Auriga IT. We power businesses across the globe through digital experiences, data, and insights. From the apps we design to the platforms we engineer, we’re driven by an ambition to create world-class digital solutions and make an impact. Our team has been part of building solutions for the likes of Zomato, Yes Bank, Tata Motors, Amazon, Snapdeal, Ola, Practo, Vodafone, Meesho, Volkswagen, Droom, and many more. We are a group of people who just could not leave our college life behind; Auriga was founded on the desire to keep working together with friends and enjoy an extended college life. Who hasn’t dreamt of working with friends for a lifetime? Come join in!
Our website: https://aurigait.com/
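Editor’s illustration (not part of the posting): one common way to automate the drift detection named above is a Population Stability Index (PSI) check over a model’s score distribution. NumPy is used as documented; the window data, bin count, and 0.2 threshold are assumptions, not the employer’s method.

```python
# Illustrative drift check: compare recent model confidence scores against a
# reference window using PSI. The 0.2 threshold is a common rule of thumb.
import numpy as np

def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two score distributions."""
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    # Clip so out-of-range values land in the end bins.
    ref = np.clip(reference, edges[0], edges[-1])
    cur = np.clip(current, edges[0], edges[-1])
    ref_frac = np.histogram(ref, edges)[0] / len(ref)
    cur_frac = np.histogram(cur, edges)[0] / len(cur)
    ref_frac = np.clip(ref_frac, 1e-6, None)   # avoid log(0) on empty bins
    cur_frac = np.clip(cur_frac, 1e-6, None)
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

def should_retrain(reference: np.ndarray, current: np.ndarray) -> bool:
    return psi(reference, current) > 0.2       # assumed trigger threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.beta(8, 2, 10_000)   # healthy confidence scores
    cur = rng.beta(4, 2, 2_000)    # shifted distribution -> drift
    print("retrain?", should_retrain(ref, cur))
```

In a pipeline, a `should_retrain` result would typically raise an alert or kick off a retraining job rather than print.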
Posted 2 months ago
6.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Azure Virtual Desktop SME with 6–10 years of experience in the following areas:
Good understanding of VDI technologies and Azure Virtual Desktop (AVD)
Hands-on experience deploying AVD
In-depth knowledge of Azure services: Azure AD, Azure AD DS, RBAC, Storage, Policies, Backup, Recovery Services Vault, Azure Firewall, Private Link, UDR, Security, Azure File Share, AVD autoscaling, and Azure Monitor
Knowledge of Group Policy, Active Directory, Registry settings, and security concepts
Create, customize, and manage AVD images; must also know Azure Image Builder (AIB) and Azure Compute Gallery management
Good understanding of profile management with FSLogix
Good experience deploying and managing host pools and session hosts
Good hands-on experience with Microsoft MSIX packaging and App Masking
Must have knowledge of Azure services: Storage, Azure File Share, Backup, Policies, Entra ID, Azure VNet & NSG, and Azure Monitor
Strong hands-on DevOps skills: Azure DevOps YAML pipelines and Infrastructure-as-Code (IaC) such as ARM and Bicep
Monitor the CI/CD pipelines, make basic improvements as needed, and provide support for infrastructure-related issues in the CI/CD process
Develop and maintain automation scripts using tools like PowerShell, Azure Functions, Automation Accounts, the Azure CLI, and ARM templates (see the sketch after this listing)
Troubleshooting skills for AVD and Windows issues, plus BAU support
Use Azure Monitor, Log Analytics, and other tools to gain insight into system health
Respond to incidents and outages promptly to minimize downtime
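Editor’s illustration (not part of the posting, which centers on PowerShell; Python is used here for consistency with the other sketches on this page): a BAU health-check script that shells out to the real `az vm list -d` command. The resource-group name is a placeholder.

```python
# Illustrative session-host health check: report VM power states via the
# Azure CLI. `az vm list -d` is a real command; "avd-rg" is a placeholder.
import json
import subprocess
import sys

def vm_power_states(resource_group: str) -> dict[str, str]:
    out = subprocess.run(
        ["az", "vm", "list", "-d", "-g", resource_group, "-o", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {vm["name"]: vm.get("powerState", "unknown") for vm in json.loads(out)}

if __name__ == "__main__":
    states = vm_power_states("avd-rg")   # placeholder resource group
    for name, state in sorted(states.items()):
        print(f"{name}: {state}")
    # Non-zero exit if any host is not running, so a pipeline can alert on it.
    sys.exit(0 if all(s == "VM running" for s in states.values()) else 1)
```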
Posted 2 months ago
3.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Fynd is India’s largest omnichannel platform and a multi-platform tech company specializing in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries.

What will you do at Fynd?
Build scalable and loosely coupled services to extend our Gauze medical product.
Build bulletproof API integrations with third-party APIs for various use cases (see the retry sketch after this listing).
Evolve our infrastructure and add a few more nines to our overall availability.
Have full autonomy: own your code, and decide on the technologies and tools to deliver and operate large-scale applications on GCP.

Some specific requirements
3+ years of full-stack development experience with expertise in JavaScript, React.js, Node.js, and Python.
Prior experience developing consumer-facing or medical web/app products.
Solid backend engineering experience with Node.js and Python, including exposure to web frameworks like Express or FastAPI.
Working knowledge of PostgreSQL, MongoDB, and Redis.
Good understanding of Data Structures, Algorithms, and Operating Systems.
Experience with core GCP/AWS services such as Compute Engine, Cloud Storage, SQL, autoscaling, and Memorystore.
Familiarity with distributed systems and tooling such as Kafka, Docker, Kubernetes, and Temporal.
Knowledge of Elasticsearch and full-text search.
Attention to detail and commitment to writing clean, maintainable code.
Ability to dabble in front-end codebases using HTML, CSS, and JavaScript.
Familiarity with LLMs (Gemini, OpenAI), the LangGraph framework, and MCP.
Good to have: knowledge of medical standards (FHIR, DICOMweb, Hospital Information Systems).
You love doing things efficiently. At Fynd, the work you do will have a disproportionate impact on the business. We believe in systems and processes that let us scale our impact to be larger than ourselves. You might not have experience with all the tools we use, but you can learn them given guidance and resources.

What do we offer?
Growth
Growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets, and brilliant people to grow even further. We teach, groom, and nurture our people to become leaders. You get to grow with a company that is growing exponentially.
Flex University
We help you upskill by organising in-house courses on important subjects.
Learning Wallet: you can also take an external course to upskill and grow; we reimburse it for you.
Culture
Community and team-building activities.
Weekly, quarterly, and annual events/parties.
Wellness
Mediclaim policy for you + parents + spouse + kids.
Access to an experienced therapist for better mental health, productivity, and work-life balance.
We work from the office 5 days a week to promote collaboration and teamwork. Join us to make an impact in an engaging, in-person environment!
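Editor’s illustration (not Fynd’s code): the “bulletproof API integration” item above usually comes down to bounded retries with exponential backoff and jitter. The `requests` library is used as documented; the endpoint, attempt count, and delays are assumptions.

```python
# Illustrative third-party API call with bounded retries, exponential backoff,
# and jitter. The URL below is a placeholder test endpoint.
import random
import time
import requests

RETRYABLE = {429, 500, 502, 503, 504}

def get_with_retries(url: str, attempts: int = 4, base_delay: float = 0.5) -> requests.Response:
    for attempt in range(attempts):
        try:
            resp = requests.get(url, timeout=5)    # always bound the wait
            if resp.status_code not in RETRYABLE:
                resp.raise_for_status()            # surface other 4xx immediately
                return resp
        except (requests.ConnectionError, requests.Timeout):
            pass                                   # network blips are retryable
        if attempt < attempts - 1:
            # Exponential backoff with jitter avoids synchronized retry storms.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
    raise RuntimeError(f"GET {url} failed after {attempts} attempts")

if __name__ == "__main__":
    print(get_with_retries("https://httpbin.org/status/200").status_code)
```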
Posted 2 months ago
3.0 years
6 Lacs
Mumbai
On-site
Fynd is India’s largest omnichannel platform and a multi-platform tech company specializing in retail technology and products in AI, ML, big data, image editing, and the learning space. It provides a unified platform for businesses to seamlessly manage online and offline sales, store operations, inventory, and customer engagement. Serving over 2,300 brands, Fynd is at the forefront of retail technology, transforming customer experiences and business processes across various industries.

What will you do at Fynd?
Build scalable and loosely coupled services to extend our Gauze medical product.
Build bulletproof API integrations with third-party APIs for various use cases.
Evolve our infrastructure and add a few more nines to our overall availability.
Have full autonomy: own your code, and decide on the technologies and tools to deliver and operate large-scale applications on GCP.

Some specific requirements
3+ years of full-stack development experience with expertise in JavaScript, React.js, Node.js, and Python.
Prior experience developing consumer-facing or medical web/app products.
Solid backend engineering experience with Node.js and Python, including exposure to web frameworks like Express or FastAPI.
Working knowledge of PostgreSQL, MongoDB, and Redis (see the cache-aside sketch after this listing).
Good understanding of Data Structures, Algorithms, and Operating Systems.
Experience with core GCP/AWS services such as Compute Engine, Cloud Storage, SQL, autoscaling, and Memorystore.
Familiarity with distributed systems and tooling such as Kafka, Docker, Kubernetes, and Temporal.
Knowledge of Elasticsearch and full-text search.
Attention to detail and commitment to writing clean, maintainable code.
Ability to dabble in front-end codebases using HTML, CSS, and JavaScript.
Familiarity with LLMs (Gemini, OpenAI), the LangGraph framework, and MCP.
Good to have: knowledge of medical standards (FHIR, DICOMweb, Hospital Information Systems).
You love doing things efficiently. At Fynd, the work you do will have a disproportionate impact on the business. We believe in systems and processes that let us scale our impact to be larger than ourselves. You might not have experience with all the tools we use, but you can learn them given guidance and resources.

What do we offer?
Growth
Growth knows no bounds, as we foster an environment that encourages creativity, embraces challenges, and cultivates a culture of continuous expansion. We are looking at new product lines, international markets, and brilliant people to grow even further. We teach, groom, and nurture our people to become leaders. You get to grow with a company that is growing exponentially.
Flex University
We help you upskill by organising in-house courses on important subjects.
Learning Wallet: you can also take an external course to upskill and grow; we reimburse it for you.
Culture
Community and team-building activities.
Weekly, quarterly, and annual events/parties.
Wellness
Mediclaim policy for you + parents + spouse + kids.
Access to an experienced therapist for better mental health, productivity, and work-life balance.
We work from the office 5 days a week to promote collaboration and teamwork. Join us to make an impact in an engaging, in-person environment!
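Editor’s illustration (not Fynd’s code): a minimal cache-aside pattern with Redis, the kind of PostgreSQL/MongoDB + Redis interplay the requirements above imply. The `redis` client calls are real; `fetch_patient`, the key scheme, and the TTL are placeholders.

```python
# Illustrative cache-aside read path: try Redis first, fall back to the
# database on a miss, then populate the cache with a TTL.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
TTL_SECONDS = 300

def fetch_patient(patient_id: str) -> dict:
    # Placeholder for a real PostgreSQL/MongoDB lookup.
    return {"id": patient_id, "name": "Jane Doe"}

def get_patient(patient_id: str) -> dict:
    key = f"patient:{patient_id}"
    cached = r.get(key)
    if cached is not None:                 # cache hit: skip the database
        return json.loads(cached)
    record = fetch_patient(patient_id)     # cache miss: read through
    r.setex(key, TTL_SECONDS, json.dumps(record))
    return record

if __name__ == "__main__":
    print(get_patient("p-123"))
```

The TTL bounds staleness; an alternative is explicit invalidation on writes, which trades simplicity for freshness.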
Posted 2 months ago
5.0 years
0 Lacs
Surat, Gujarat, India
On-site
Position: Lead Software Engineer

✅ Key Responsibilities

🚀 Architecture & System Design
· Define scalable, secure, and modular architectures.
· Implement high-availability patterns (circuit breakers, autoscaling, load balancing); a circuit-breaker sketch follows this listing.
· Enforce OWASP best practices, role-based access, and GDPR/PIPL compliance.

💻 Full-Stack Development
· Oversee React Native & React.js codebases; mentor on state management (Redux/MobX).
· Architect backend services with Node.js/Express; manage real-time layers (WebSocket, Socket.io).
· Integrate third-party SDKs (streaming, ads, offerwalls, blockchain).

📈 DevOps & Reliability
· Own CI/CD pipelines and Infrastructure-as-Code (Terraform/Kubernetes).
· Drive observability (Grafana, Prometheus, ELK); implement SLOs and alerts.
· Conduct load testing, capacity planning, and performance optimization.

👥 Team Leadership & Delivery
· Mentor 5–10 engineers; lead sprint planning, code reviews, and Agile ceremonies.
· Collaborate with cross-functional teams to translate roadmaps into deliverables.
· Ensure on-time feature delivery and manage risk logs.

🔍 Innovation & Continuous Improvement
· Evaluate emerging tech (e.g., Layer-2 blockchain, edge computing).
· Improve development velocity through tooling (linters, static analysis) and process optimization.

📌 What You’ll Need
· 5+ years in full-stack development, 2+ years in a lead role
· Proficiency in React.js, React Native, Node.js, Express, AWS, and Kubernetes
· Strong grasp of database systems (PostgreSQL, Redis, MongoDB)
· Excellent communication and problem-solving skills
· Startup or gaming experience is a bonus

🎯 Bonus Skills
· Blockchain (Solidity, smart contracts), streaming protocols (RTMP/HLS)
· Experience with analytics tools (Redshift, Metabase, Looker)
· Prior exposure to monetization SDKs (PubScale, AdX)
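Editor’s illustration (a generic pattern, not this company’s implementation): the circuit breaker named under Architecture & System Design fails fast while a dependency is down, then probes it again after a cool-off. All names and thresholds here are assumptions.

```python
# Illustrative circuit breaker: after `max_failures` consecutive errors the
# call is short-circuited until `reset_timeout` elapses, then one trial call
# (the "half-open" state) decides whether the circuit closes again.
import time
from typing import Callable, TypeVar

T = TypeVar("T")

class CircuitBreaker:
    def __init__(self, max_failures: int = 5, reset_timeout: float = 30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at: float | None = None

    def call(self, fn: Callable[[], T]) -> T:
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None                  # half-open: allow one trial
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0                          # success closes the circuit
        return result
```

A caller wraps any flaky downstream call, e.g. `breaker.call(lambda: requests.get(url, timeout=2))`, so a failing dependency degrades gracefully instead of stalling every request.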
Posted 2 months ago
6.0 years
0 Lacs
Bengaluru, Karnataka
On-site
GeekyAnts India Pvt Ltd (Services, 251–500 employees, Bengaluru, Karnataka)

About company
GeekyAnts is a design and development studio that specializes in building web and mobile solutions that drive innovation and transform industries and lives. They hold expertise in state-of-the-art technologies like React, React Native, Flutter, Angular, Vue, NodeJS, Python, Svelte, and more. GeekyAnts has worked with around 500+ clients across the globe, delivering tailored solutions to a wide array of industries like Healthcare, Finance, Education, Banking, Gaming, Manufacturing, Real Estate, and more. They are trusted tech partners of some of the world’s top corporate giants and have helped small to mid-sized companies realize their vision and transform digitally. They have also been a registered service supplier for Google LLC since 2017. They provide services ranging from web & mobile development, UI/UX design, business analysis, product management, DevOps, QA, API development, and delivery & support. In addition, GeekyAnts is the brains behind React Native’s most famous UI library, NativeBase (15,000+ GitHub stars), as well as BuilderX, Vue Native, Flutter Starter, apibeats, and numerous other open-source contributions. GeekyAnts has offices in India (Bangalore) and the UK (London).

Senior Software Engineer III (5 vacancies; 5+ years experience; Bengaluru, Karnataka; salary not disclosed)

Job Description
We are seeking an experienced Senior Software Engineer III who thrives in a tech-agnostic, cloud-first environment and can architect scalable backend systems with confidence and clarity. Your expertise in backend engineering principles, system design, and performance optimization at scale will be the cornerstone of your success—not your familiarity with any one programming language. This role requires strategic thinking, hands-on backend capability, and the ability to make technology decisions aligned with business goals. If you choose the right tools for the problem—based not on preference but on what scales—we want to talk to you.

Responsibilities
Architect and lead the design of scalable, resilient, and cloud-native backend systems
Optimize backend systems for performance, reliability, and cost-efficiency, particularly in high-scale environments
Make cloud-first design decisions, leveraging the best of AWS and modern cloud architectures (e.g., serverless, container orchestration, managed services)
Collaborate with product, design, and engineering teams to translate business goals into technical solutions
Evaluate and introduce technologies based on project needs—not personal preferences
Champion engineering excellence, including clean code, reusable design patterns, and robust APIs
Conduct technical design and code reviews, mentor developers, and drive continuous improvement
Communicate effectively across engineering and non-engineering stakeholders, breaking down complex ideas simply

Required Skills
6+ years of backend development experience with strong system-level thinking
Proven track record in system design, architecture decisions, and design trade-offs
Strong understanding of performance tuning, distributed systems, data modeling, and API design
Experience working across multiple tech stacks or programming languages
Ability to quickly adapt to any backend framework, language, or ecosystem
Deep experience with AWS and cloud-native architecture principles (e.g., stateless services, managed databases, autoscaling, serverless)
Proficiency in DevOps, CI/CD pipelines, and containerization (e.g., Docker, ECS/EKS)
Experience optimizing systems for throughput, latency, and scalability at production scale (see the concurrency-cap sketch after this listing)
Familiarity with Spring Boot is a strong plus

Educational Qualifications
B.Tech / B.E. degree in Computer Science

Rounds Description
One-to-one in-person interview: a direct conversation with the HR team at GeekyAnts for communication assessment and review.
HR discussion.
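Editor’s illustration (not GeekyAnts code): one everyday throughput/latency lever is capping in-flight calls to a downstream dependency so load spikes queue instead of overwhelming it. The asyncio APIs are real; `fetch`, its simulated latency, and the cap value are stand-ins.

```python
# Illustrative concurrency cap with an asyncio semaphore: at most
# MAX_IN_FLIGHT simulated downstream calls run at once; the rest wait.
import asyncio
import random

MAX_IN_FLIGHT = 10
sem = asyncio.Semaphore(MAX_IN_FLIGHT)

async def fetch(i: int) -> str:
    await asyncio.sleep(random.uniform(0.05, 0.2))   # simulated downstream call
    return f"result-{i}"

async def bounded_fetch(i: int) -> str:
    async with sem:                                  # acquire a slot, release on exit
        return await fetch(i)

async def main() -> None:
    results = await asyncio.gather(*(bounded_fetch(i) for i in range(100)))
    print(len(results), "calls completed")

if __name__ == "__main__":
    asyncio.run(main())
```

Raising the cap improves throughput until the dependency saturates; lowering it protects tail latency. Tuning that knob is the trade-off the requirement above refers to.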
Posted 2 months ago
5.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Senior SRE (Engineering & Reliability)

Job Summary:
We are seeking an experienced and dynamic Site Reliability Engineering (SRE) lead to oversee the reliability, scalability, and performance of our critical systems. As a Senior SRE, you will play a pivotal role in establishing and implementing SRE practices, leading a team of engineers, and driving automation, monitoring, and incident-response strategies. This position combines software engineering and systems engineering expertise to build and maintain high-performing, reliable systems.

Experience: 5–10 years

Key Responsibilities:

Reliability & Performance:
• Lead efforts to maintain high availability and reliability of critical services.
• Define and monitor SLIs, SLOs, and SLAs to ensure business requirements are met (see the metrics sketch after this listing).
• Proactively identify and resolve performance bottlenecks and system inefficiencies.

Incident Management & Response:
• Establish and improve incident management processes and on-call rotations.
• Lead incident response and root cause analysis for high-priority outages.
• Drive post-incident reviews and ensure actionable insights are implemented.

Automation & Tooling:
• Develop and implement automated solutions to reduce manual operational tasks.
• Enhance system observability through metrics, logging, and distributed tracing tools (e.g., Prometheus, Grafana, Elastic APM).
• Optimize CI/CD pipelines for seamless deployments.

Collaboration:
• Partner with software engineering teams to improve the reliability of applications and infrastructure.
• Work closely with product and engineering teams to design scalable and robust systems.
• Ensure seamless integration of monitoring and alerting systems across teams.

Leadership & Team Building:
• Manage, mentor, and grow a team of SREs.
• Promote SRE best practices and foster a culture of reliability and performance across the organization.
• Drive performance reviews, skills development, and career progression for team members.

Capacity Planning & Cost Optimization:
• Perform capacity planning and implement autoscaling solutions to handle traffic spikes.
• Optimize infrastructure and cloud costs while maintaining reliability and performance.

Skills & Qualifications:

Required Skills:
• Technical expertise:
  o Experience with cloud platforms (AWS / Azure / GCP) and Kubernetes.
  o Hands-on knowledge of infrastructure-as-code tools like Terraform, Helm, or Ansible.
  o Proficiency in Java.
  o Expertise in distributed systems, databases, and load balancing.
• Monitoring & observability:
  o Proficient with tools like Prometheus, Grafana, Elastic APM, or New Relic.
  o Understanding of metrics-driven approaches to system monitoring and alerting.
• Automation & CI/CD:
  o Hands-on experience with CI/CD pipelines (e.g., Jenkins, Azure Pipelines).
  o Skilled in automation frameworks and tools for infrastructure and application deployments.
• Incident management:
  o Proven track record of handling incidents, running post-mortems, and implementing solutions to prevent recurrence.

Leadership & Communication Skills:
• Strong people-management and leadership skills with the ability to inspire and motivate teams.
• Excellent problem-solving and decision-making skills.
• Clear and concise communication, with the ability to translate technical concepts for non-technical stakeholders.

Preferred Qualifications:
• Experience with database optimization, Kafka, or other messaging systems.
• Knowledge of autoscaling techniques.
• Previous experience in an SRE, DevOps, or infrastructure-engineering leadership role.
• Understanding of compliance and security best practices in distributed systems.
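Editor’s illustration (not the employer’s code): SLIs like latency and error rate start as instrumented metrics. This sketch uses the real prometheus_client library; the metric names, simulated workload, and failure rate are assumptions.

```python
# Illustrative SLI instrumentation: expose request latency and error counts
# on an HTTP endpoint that a Prometheus server can scrape.
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUEST_LATENCY = Histogram(
    "app_request_duration_seconds", "Request latency in seconds"
)
REQUEST_ERRORS = Counter("app_request_errors_total", "Total failed requests")

def handle_request() -> None:
    with REQUEST_LATENCY.time():               # observe wall-clock duration
        time.sleep(random.uniform(0.01, 0.1))  # simulated work
        if random.random() < 0.02:             # simulated 2% failure rate
            REQUEST_ERRORS.inc()
            raise RuntimeError("simulated failure")

if __name__ == "__main__":
    start_http_server(8000)                    # metrics at http://localhost:8000/
    while True:
        try:
            handle_request()
        except RuntimeError:
            pass
```

An SLO such as “99% of requests complete under 300 ms” can then be evaluated in Prometheus with a histogram_quantile query over `app_request_duration_seconds_bucket`, and Grafana alerts can fire on the error-budget burn rate.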
Posted 2 months ago