
2333 Latency Jobs - Page 50

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Gurugram, Haryana, India

On-site


Build the AI Reasoning Layer for Education

We're reimagining the core intelligence layer for education, tackling one of the most ambitious challenges in AI: subjective assessment automation and ultra-personalized learning at scale. This isn't just another LLM application. We're building a first-principles AI reasoning engine combining multi-modal learning, dynamic knowledge graphs, and real-time content generation. The goal? To eliminate billions of wasted hours in manual evaluation and create an AI that understands how humans learn. As a Founding AI Engineer, you'll define and build this system from the ground up. You'll work on problems few have attempted, at the bleeding edge of LLMs, computer vision, and generative reasoning.

What You'll Be Solving:
- Handwriting OCR at near-human accuracy: How can we push vision-language models to understand messy, real-world input from students?
- Real-time learner knowledge modeling: Can AI track and reason about what someone knows, and how they're learning, moment to moment?
- Generative AI that teaches: How do we create dynamic video lessons that evolve in sync with a learner's knowledge state?
- Scalable inference infrastructure: How do we optimize LLMs and multimodal models to support millions of learners in real time?

What You'll Be Building:
- Architect, deploy, and optimize multi-modal AI systems: OCR, knowledge-state inference, adaptive content generation.
- Build reasoning engines that combine LLMs, retrieval, and learner data to dynamically guide learning.
- Fine-tune foundation models (LLMs, VLMs) and implement cutting-edge techniques (quantization, LoRA, RAG, etc.).
- Design production-grade AI systems: modular, scalable, and optimized for inference at global scale.
- Lead experiments at the frontier of AI research, publishing if desired.

Tech Stack & Skills

Must-Have:
- Deep expertise in AI/ML, with a focus on LLMs, multi-modal learning, and computer vision.
- Hands-on experience with OCR fine-tuning and handwritten text recognition.
- Strong proficiency in AI frameworks: PyTorch, TensorFlow, Hugging Face, OpenCV.
- Experience in optimizing AI for production: LLM quantization, retrieval augmentation, and MLOps.
- Experience with knowledge graphs and AI-driven reasoning systems.

Nice-to-Have:
- Experience with Diffusion Models, Transformers, and Graph Neural Networks (GNNs).
- Expertise in vector databases, real-time inference pipelines, and low-latency AI deployment.
- Prior experience in ed-tech, adaptive learning AI, or multi-modal content generation.

Why This Role Is Rare:
- Define the AI stack for a category-defining product at inception.
- Work with deep ownership across research, engineering, and infrastructure.
- Founding-level equity and influence in a high-growth company solving a $100B+ problem.
- Balance of cutting-edge research and real-world deployment.
- Solve problems that matter, not just academically, but in people's lives.

Who This Role Is For:
This is for builders at the edge: engineers who want to architect, not just optimize, and researchers who want their ideas shipped. If you want to:
- Push LLMs, CV, and multimodal models to their performance limits.
- Build AI that learns, reasons, and adapts like a human tutor.
- Shape the foundational AI layer for education.
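The retrieval piece of the reasoning engine this posting describes follows a common shape: embed documents, rank them against a query by cosine similarity, and hand the top hits to a model as context. A minimal stdlib-only sketch of that ranking step; the bag-of-words "embedding" is a toy stand-in for a trained encoder, and the sample documents are invented:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a trained encoder.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank every document against the query and keep the k best matches.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "photosynthesis converts light into chemical energy",
    "newton's second law relates force mass and acceleration",
    "mitosis is how cells divide",
]
print(retrieve("what force relates mass and acceleration", docs, k=1))
```

In production the ranking is delegated to a vector database, but the scoring semantics are the same.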

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

India

Remote


Job Title: Senior Java Fullstack Developer
📍 Location: Remote (India)
🕒 Employment Type: Full-Time
💼 Experience: 8+ years
📅 Start Date: ASAP
💻 Department: Engineering / Technology

Role Overview:
We are seeking a Senior Java Fullstack Developer with strong hands-on experience in Java (17/21) and modern microservices-based architectures. You'll be working on complex, high-availability trading and financial systems with a focus on scalability, performance, and security. The ideal candidate has experience across the backend and frontend stack, with solid knowledge of React, event-driven architecture, and cloud-native deployment using tools like Docker, Kubernetes, and AWS. Domain knowledge in Trading, Post Trade, Settlement, or Market Risk is a major advantage and will help you contribute meaningfully to business-critical applications.

Key Responsibilities:
- Design, develop, and maintain scalable microservices using Java 17/21
- Build robust RESTful APIs and event-driven systems using Kafka or similar technologies
- Contribute to frontend development using React and JavaScript/TypeScript
- Optimize high-throughput, low-latency systems for performance and reliability
- Collaborate with QA, DevOps, and product teams to deliver features in an agile environment
- Ensure infrastructure-as-code and containerization best practices using Docker, Kubernetes, and AWS
- Develop and manage CI/CD pipelines using GitLab, Jenkins, and Gradle
- Participate in code reviews, technical design sessions, and system architecture planning
- Maintain a clean, modular, and well-tested codebase using modern engineering practices

Required Skills & Experience:
- 8+ years of professional experience in software development
- Strong expertise in Java (17/21) and modern Java frameworks (Spring Boot, etc.)
- Proven experience with microservices and event-driven architecture
- Hands-on experience with RESTful APIs and NoSQL databases (MongoDB, DynamoDB, etc.)
- Proficient with React, JavaScript, and component-based front-end architectures
- Strong experience with Gradle, Jenkins, GitLab CI/CD pipelines
- Familiar with Docker, Kubernetes (k8s), k9s, and deploying high-availability services
- Working knowledge of cloud platforms, especially AWS (ECS, EKS, S3, etc.)
- Excellent debugging, performance tuning, and system design skills
- Comfortable working in Agile/Scrum environments with distributed teams

Preferred Qualifications:
- Prior experience in Trading, Post Trade, Settlement, or Market Risk platforms
- Understanding of financial regulations, compliance needs, and low-latency trading infrastructure
- Exposure to real-time messaging systems (Kafka, RabbitMQ, etc.)
- Experience working in mission-critical, high-availability environments

What We Offer:
- Fully remote opportunity with flexible working hours
- A chance to work on high-impact, global financial systems
- Collaborative and technically strong team culture
- Ongoing learning and professional development opportunities
- Competitive salary and performance incentives
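The event-driven pattern this posting asks for (Kafka-style topics feeding downstream consumers) reduces to publishers fanning events out to subscribers by topic. A toy in-memory sketch, written in Python purely for illustration since the role itself is Java; the topic name and event fields are hypothetical:

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Tiny in-memory stand-in for a broker like Kafka: topics fan out to subscribers."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Deliver the event to every handler registered for the topic.
        for handler in self._subs[topic]:
            handler(event)

bus = EventBus()
settled = []
bus.subscribe("trade.settled", settled.append)   # e.g. a post-trade consumer
bus.publish("trade.settled", {"trade_id": "T1", "qty": 100})
print(settled)
```

A real broker adds the parts a sketch cannot: durable partitioned logs, consumer offsets, and delivery across processes.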

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Description and Requirements

Position Summary:
The SQL Database Administrator is responsible for the design, implementation, and support of database systems for applications across the enterprise. The Database Administrator is part of the end-to-end database delivery team, working and collaborating with Application Development, Infrastructure Engineering, and Operations Support teams to deliver and support secure, high-performing, and optimized database solutions. The Database Administrator specializes in the SQL database platform.

Job Responsibilities:
- Manages design, distribution, performance, replication, security, availability, and access requirements for large and complex SQL & Sybase databases.
- Designs and develops physical layers of databases to support various application needs; implements backup, recovery, archiving, conversion strategies, and performance tuning; manages job scheduling, application release, database change, and compliance.
- Identifies and resolves problems using structured tools and techniques.
- Provides technical assistance and mentoring to staff in all aspects of database management; consults and advises application development teams on database security, query optimization, and performance.
- Writes scripts for automating routine DBA tasks and documents database maintenance processing flows per standards.
- Implements industry best practices while performing database administration tasks.
- Works in an Agile model with an understanding of Agile concepts.
- Collaborates with development teams to provide and implement new features.
- Debugs production issues by analyzing logs directly and using tools like Splunk.
- Begins tackling organizational impediments.
- Learns new technologies based on demand and helps team members by coaching and assisting.

Education, Technical Skills & Other Critical Requirements

Education:
Bachelor's degree in Computer Science, Information Systems, or another related field, with 3+ years of IT and infrastructure engineering work experience.

Experience (in years):
3+ years total IT experience and 2+ years relevant experience in SQL Server + Sybase databases.

Technical Skills:
- Database Management: Basic knowledge of managing and administering SQL Server, Azure SQL Server, and Sybase databases, ensuring high availability and optimal performance.
- Data Infrastructure & Security: Basic knowledge of designing and implementing robust data infrastructure solutions, with a strong focus on data security and compliance.
- Backup & Recovery: Skilled in developing and executing comprehensive backup and recovery strategies to safeguard critical data and ensure business continuity.
- Performance Tuning & Optimization: Adept at performance tuning and optimization of databases, leveraging advanced techniques to enhance system efficiency and reduce latency.
- Cloud Computing & Scripting: Basic knowledge of cloud computing environments and proficient in operating system scripting, enabling seamless integration and automation of database operations.
- Management of database elements, including creation, alteration, deletion, and copying of schemas, databases, tables, views, indexes, stored procedures, triggers, and declarative integrity constraints.
- Basic analytical skills to improve application performance.
- Basic knowledge of database performance tuning, backup & recovery, infrastructure as code, and observability tools (Elastic).
- Strong knowledge of ITSM processes and tools (ServiceNow).
- Ability to work 24x7 rotational shifts to support the database platforms.

Other Critical Requirements:
- Automation tools and programming such as Ansible and Python are preferable.
- Excellent analytical and problem-solving skills.
- Excellent written and oral communication skills, including the ability to clearly communicate and articulate technical and functional issues, with conclusions and recommendations, to stakeholders.
- Demonstrated ability to work independently and in a team environment.

About MetLife:
Recognized on Fortune magazine's list of the 2024 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
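The "scripts for automating routine DBA tasks" responsibility often amounts to rendering and scheduling maintenance statements per database. A small illustrative sketch that generates full-backup T-SQL for a list of databases; the database names, backup directory, and backup options are examples, not a site standard:

```python
from datetime import datetime, timezone

def backup_command(db: str, backup_dir: str = r"E:\Backups") -> str:
    """Render a timestamped full-backup T-SQL statement for one database.

    COMPRESSION shrinks the backup file, CHECKSUM validates pages as they
    are written, and INIT overwrites any existing backup set in the file.
    """
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M")
    return (
        f"BACKUP DATABASE [{db}] "
        f"TO DISK = N'{backup_dir}\\{db}_{stamp}.bak' "
        f"WITH COMPRESSION, CHECKSUM, INIT;"
    )

for db in ["Sales", "Claims"]:
    print(backup_command(db))
```

In practice a script like this would feed the rendered statements to `sqlcmd` or an agent job rather than printing them.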

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Description and Requirements

Position Summary:
The SQL Database Administrator is responsible for the design, implementation, and support of database systems for applications across the enterprise. The Database Administrator is part of the end-to-end database delivery team, working and collaborating with Application Development, Infrastructure Engineering, and Operations Support teams to deliver and support secure, high-performing, and optimized database solutions. The Database Administrator specializes in the SQL database platform.

Job Responsibilities:
- Manages design, distribution, performance, replication, security, availability, and access requirements for large and complex SQL & Sybase databases.
- Designs and develops physical layers of databases to support various application needs; implements backup, recovery, archiving, conversion strategies, and performance tuning; manages job scheduling, application release, database change, and compliance.
- Identifies and resolves problems using structured tools and techniques.
- Provides technical assistance and mentoring to staff in all aspects of database management; consults and advises application development teams on database security, query optimization, and performance.
- Writes scripts for automating routine DBA tasks and documents database maintenance processing flows per standards.
- Implements industry best practices while performing database administration tasks.
- Works in an Agile model with an understanding of Agile concepts.
- Collaborates with development teams to provide and implement new features.
- Debugs production issues by analyzing logs directly and using tools like Splunk.
- Begins tackling organizational impediments.
- Learns new technologies based on demand and helps team members by coaching and assisting.

Education, Technical Skills & Other Critical Requirements

Education:
Bachelor's degree in Computer Science, Information Systems, or another related field, with 7+ years of IT and infrastructure engineering work experience.

Experience (in years):
7+ years total IT experience and 4+ years relevant experience in SQL Server + Sybase databases.

Technical Skills:
- Database Management: Proficient in managing and administering SQL Server, Azure SQL Server, and Sybase databases, ensuring high availability and optimal performance.
- Data Infrastructure & Security: Expertise in designing and implementing robust data infrastructure solutions, with a strong focus on data security and compliance.
- Backup & Recovery: Skilled in developing and executing comprehensive backup and recovery strategies to safeguard critical data and ensure business continuity.
- Performance Tuning & Optimization: Adept at performance tuning and optimization of databases, leveraging advanced techniques to enhance system efficiency and reduce latency.
- Cloud Computing & Scripting: Experienced in cloud computing environments and proficient in operating system scripting, enabling seamless integration and automation of database operations.
- Management of database elements, including creation, alteration, deletion, and copying of schemas, databases, tables, views, indexes, stored procedures, triggers, and declarative integrity constraints.
- Strong database analytical skills to improve application performance.
- Strong knowledge of ITSM processes and tools (ServiceNow).
- Ability to work 24x7 rotational shifts to support the database and Splunk platforms.

Other Critical Requirements:
- Automation tools and programming such as Ansible and Python
- Excellent analytical and problem-solving skills
- Experience managing geographically distributed and culturally diverse workgroups, with strong team management, leadership, and coaching skills
- Excellent written and oral communication skills, including the ability to clearly communicate and articulate technical and functional issues, with conclusions and recommendations, to stakeholders
- Prior experience handling stateside and offshore stakeholders
- Experience in creating and delivering business presentations
- Demonstrated ability to work independently and in a team environment

About MetLife:
Recognized on Fortune magazine's list of the 2024 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits, and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote


What you bring to the table (Core Requirements):
- 5+ years of Java development within an enterprise-level domain
- Java 8 (11 preferred) features like lambda expressions, Stream API, CompletableFuture, etc.
- Skilled with low-latency, high-volume application development
- Expertise in CI/CD and shift-left testing
- Nice to have: Golang and/or Rust
- Experienced with asynchronous programming, multithreading, implementing APIs, and microservices, including Spring Boot
- Proficiency with SQL
- Experience with data sourcing, data modeling, and data enrichment
- Experience with systems design and CI/CD pipelines
- Cloud computing, preferably AWS
- Solid verbal and written communication and consultant/client-facing skills are a must. As a true consultant, you are a self-starter who takes initiative.

Solid experience with at least two (preferably more) of the following:
- Kafka (core concepts, replication & reliability, Kafka internals, infrastructure & control, data retention and durability)
- MongoDB
- Sonar
- Jenkins
- Oracle DB, Sybase IQ, DB2
- Drools or any rules engine experience
- CMS tools like Adobe AEM
- Search tools like Algolia, ElasticSearch, or Solr
- Spark

What makes you stand out from the pack:
- Payments or Asset/Wealth Management experience
- Mature server development and knowledge of frameworks, preferably Spring
- Enterprise experience building enterprise products, long-term tenure at enterprise-level organizations, experience working with a remote team, and being an avid practitioner of your craft
- You have pushed code into production and have deployed multiple products to market, but are missing the visibility of a small team within a large enterprise technology environment.
- You enjoy coaching junior engineers, but want to remain hands-on with code.
- Open to hybrid work: 3 days per week from the office
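The asynchronous-programming requirement above is about one core idea: issue independent I/O-bound calls concurrently so total wall time tracks the slowest call, not the sum. The same win `CompletableFuture` gives in Java, sketched here with Python's `asyncio` purely for illustration; the data-source names and delays are invented:

```python
import asyncio

async def fetch(source: str, delay: float) -> str:
    # Stand-in for an I/O-bound call (DB query, REST request, Kafka poll).
    await asyncio.sleep(delay)
    return f"{source}:ok"

async def main() -> list[str]:
    # Launch both calls concurrently and wait for both; results keep
    # argument order regardless of which call finishes first.
    return await asyncio.gather(
        fetch("positions", 0.01),
        fetch("prices", 0.02),
    )

results = asyncio.run(main())
print(results)  # ['positions:ok', 'prices:ok']
```

Running the two fetches sequentially would take the sum of the delays; `gather` overlaps them.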

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote


Propal AI is building the next generation of human-like voice agents for real-time conversations using LLMs, speech models, and intelligent backend workflows. As a backend intern, you'll work closely with our core engineering team to contribute to the development of scalable APIs, AI agent infrastructure, and LLM-based systems. This internship is ideal for someone passionate about large language models, APIs, vector databases, and real-world applications of AI.

Duration: 3–6 months
Ideal for: Final-year students or recent graduates in CS/IT or related fields

What You'll Work On:
- Assist in building RESTful APIs using Python (FastAPI preferred)
- Develop and test components of our Retrieval-Augmented Generation (RAG) pipelines
- Write clean, modular code for agent workflows and backend systems
- Work with vector databases (FAISS/Pinecone/Weaviate) for knowledge search
- Integrate STT, LLM, and TTS APIs into backend pipelines
- Optimize the performance of data fetching, token usage, and latency in LLM queries
- Collaborate in sprint planning, standups, and Git-based workflows

What We're Looking For:
- Solid Python programming skills and knowledge of OOP
- Familiarity with FastAPI, Flask, or Django
- Understanding of how APIs work (GET, POST, status codes, JSON)
- Basic knowledge of databases (SQL or NoSQL)
- Curiosity about LLMs, LangChain, or RAG-based systems
- Git and version control fundamentals

Bonus Points For:
- Past projects or internships in AI, NLP, or backend development
- Exposure to LangChain, OpenAI, HuggingFace, or vector DBs
- Working knowledge of tools like Docker or Postman
- Building bots, tools, or backend services independently

You'll Thrive Here If You:
- Love experimenting with AI and building real-world systems
- Want to go beyond theory and contribute to production-grade code
- Are self-driven, proactive, and eager to learn in a fast-paced environment
- Communicate ideas clearly and seek continuous feedback

What You Get:
- Mentorship from a world-class AI and engineering team
- Stipend + certificate + PPO opportunity for high performers
- Hands-on experience with state-of-the-art AI tools and frameworks
- Impactful work on a live product used by businesses
- Flexible, remote-friendly culture
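One of the first RAG-pipeline components an intern typically touches is document chunking: splitting text into overlapping windows before it goes into a vector store, so retrieval does not lose context at chunk boundaries. A stdlib-only sketch; the window sizes are arbitrary choices for illustration:

```python
def chunk_text(text: str, size: int = 8, overlap: int = 2) -> list[str]:
    """Split text into word windows of `size` words that overlap by
    `overlap` words. Overlap preserves context across chunk boundaries."""
    words = text.split()
    step = size - overlap
    return [
        " ".join(words[i:i + size])
        for i in range(0, max(len(words) - overlap, 1), step)
    ]

doc = "a b c d e f g h i j"
for chunk in chunk_text(doc, size=4, overlap=1):
    print(chunk)
```

Production chunkers usually split on tokens or sentences rather than whitespace words, but the sliding-window idea is the same.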

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

On-site


Responsibilities:
- Design, develop, and deploy NLP systems using advanced LLM architectures (e.g., GPT, BERT, LLaMA, Mistral) tailored for real-world applications such as chatbots, document summarization, Q&A systems, and more.
- Implement and optimize RAG pipelines, combining LLMs with vector search engines (e.g., FAISS, Weaviate, Pinecone) to create context-aware, knowledge-grounded responses.
- Integrate external knowledge sources, including databases, APIs, and document repositories, to enrich language models with real-time or domain-specific information.
- Fine-tune and evaluate pre-trained LLMs, leveraging techniques like prompt engineering, LoRA, PEFT, and transfer learning to customize model behavior.
- Collaborate with data engineers and MLOps teams to ensure scalable deployment and monitoring of AI services in cloud environments (e.g., AWS, GCP, Azure).
- Build robust APIs and backend services to serve NLP/RAG models efficiently and securely.
- Conduct rigorous performance evaluation and model validation, including accuracy, latency, bias/fairness, and explainability (XAI).
- Stay current with advancements in AI research, particularly in generative AI, retrieval systems, prompt tuning, and hybrid modeling strategies.
- Participate in code reviews, documentation, and cross-functional team planning to ensure clean and maintainable code.
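The "knowledge-grounded responses" step of a RAG pipeline ultimately comes down to assembling a prompt that stuffs retrieved passages into the model's context and instructs it to answer only from them. A minimal sketch; the prompt template, question, and passage texts are invented for illustration:

```python
def build_rag_prompt(question: str, passages: list[str]) -> str:
    """Assemble a knowledge-grounded prompt: number each retrieved passage,
    then instruct the model to answer only from that context and cite it."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using only the context below. Cite passages like [1].\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_rag_prompt(
    "When was the policy updated?",
    ["The policy was last updated in March 2024.", "Claims are filed online."],
)
print(prompt)
```

Numbering the passages is what makes citation and downstream attribution checks possible; without it, "explainability" of the answer is much harder to evaluate.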

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Company Description:
Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting, and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting, and customer obsession to accelerate our clients' businesses through designing the products and services their customers truly value.

Job Description:
Publicis Sapient is looking for a Manager / Specialist Technology (Java/Microservices) to join our team of bright thinkers and doers. You'll use your problem-solving creativity to design, architect, and develop high-end technology solutions that solve our clients' most complex and challenging problems across different industries. We are on a mission to transform the world, and you will be instrumental in shaping how we do it with your ideas, thoughts, and solutions.

Your Impact:
A hands-on solution architect who has delivered at least 3-4 large-scale projects from ground zero, with experience building large-scale, high-volume, low-latency, high-availability, and complex distributed services.

Qualifications - Your Experience & Skills:

Experience: 10-14 years

Proposal and Engineering Initiatives:
- Worked on various client-specific proposals
- Managed and grew client accounts
- Managed a large-sized team

Architecture and Design:
- Ability to identify and showcase potential solutions, and recommend the best solution based on requirements
- Manage stakeholders to drive key decisions on tools, technologies, user journeys, and overall governance
- Experience with object-oriented, SOLID, and DRY principles, reactive programming models, microservices, and event-driven solutions
- Delivered solutions on alternative architecture patterns to meet business requirements
- Understands enterprise security, compliance, and data security at the network and application layers

Languages, Frameworks, and Databases:
- Worked extensively with Java 8 and above, using concurrency, multithreaded models, blocking/non-blocking IO, lambdas, streams, generics, advanced libraries, algorithms, and data structures
- Executed database DDL, DML, and modeling; managed transactional scenarios and isolation levels; and has experience with both NoSQL and SQL-based databases
- Extensively used Spring Boot / Spring Cloud or similar frameworks to deliver complex, scalable solutions
- Worked extensively on API-based digital journeys and enabled DBT and alternative technologies to achieve the desired outcomes

Tools:
- Used build and automation tools, code quality plugins, CI/CD pipelines, and containerization platforms (Docker/Kubernetes)
- Used logging and monitoring solutions like Splunk, ELK, Grafana, etc., and implemented technical KPIs
- Extensively used application profiling tools like YourKit, VisualVM, etc.

Platforms & Cloud Services:
- Successfully delivered solutions using one of the cloud platforms, e.g. AWS/GCP/Azure/PCF
- Integrated with messaging platforms, e.g. RabbitMQ/Kafka/cloud messaging/enterprise messaging
- Applied distributed caching solutions like Redis, Memcache, etc.

Testing & Performance Engineering:
- Memory management, GC, and GC tuning
- Writing JUnit test cases, mocking with Mockito, PowerMockito, EasyMock, etc.
- BDD automation tools like Cucumber, JBehave, etc.
- Executes performance and security tests addressing non-functional requirements

Education:
Bachelor's/Master's degree in Computer Engineering, Computer Science, or a related field

Set Yourself Apart With:
- Any cloud certification
- Modern technology exposure (AI/ML, IoT, Blockchain, etc.)

A Tip from the Hiring Manager:
At Publicis Sapient, we enable our clients to thrive in next and create business value through expert strategies, customer-centric experience design, and world-class product engineering. The future of business is disruptive, transformative, and becoming digital to the core. We seek passionate technologists who are deeply skilled, bold, collaborative, and flexible, and who reimagine the way the world works to help businesses improve the daily lives of people and the way they work.

Additional Information:
- Gender-neutral policy
- 18 paid holidays throughout the year
- Generous parental leave and new-parent transition program
- Flexible work arrangements
- Employee Assistance Programs to support your wellness and well-being
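The distributed-caching bullet above (Redis, Memcache) boils down to key lookups with expiry. A toy single-process TTL cache that illustrates those semantics; a real cache server provides the same get/set-with-expiry contract across processes. The clock is injected so expiry can be demonstrated deterministically, and the key names are invented:

```python
import time

class TTLCache:
    """Toy get/set-with-expiry cache; Redis/Memcache offer this semantics
    as a shared network service rather than an in-process dict."""
    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._store = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl: float) -> None:
        self._store[key] = (value, self._clock() + ttl)

    def get(self, key, default=None):
        item = self._store.get(key)
        if item is None:
            return default
        value, expires_at = item
        if self._clock() >= expires_at:
            del self._store[key]   # lazy eviction on read
            return default
        return value

now = [0.0]
cache = TTLCache(clock=lambda: now[0])
cache.set("quote:AAPL", 189.5, ttl=5)
print(cache.get("quote:AAPL"))  # 189.5
now[0] = 6.0
print(cache.get("quote:AAPL"))  # None
```

Injecting the clock is also how such code stays unit-testable, which matters for the testing bullets in the same posting.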

Posted 2 weeks ago

Apply

3.0 - 4.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site


We are looking for a skilled Node.js Developer to join our development team. The ideal candidate should have a solid understanding of backend development, RESTful APIs, and server-side logic.

Experience required: 3-4 years
Location: Mansarovar, Jaipur
Job Type: Full-Time | On-site

Key Responsibilities:
- Develop and maintain scalable, high-performance applications using Node.js
- Design and build RESTful APIs and microservices
- Integrate third-party APIs and services
- Collaborate with front-end developers, DevOps, and product teams for end-to-end development
- Troubleshoot, debug, and upgrade existing systems
- Optimize application performance and scalability
- Stay updated with emerging technologies and best practices in the Node.js ecosystem
- Write reusable, testable, and efficient code
- Design and implement low-latency, high-availability, and performant applications
- Implement security and data protection

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 3-4 years of experience in backend development with strong proficiency in Node.js
- Experience with databases like MongoDB, PostgreSQL, or MySQL
- Knowledge of asynchronous programming and event-driven architectures
- Experience working with RESTful APIs, authentication methods (OAuth2, JWT), and WebSockets
- Familiarity with version control systems like Git
- Strong debugging and performance tuning skills
- Good understanding of data structures, algorithms, and software design principles
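The JWT authentication mentioned in the requirements rests on a simple mechanism: an HMAC signature over the base64url-encoded header and payload, so the server can verify a token without storing session state. A stdlib sketch of that HS256 signing shape, in Python purely for illustration since the role is Node.js; the secret and claims are invented, and a production service would use a maintained JWT library rather than hand-rolled code:

```python
import base64
import hashlib
import hmac
import json

def _b64(data: bytes) -> str:
    # base64url without padding, as JWTs use.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(payload: dict, secret: bytes) -> str:
    """JWT-style HS256 token: base64url(header).base64url(payload).base64url(sig)."""
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_token(token: str, secret: bytes) -> bool:
    # Recompute the signature and compare in constant time.
    header, body, sig = token.split(".")
    expected = _b64(hmac.new(secret, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

tok = sign_token({"sub": "user42"}, b"dev-secret")
print(verify_token(tok, b"dev-secret"))   # True
print(verify_token(tok, b"wrong-secret")) # False
```

Tampering with either the payload or the secret invalidates the signature, which is the whole point of the scheme.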

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Summary of This Role:
Responsible for availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning. Creates a bridge between development and operations by applying a software engineering mindset to system administration topics. Splits time between operations/on-call duties and developing systems and software that help increase site reliability and performance.

What Part Will You Play?
- Chaos engineering: you're expected to think laterally about how our systems might fail in theory, design tests to demonstrate how they behave in practice, and then formulate and implement remediation plans, as appropriate.
- Pushing our systems to their limits, and then coming up with designs for how to get them to the next performance tier.
- Using practices from DevOps and GitOps to improve automation and processes to make self-service possible.
- Safeguarding reliability: ensuring that our services are highly available, resilient against disasters, self-monitoring, and self-healing.
- Running "game days" to test assumptions about reliability and learn what will break before it matters to customers.
- Reviewing designs with an eye toward increasing the holistic stability of our platform and identifying potential risks.
- Building systems to proactively monitor the health, performance, and security of our production and non-production virtualized infrastructure.
- Improving our monitoring and alerting systems to make sure engineers get paged when it matters (and don't get paged when it doesn't).
- Troubleshooting systems and network issues alongside our Technical Operations Team.
- Evolving our SDLC, practices, and tooling to account for site reliability considerations and best practices.
- Developing runbooks and improving documentation.

What Are We Looking For in This Role?

Minimum Qualifications:
- BS in Computer Science, Information Technology, Business/Management Information Systems, or a related field
- Typically a minimum of 2 years of relevant experience

Preferred Qualifications:
- Nothing provided

What Are Our Desired Skills and Capabilities?
- Skills/Knowledge: Developing professional expertise; applies company policies and procedures to resolve a variety of issues.
- Job Complexity: Works on problems of moderate scope where analysis of situations or data requires a review of a variety of factors. Exercises judgment within defined procedures and practices to determine appropriate action. Builds productive internal/external working relationships.
- Supervision: Normally receives general instructions on routine work and detailed instructions on new projects or assignments.
- Experience in public and private clouds, Jenkins, Terraform, Ansible, OpenShift, Kubernetes, or AWS EKS
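Much of the self-healing work described above rests on one building block: retrying transient failures with exponential backoff plus jitter, so a fleet of clients does not hammer a recovering service in lockstep. A small sketch computing the delay schedule; the base, cap, and retry counts are illustrative defaults, not a standard:

```python
import random

def backoff_delays(retries: int, base: float = 0.5, cap: float = 30.0, seed=None) -> list[float]:
    """Exponential backoff with full jitter: the n-th delay is drawn
    uniformly from [0, min(cap, base * 2**n)]. Randomizing the delay
    spreads retries out and avoids a thundering herd."""
    rng = random.Random(seed)
    return [rng.uniform(0, min(cap, base * 2 ** n)) for n in range(retries)]

print(backoff_delays(5, seed=7))
```

The cap matters as much as the exponent: without it, a long outage produces multi-minute sleeps that turn a transient blip into a self-inflicted one.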

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

Remote


Your work days are brighter here. At Workday, it all began with a conversation over breakfast. When our founders met at a sunny California diner, they came up with an idea to revolutionize the enterprise software market. And when we began to rise, one thing that really set us apart was our culture. A culture which was driven by our value of putting our people first. And ever since, the happiness, development, and contribution of every Workmate is central to who we are. Our Workmates believe a healthy employee-centric, collaborative culture is the essential mix of ingredients for success in business. That’s why we look after our people, communities and the planet while still being profitable. Feel encouraged to shine, however that manifests: you don’t need to hide who you are. You can feel the energy and the passion, it's what makes us unique. Inspired to make a brighter work day for all and transform with us to the next stage of our growth journey? Bring your brightest version of you and have a brighter work day here. About The Team Come be a part of something big. If you want to be a part of building something big that will drive value throughout the entire global organization, then this is the opportunity for you. You will be working on top priority initiatives that span new and existing technologies - all to deliver outstanding results and experiences for our customers and employees. The Enterprise Data Services organization in Business Technology takes pride in enabling data driven business outcomes to spearhead Workday’s growth through trusted data excellence, innovation and architecture thought leadership. 
Our organization is responsible for developing and supporting Data Warehousing, Data Ingestion and Integration Services, Master Data Management (MDM), Data Quality Assurance, and the deployment of cutting-edge Advanced Analytics and Machine Learning solutions tailored to enhance multiple business sectors such as Sales, Marketing, Services, Support, and Customer Engagement. Our team harnesses the power of top-tier modern cloud platforms and services, including AWS, Databricks, Snowflake, Reltio, Tableau, Snaplogic, and MongoDB, complemented by a suite of AWS-native technologies like Spark, Airflow, Redshift, Sagemaker, and Kafka. These tools are pivotal in our drive to create robust data ecosystems that empower our business operations with precision and scalability. EDS is a global team distributed across the U.S., India, and Canada. About The Role Join a pioneering organization at the forefront of technological advancement, dedicated to leveraging data-driven insights to transform industries and drive innovation. We are actively seeking a skilled Data Platform and Support Engineer who will play a pivotal role in ensuring the smooth functioning of our data infrastructure, enabling self-service analytics, and empowering analytical teams across the organization. As a Data Platform and Support Engineer, you will oversee the management of our enterprise data hub, working alongside a team of dedicated data and software engineers to build and maintain a robust data ecosystem that drives decision-making at scale for internal analytical applications. You will play a key role in ensuring the availability, reliability, and performance of our data infrastructure and systems. You will be responsible for monitoring, maintaining, and optimizing data systems, providing technical support, and implementing proactive measures to enhance data quality and integrity.
This role requires advanced technical expertise, problem-solving skills, and a strong commitment to delivering high-quality support services. The team is responsible for supporting Data Services, Data Warehouse, Analytics, Data Quality and Advanced Analytics/ML for multiple business functions including Sales, Marketing, Services, Support and Customer Experience. We leverage leading modern cloud platforms like AWS, Reltio, Snowflake, Tableau, Snaplogic, and MongoDB in addition to the native AWS technologies like Spark, Airflow, Redshift, Sagemaker and Kafka. Job Responsibilities: Monitor the health and performance of data systems, including databases, data warehouses, and data lakes. Conduct root cause analysis and implement corrective actions to prevent recurrence of issues. Manage and optimize data infrastructure components such as servers, storage systems, and cloud services. Develop and implement data quality checks, validation rules, and data cleansing procedures. Implement security controls and compliance measures to protect sensitive data and ensure regulatory compliance. Design and implement data backup and recovery strategies to safeguard data against loss or corruption. Optimize the performance of data systems and processes by tuning queries, optimizing storage, and improving ETL pipeline efficiency. Maintain comprehensive documentation, runbooks, and troubleshooting guides for data systems and processes. Collaborate with cross-functional teams, including data engineers, data scientists, business analysts, and IT operations. Lead or participate in data-related projects, such as system migrations, upgrades, or expansions. Deliver training and mentorship to junior team members, sharing knowledge and standard methodologies to support their professional development.
Participate in rotational shifts, including on-call rotations and coverage during weekends and holidays as required, to provide 24/7 support for data systems, responding to and resolving data-related incidents in a timely manner. Hands-on experience with source version control, continuous integration, and release/organizational change delivery tools. About You Basic Qualifications: 6+ years of experience designing and building scalable and robust data pipelines to enable data-driven decisions for the business. A BE/Master's in Computer Science or equivalent is required. Other Qualifications: Prior experience with CRM systems (e.g. Salesforce) is desirable. Experience building analytical solutions for Sales and Marketing teams. Experience working with Snowflake, Fivetran, DBT, and Airflow. Experience with very large-scale data warehouse and data engineering projects. Experience developing low-latency data processing solutions such as AWS Kinesis, Kafka, and Spark stream processing. Proficiency in writing advanced SQL and expertise in SQL performance tuning. Experience working with AWS data technologies like S3, EMR, Lambda, DynamoDB, Redshift, etc. Solid experience in one or more programming languages for processing large data sets, such as Python or Scala. Ability to create data models and STAR schemas for data consumption. Extensive experience troubleshooting data issues, analyzing end-to-end data pipelines, and working with users to resolve issues. Our Approach to Flexible Work With Flex Work, we’re combining the best of both worlds: in-person time and remote. Our approach enables our teams to deepen connections, maintain a strong community, and do their best work. We know that flexibility can take shape in many ways, so rather than a number of required days in-office each week, we simply spend at least half (50%) of our time each quarter in the office or in the field with our customers, prospects, and partners (depending on role).
This means you'll have the freedom to create a flexible schedule that caters to your business, team, and personal needs, while being intentional to make the most of time spent together. Those in our remote "home office" roles also have the opportunity to come together in our offices for important moments that matter. Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process!
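As an illustration of the "low latency data processing" requirement above (AWS Kinesis, Kafka, Spark stream processing), the core primitive those engines provide is windowed aggregation. Below is a toy tumbling-window counter in plain Python with invented event data; it is a concept demo, not any vendor's API:

```python
from collections import defaultdict

def tumbling_counts(events, window_s=60):
    # Bucket (timestamp_seconds, key) events into fixed, non-overlapping
    # windows -- the basic aggregation a streaming engine performs
    # continuously and incrementally, rather than in one batch like this.
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_s) * window_s
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(5, "click"), (30, "click"), (65, "view"), (70, "click")]
print(tumbling_counts(events))
# {(0, 'click'): 2, (60, 'view'): 1, (60, 'click'): 1}
```

A real low-latency system emits each window's result as soon as the window closes instead of scanning all events at the end.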

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Title: AI Engineer Location: Gurgaon (On-site) Type: Full-Time Experience: 2–6 Years Role Overview We are seeking a hands-on AI Engineer to architect and deploy production-grade AI systems that power our real-time voice intelligence suite. You will lead AI model development, optimize low-latency inference pipelines, and integrate GenAI, ASR, and RAG systems into scalable platforms. This role combines deep technical expertise with team leadership and a strong product mindset. Key Responsibilities Build and deploy ASR models (e.g., Whisper, Wav2Vec2.0) and diarization systems for multi-lingual, real-time environments. Design and optimize GenAI pipelines using OpenAI, Gemini, LLaMA, and RAG frameworks (LangChain, LlamaIndex). Architect and implement vector database systems (FAISS, Pinecone, Weaviate) for knowledge retrieval and indexing. Fine-tune LLMs using SFT, LoRA, and RLHF, and craft effective prompt strategies for summarization and recommendation tasks. Lead AI engineering team members and collaborate cross-functionally to ship robust, high-performance systems at scale. Preferred Qualifications 2–6 years of experience in AI/ML, with demonstrated deployment of NLP, GenAI, or STT models in production. Proficiency in Python, PyTorch/TensorFlow, and real-time architectures (WebSockets, Kafka). Strong grasp of transformer models, MLOps, and low-latency pipeline optimization. Bachelor's/Master's in CS, AI/ML, or a related field from a reputed institution (IITs, BITS, IIITs, or equivalent). What We Offer Compensation: Competitive salary + equity + performance bonuses. Ownership: Lead impactful AI modules across voice, NLP, and GenAI. Growth: Work with top-tier mentors, advanced compute resources, and real-world scaling challenges. Culture: High-trust, high-speed, outcome-driven startup environment.
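The vector-retrieval core of the RAG systems this role describes fits in a few lines. The sketch below substitutes a hashed-bigram embedding and a brute-force cosine scan for the real embedding model and ANN index (FAISS, Pinecone, Weaviate) a production system would use; every function and document here is invented for illustration:

```python
import math

def embed(text):
    # Toy stand-in for a real embedding model (OpenAI, Cohere, etc.):
    # hash character bigrams into a small fixed-size, L2-normalized vector.
    vec = [0.0] * 64
    low = text.lower()
    for a, b in zip(low, low[1:]):
        vec[(ord(a) * 31 + ord(b)) % 64] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def retrieve(query, index, k=2):
    # Cosine similarity against every stored chunk; real systems replace
    # this full scan with an approximate-nearest-neighbor index.
    q = embed(query)
    scored = [(sum(a * b for a, b in zip(q, v)), chunk) for chunk, v in index]
    return [chunk for _, chunk in sorted(scored, reverse=True)[:k]]

docs = ["refund policy: 30 days", "shipping takes 5 days", "refunds need a receipt"]
index = [(d, embed(d)) for d in docs]
print(retrieve("how do refunds work", index))
```

The retrieved chunks would then be stuffed into the LLM prompt; the listing's scale requirements (millions of chunks, sub-second retrieval) are exactly what the dedicated vector databases handle.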

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Join our forward-thinking team as a Linux Engineer with a strong foundation in Python, where you will play a pivotal role in automating and optimizing our Linux server infrastructure. At IMC, the Linux Engineering team is at the heart of our operations, managing the provisioning, configuration, and ongoing performance of an extensive and mission-critical Linux server fleet. In this role, you will leverage cutting-edge automation and self-service tools to ensure our servers are not only stable and reliable but also scalable to meet the demands of a rapidly evolving industry. Your innovative approach and commitment to continuous improvement will help drive us to remain leaders in the field, integrating the latest technologies and methodologies to maintain our competitive edge. Your Core Responsibilities: Use state-of-the-art tools and methods to troubleshoot and resolve complex issues on enterprise Linux systems, ensuring the stability and functionality of our key trading and development systems Enhance and support configuration management code and automated processes that operate on 7500+ critical Linux systems in a near 24/7 High-Frequency Trading (HFT), Ultra Low Latency environment Apply your Python expertise to design, develop, and support processes that manage and maintain critical Linux systems at scale in a diverse and technically complex environment Improve and support existing programs and processes that provision bare-metal servers, transforming them from a blank slate to fully functioning Linux trading and development platforms Support and enhance our metrics and log collection infrastructure, as well as our core monitoring and alerting tools, ensuring robust system visibility Consistently communicate status updates, ideas, and strategies with peers and stakeholders through various channels including chats, face-to-face interactions, issue tracking tickets, clear commit messages, and well-documented merge requests Your Skills and Experience: Bachelor’s Degree in Computer Engineering or a similar field of study 5+ years of experience in Linux engineering, debugging, administration, and OS system provisioning (PXE/DHCP/TFTP/Grub) Extensive experience with configuration management at scale, preferably with Puppet and Hiera Experience in Docker image building, modification, and publishing Hands-on experience with Kubernetes Advanced skills in Python for automation, API programming, design, unit testing, and debugging Proven experience in designing Ansible tasks and playbooks, as well as utilizing Ansible Tower Expertise in RPM design, build, publishing, and repository management Familiarity with CI/CD pipelines, version control systems (git), and branching and merging best practices Proficiency in a range of system/network tools and services including eBPF, tcpdump, strace, nmcli (NetworkManager), systemd, ntp/ptp, lsof, nc, nmap, and NFS/S3 storage Proficiency with networking fundamentals including DNS, TCP/UDP/multicast, etc. Experience with monitoring tools such as Prometheus/Grafana, Alertmanager, Alerta, and OpsGenie About Us IMC is a leading trading firm, known worldwide for our advanced, low-latency technology and world-class execution capabilities. Over the past 30 years, we’ve been a stabilizing force in the financial markets – providing the essential liquidity our counterparties depend on. Across offices in the US, Europe, and Asia Pacific, our talented employees are united by our entrepreneurial spirit, exceptional culture, and commitment to giving back. It's a strong foundation that allows us to grow and add new capabilities, year after year. From entering dynamic new markets, to developing a state-of-the-art research environment and diversifying our trading strategies, we dare to imagine what could be and work together to make it happen.
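A role like this centers on Python automation across thousands of hosts. As a hedged sketch, the fan-out pattern for a fleet-wide health check looks like the following; the per-host probe is a stub (a real one would ssh, ping, or scrape an exporter) and all hostnames are invented:

```python
from concurrent.futures import ThreadPoolExecutor

def check_host(host):
    # Stand-in for a real probe (ssh command, ping, node_exporter scrape).
    # Here we simply pretend hosts ending in an even digit are healthy.
    return host, int(host[-1]) % 2 == 0

def sweep(hosts, workers=32):
    # Fan per-host checks out across a thread pool; because each probe is
    # I/O-bound, the same shape scales to thousands of hosts.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = dict(pool.map(check_host, hosts))
    return sorted(h for h, ok in results.items() if not ok)

hosts = [f"trade{i:04d}" for i in range(10)]
print(sweep(hosts))  # hostnames whose checks failed
```

In practice the failing list would feed alerting (Prometheus/Alertmanager, OpsGenie) rather than a print.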

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


About Gruve Gruve is an innovative software services startup dedicated to transforming enterprises into AI powerhouses. We specialize in cybersecurity, customer experience, cloud infrastructure, and advanced technologies such as Large Language Models (LLMs). Our mission is to assist our customers with their business strategies, utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks. About The Role We are seeking a Level 2 Network Engineer with expertise in Cisco Network Analytics, BAU Operations, and Device Management. The ideal candidate will be responsible for monitoring, troubleshooting, and optimizing Cisco network infrastructure, ensuring high availability, performance, and compliance with operational standards. The candidate should be well versed in the Change and Incident Management processes and ready to work in 24x7 shifts. Key Responsibilities Network Monitoring & Incident Management: Monitor Cisco network infrastructure using tools such as Cisco DNA Center, Cisco Prime, ThousandEyes, NetFlow, or SolarWinds. Analyze network performance metrics, detect anomalies, and proactively address potential issues. Respond to and resolve network incidents, escalating complex issues to L3 teams when necessary. Perform Root Cause Analysis (RCA) for recurring network issues and implement corrective actions. BAU Operations & Device Management Perform routine health checks, firmware updates, and configuration backups of network devices. Manage Cisco routers, switches, and wireless controllers, ensuring optimal configuration and uptime. Support device provisioning, decommissioning, and lifecycle management. Maintain inventory records for network devices and ensure compliance with security policies. Network Troubleshooting & Optimization Diagnose and resolve network latency, packet loss, and connectivity issues.
Troubleshoot routing protocols (BGP, OSPF, EIGRP) and switching issues (VLANs, STP, HSRP, VRRP). Work on QoS, ACLs, NAT, DHCP, and network segmentation to improve performance. Assist in capacity planning and network performance tuning. Change Management & Documentation Implement network changes as per the approved Change Management Process (ITIL framework). Document network configurations, changes, and troubleshooting procedures. Create and maintain standard operating procedures (SOPs) for network operations. Security & Compliance Ensure compliance with network security best practices, including access controls and segmentation. Work with security teams to monitor for vulnerabilities and apply necessary patches. Assist in firewall policy reviews and VPN configurations, if required. Basic Qualifications 2–5 years of experience in Cisco networking and BAU operations. Hands-on experience with Cisco DNA Center, Cisco Prime Infrastructure, or Cisco ThousandEyes. Strong knowledge of routing & switching protocols (BGP, OSPF, EIGRP, VLANs, STP, HSRP, VRRP). Familiarity with network monitoring tools and log analysis for troubleshooting. Basic knowledge of firewall integrations, VPNs, and network security policies. Preferred Qualifications ITIL Foundation certification. CCNP Enterprise or CCNP Security certification (preferred). Hands-on experience working with multiple clients and deployments for BAU operations and support. Why Gruve At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you. Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
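For the latency and packet-loss diagnosis described above, the summary arithmetic that monitoring tools report is straightforward. A minimal illustration over a batch of ping results (function and field names are invented; None marks a lost probe):

```python
def probe_stats(rtts_ms):
    # Summarize one batch of ping round-trip times in milliseconds.
    # Tools like SolarWinds or ThousandEyes report the same trio:
    # packet loss %, average latency, and worst-case latency.
    replies = [r for r in rtts_ms if r is not None]
    loss_pct = 100.0 * (len(rtts_ms) - len(replies)) / len(rtts_ms)
    avg = sum(replies) / len(replies) if replies else None
    worst = max(replies) if replies else None
    return {"loss_pct": loss_pct, "avg_ms": avg, "max_ms": worst}

print(probe_stats([12.1, 11.8, None, 13.0]))  # one lost probe out of four
```

Thresholds on these three numbers (for example, loss above 1% or average latency above a target SLA) are what typically trigger an incident ticket.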

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


At IMC our team of Business Partners plays an important and strategic role in the development of our employees and leaders. We’re seeking a confident, driven, collaborative, and effervescent Senior HR Business Partner to join our progressive and growing HR team. As a successful Senior HRBP you will partner with people leaders to drive high performance, employee engagement, and continuous improvement. With fast-growing headcount across India, our BPs are tasked with ensuring that we consistently deliver an exceptional working environment and employee experience. We offer genuine Business Partnering roles where success is built on effective 1:1s with managers and employees, through the coaching of individuals and the utilization of insights to drive organizational change. In addition to hands-on business partnering, the roles are also responsible for the design and delivery of a number of cyclical and ad hoc HR projects. At IMC we employ a broad range of people with varying backgrounds. What they have in common is their superior technical expertise, their extraordinary smarts, and their collaborative approach, and our HR team prides itself on having the same mindset. What you will do: Partner with department heads to deliver on strategic HR initiatives. Consult with leaders and execute people-related strategies around performance management, employee relationships, coaching, organizational development, workforce planning, and learning & development. Provide 1:1 coaching to develop the leadership capability of our team leads and managers. Drive consistency in employee experience and engagement. Design and execute L&D and leadership development programs, and facilitate local training. Participate in final round interviews to ensure the suitability of new talent for our teams.
Own and execute on cyclical projects, and be involved in or own the design and execution of HR projects to continue driving HR’s impact across the organization, in areas such as performance reviews, career conversations, and employee engagement. Manage (limited) HR operations, including the termination processes for department employees (with support from the HR coordinator). Who you are: A Bachelor’s degree or higher in Human Resources, Business, or a related field. 8+ years’ experience as an HR Manager or HR Business Partner in a progressive, high-touch, fast-paced environment. Executive coaching experience is desired but not essential. Preparedness to participate in frequent 1:1 meetings, maintaining high levels of engagement, energy, and motivation in a high-touch HR environment. Proven experience in proactively coaching leaders and providing guidance on people management issues and strategies. Ability to build credibility with senior leaders as an advisor who provides quality advice and follows through on commitments. Superior communication skills, able to convey confidence, empathy, and trust via candor and diplomacy to build effective relationships. Possesses an abundance of common sense and pragmatism to solve problems and get results. Be part of a fun and energetic team that is results-oriented, self-motivated, and driven. About Us IMC is a leading trading firm, known worldwide for our advanced, low-latency technology and world-class execution capabilities. Over the past 30 years, we’ve been a stabilizing force in the financial markets – providing the essential liquidity our counterparties depend on. Across offices in the US, Europe, and Asia Pacific, our talented employees are united by our entrepreneurial spirit, exceptional culture, and commitment to giving back. It's a strong foundation that allows us to grow and add new capabilities, year after year.
From entering dynamic new markets, to developing a state-of-the-art research environment and diversifying our trading strategies, we dare to imagine what could be and work together to make it happen.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote


INSTRUMENTAL SERVICES INC DBA INRHYTHM Pune, Maharashtra, India Who Is InRhythm? InRhythm is a leading modern product consultancy and digital innovation firm with a mission to make a dent in the digital economy. Founded in 2002, InRhythm is currently engaged by Fortune 50 enterprises to bring their next generation of digital products and platforms to market. InRhythm has helped hundreds of teams launch mission-critical products that have created a positive impact worth billions of dollars. InRhythm’s unique capabilities of Product Innovation and Platform Modernization services are the most sought-after. The InRhythm team of A+ thought leaders don’t just “get an assignment”; they join the company to do what they love. It’s that passion that has helped us grow rapidly and consistently deliver on our commitment to helping clients develop better, faster, and in rhythm. What We Do At InRhythm We bring enterprises’ most urgent, important products to market with high velocity, high quality, and 10x impact. We enable innovative cultures by coaching teams with the right mix and maturity of modern tools, methods, and thought leadership. This is a unique opportunity to get in on the ground floor of an evolving team. InRhythm clients include a broad range of highly visible and recognizable customers, including, but not limited to: Goldman Sachs, Fidelity, Morgan Stanley, and Mastercard. From greenfield to tier-one builds, our clients look to us to deliver their mission-critical projects related to product strategy, design, cloud native applications, as well as mobile and web development. The projects we work on literally change the world. They change the way we live, work, and think in a positive way. We are looking for a Senior Java Engineer: As a Senior Java Engineer, you will work with lead-level and fellow senior-level engineers to architect and implement solutions that enable customers to get the most out of what the client can offer.
In this role, you will develop performant and robust Java applications while supporting the continued evaluation and advancement of web technologies in the organization. At InRhythm, you will: Work on a high-velocity scrum team Work with clients to come up with solutions to real-world problems Architect and implement scalable end-to-end Web applications Help the team lead facilitate development processes Provide estimates and milestones for features/stories Work with your mentor to learn and grow, and mentor less experienced engineers Contribute to the growth of InRhythm via interviewing and architecting What you bring to the table (Core Requirements): 5+ years of Java development within an enterprise-level domain Java 8 (11 preferred) features like lambda expressions, Stream API, CompletableFuture, etc. Skilled with low-latency, high-volume application development Expertise in CI/CD and shift-left testing Nice to have: Golang and/or Rust Experienced with asynchronous programming, multithreading, implementing APIs, and Microservices, including Spring Boot Proficiency with SQL Experience with data sourcing, data modeling, and data enrichment Experience with Systems Design & CI/CD pipelines Cloud computing, preferably AWS Solid verbal and written communication and consultant/client-facing skills are a must. As a true consultant, you are a self-starter who takes initiative.
Solid experience with at least two (preferably more) of the following: Kafka (Core Concepts, Replication & Reliability, Kafka Internals, Infrastructure & Control, Data Retention and Durability) MongoDB Sonar Jenkins Oracle DB, Sybase IQ, DB2 Drools or any rules engine experience CMS tools like Adobe AEM Search tools like Algolia, ElasticSearch or Solr Spark What makes you stand out from the pack: Payments or Asset/Wealth Management experience Mature server development and knowledge of frameworks, preferably Spring Enterprise experience working and building enterprise products, long-term tenure at enterprise-level organizations, experience working with a remote team, and being an avid practitioner in their craft You have pushed code into production and have deployed multiple products to market, but are missing the visibility of a small team within a large enterprise technology environment. You enjoy coaching junior engineers, but want to remain hands-on with code. Open to working hybrid - 3 days per week from the office Why Be an InRhythmer? People at InRhythm are entrepreneurs and innovators at heart and problem solvers who find new ways to overcome challenges. InRhythm continues to evolve and grow – and is now prepared to accelerate “scale” with the addition of this role to our community. We’ve been named an Inc. 5000 Hall of Fame Fastest Growing Company for 9 years, a Deloitte Fast 500 company for 5 years, and a Consulting Magazine Fastest Growing Company winner several years in a row. If you’re looking forward to working with awesome colleagues in a high-growth environment and tight-knit community, we’re looking forward to hearing from you. Client Description InRhythm is one of the fastest-growing Product Engineering Consultant Agencies in NYC, with a mission to drive growth and innovation. We have been recognized on the Inc. 5000 list of the Fastest Growing Companies in America for 8 years in a row and the Inc.
5000 Hall of Fame, an honor granted to a select 1% of the high-growth companies on the list. We've also been on Deloitte's Technology Fast 500 for 4 consecutive years. We are the thought leaders on how modern software should be developed using the best open-source technologies, proven design patterns, and the best tested Agile and Lean methodologies. We accelerate time to market with reduced costs and improved quality. We have built and continue to build successful solutions for our clients. Our goal is to make our clients 10x more effective on their high-priority projects and to have fun in the process. Our business has seen tremendous growth over the past few years thanks to the thought leadership we offer to our clients. Our pods of experts can rapidly deliver software products using the latest and best advancements in 10x tools and Agile thinking. We provide technical and management consulting, in-house product development, and training and coaching, all customized to meet the specialized needs of each client. Our key InRhythmer traits are drive, ownership, positivity, and communication. Our methodologies, technologies, and people enable ultra-efficient innovation in our core practice areas. Mandatory: Core Java, SOLID Principles, Multithreading, Design patterns Spring, Spring Boot, Rest API, Microservices Kafka, Messaging/streaming stack Junit Code Optimization, Performance Design, Architecture concepts Database and SQL CI/CD - Understanding of Deployment, Infrastructure, Cloud The candidate should have worked on at least 1 Fintech domain project. No gaps in employment history; no job hoppers (the candidate must have good stability). Joining time/notice period: Immediate to 30 days

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job description 🚀 Job Title: AI Engineer Company: Darwix AI Location: Gurgaon (On-site) Type: Full-Time Experience: 2-6 Years Level: Senior Level 🌐 About Darwix AI Darwix AI is one of India’s fastest-growing GenAI startups, revolutionizing the future of enterprise sales and customer engagement with real-time conversational intelligence. We are building a GenAI-powered agent-assist and pitch intelligence suite that captures, analyzes, and enhances every customer interaction—across voice, video, and chat—in real time. We serve leading enterprise clients across India, the UAE, and Southeast Asia and are backed by global VCs, top operators from Google, Salesforce, and McKinsey, and CXOs from the industry. This is your opportunity to join a high-caliber founding tech team solving frontier problems in real-time voice AI, multilingual transcription, retrieval-augmented generation (RAG), and fine-tuned LLMs at scale. 🧠 Role Overview As the AI Engineer, you will drive the development, deployment, and optimization of AI systems that power Darwix AI's real-time conversation intelligence platform. This includes voice-to-text transcription, speaker diarization, GenAI summarization, prompt engineering, knowledge retrieval, and real-time nudge delivery. You will lead a team of AI engineers and work closely with product managers, software architects, and data teams to ensure technical excellence, scalable architecture, and rapid iteration cycles. This is a high-ownership, hands-on leadership role where you will code, architect, and lead simultaneously. 🔧 Key Responsibilities 1. AI Architecture & Model Development Architect end-to-end AI pipelines for transcription, real-time inference, LLM integration, and vector-based retrieval. Build, fine-tune, and deploy STT models (Whisper, Wav2Vec2.0) and diarization systems for speaker separation. Implement GenAI pipelines using OpenAI, Gemini, LLaMA, Mistral, and other LLM APIs or open-source models. 2.
Real-Time Voice AI System Development Design low-latency pipelines for capturing and processing audio in real-time across multi-lingual environments. Work on WebSocket-based bi-directional audio streaming, chunked inference, and result caching. Develop asynchronous, event-driven architectures for voice processing and decision-making. 3. RAG & Knowledge Graph Pipelines Create retrieval-augmented generation (RAG) systems that pull from structured and unstructured knowledge bases. Build vector DB architectures (e.g., FAISS, Pinecone, Weaviate) and connect to LangChain/LlamaIndex workflows. Own chunking, indexing, and embedding strategies (OpenAI, Cohere, Hugging Face embeddings). 4. Fine-Tuning & Prompt Engineering Fine-tune LLMs and foundational models using RLHF, SFT, PEFT (e.g., LoRA) as needed. Optimize prompts for summarization, categorization, tone analysis, objection handling, etc. Perform few-shot and zero-shot evaluations for quality benchmarking. 5. Pipeline Optimization & MLOps Ensure high availability and robustness of AI pipelines using CI/CD tools, Docker, Kubernetes, and GitHub Actions. Work with data engineering to streamline data ingestion, labeling, augmentation, and evaluation. Build internal tools to benchmark latency, accuracy, and relevance for production-grade AI features. 6. Team Leadership & Cross-Functional Collaboration Lead, mentor, and grow a high-performing AI engineering team. Collaborate with backend, frontend, and product teams to build scalable production systems. Participate in architectural and design decisions across AI, backend, and data workflows. 
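The "chunking, indexing, and embedding strategies" in the RAG section start with a splitter. A minimal sliding-window chunker with overlap is sketched below; the sizes are illustrative defaults, not Darwix's actual parameters:

```python
def chunk(text, size=200, overlap=40):
    # Fixed-size windows that advance by (size - overlap) characters,
    # so text near a boundary is repeated at the start of the next
    # chunk and content split by one window survives in its neighbor.
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "".join(chr(65 + i % 26) for i in range(500))
pieces = chunk(doc)
print(len(pieces), [len(p) for p in pieces])  # 3 [200, 200, 180]
```

Each chunk would then be embedded (OpenAI, Cohere, Hugging Face) and upserted into the vector store; production splitters usually cut on sentence or token boundaries rather than raw characters.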
🛠️ Key Technologies & Tools Languages & Frameworks : Python, FastAPI, Flask, LangChain, PyTorch, TensorFlow, HuggingFace Transformers Voice & Audio : Whisper, Wav2Vec2.0, DeepSpeech, pyannote.audio, AssemblyAI, Kaldi, Mozilla TTS Vector DBs & RAG : FAISS, Pinecone, Weaviate, ChromaDB, LlamaIndex, LangGraph LLMs & GenAI APIs : OpenAI GPT-4/3.5, Gemini, Claude, Mistral, Meta LLaMA 2/3 DevOps & Deployment : Docker, GitHub Actions, CI/CD, Redis, Kafka, Kubernetes, AWS (EC2, Lambda, S3) Databases : MongoDB, Postgres, MySQL, Pinecone, TimescaleDB Monitoring & Logging : Prometheus, Grafana, Sentry, Elastic Stack (ELK) 🎯 Requirements & Qualifications 👨‍💻 Experience 2-6 years of experience in building and deploying AI/ML systems, with at least 2+ years in NLP or voice technologies. Proven track record of production deployment of ASR, STT, NLP, or GenAI models. Hands-on experience building systems involving vector databases, real-time pipelines, or LLM integrations. 📚 Educational Background Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. Tier 1 institute preferred (IITs, BITS, IIITs, NITs, or global top 100 universities). ⚙️ Technical Skills Strong coding experience in Python and familiarity with FastAPI/Django. Understanding of distributed architectures, memory management, and latency optimization. Familiarity with transformer-based model architectures, training techniques, and data pipeline design. 💡 Bonus Experience Worked on multilingual speech recognition and translation. Experience deploying AI models on edge devices or browsers. Built or contributed to open-source ML/NLP projects. Published papers or patents in voice, NLP, or deep learning domains. 🚀 What Success Looks Like in 6 Months Lead the deployment of a real-time STT + diarization system for at least 1 enterprise client. Deliver high-accuracy nudge generation pipeline using RAG and summarization models. 
Build an in-house knowledge indexing + vector DB framework integrated into the product.
Mentor 2–3 AI engineers and own execution across multiple modules.
Achieve <1 sec latency on the real-time voice-to-nudge pipeline, from capture to recommendation.
💼 What We Offer
Compensation: Competitive fixed salary + equity + performance-based bonuses
Impact: Ownership of key AI modules powering thousands of live enterprise conversations
Learning: Access to high-compute GPUs, API credits, research tools, and conference sponsorships
Culture: High-trust, outcome-first environment that celebrates execution and learning
Mentorship: Work directly with founders, ex-Microsoft, IIT-IIM-BITS alums, and top AI engineers
Scale: Opportunity to scale an AI product from 10 clients to 100+ globally within 12 months
⚠️ This Role is NOT for Everyone
🚫 If you're looking for a slow, abstract research role—this is NOT for you.
🚫 If you're used to months of ideation before shipping—you won't enjoy our speed.
🚫 If you're not comfortable being hands-on and diving into scrappy builds—you may struggle.
✅ But if you're a builder, architect, and visionary who loves solving hard technical problems and delivering real-time AI at scale, we want to talk to you.
📩 How to Apply
Send your CV, GitHub/portfolio, and a brief note on "Why AI at Darwix?" to:
📧 careers@cur8.in
Subject Line: Application – AI Engineer – [Your Name]
Include links to:
Any relevant open-source contributions
LLM/STT models you've fine-tuned or deployed
RAG pipelines you've worked on
🔍 Final Thought
This is not just a job. This is your opportunity to build the world's most scalable AI sales intelligence platform—from India, for the world.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


Sprinklr is a leading enterprise software company for all customer-facing functions. With advanced AI, Sprinklr's unified customer experience management (Unified-CXM) platform helps companies deliver human experiences to every customer, every time, across any modern channel. Headquartered in New York City with employees around the world, Sprinklr works with more than 1,000 of the world's most valuable enterprises — global brands like Microsoft, P&G, Samsung and more than 50% of the Fortune 100. Learn more about our culture and how we make our employees happier through The Sprinklr Way.
Job Description
Own and manage customer implementations, ensuring successful CCaaS deployments.
Perform end-to-end network and IP telephony assessments, identifying bottlenecks and areas for improvement.
Analyze signaling and media streams and run network diagnostics to troubleshoot voice-quality issues.
Work closely with CPaaS vendors and carrier partners to optimize voice connectivity.
Act as the primary technical contact for customers during UAT and post-go-live.
Develop best-practice guidelines for customers on network configurations and VoIP optimizations.
Assist with customer training on voice-quality monitoring tools and configurations.
Qualifications & Skills
5+ years of experience in the VoIP, telecom, and networking domains.
Strong knowledge of VoIP protocols (SIP/SDP, RTP/RTCP), networking fundamentals (UDP/TCP/IP, DNS, MPLS), QoS (latency, jitter, packet-loss mitigation), and CCaaS platforms (e.g., Genesys, Five9, Amazon Connect).
Hands-on experience with Session Border Controllers (SBCs), media servers, and WebRTC.
Hands-on experience with network topologies, VPNs, IPsec, firewalls, and NAT traversal for VoIP.
Ability to collaborate with CPaaS vendors, carrier partners, and engineering teams.
Strong troubleshooting skills, with experience using network monitoring and debugging tools.
Why You'll Love Sprinklr:
We're committed to creating a culture where you feel like you belong, are happier today than you were yesterday, and your contributions matter. At Sprinklr, we passionately, genuinely care. For full-time employees, we provide comprehensive health plans, leading well-being programs, and financial protection for you and your family through global and localized plans throughout the world. For more information on Sprinklr benefits around the world, head to https://sprinklrbenefits.com/ to browse our country-specific benefits guides.
We focus on our mission: We founded Sprinklr with one mission: to enable every organization on the planet to make their customers happier. Our vision is to be the world's most loved enterprise software company, ever.
We believe in our product: Sprinklr was built from the ground up to enable a brand's digital transformation. Its platform provides every customer-facing team with the ability to reach, engage, and listen to customers around the world. At Sprinklr, we have many of the world's largest brands as our clients, and our employees have the opportunity to work closely alongside them.
We invest in our people: At Sprinklr, we believe every human has the potential to be amazing. We empower each Sprinklrite in the journey toward achieving their personal and professional best. For wellbeing, this includes daily meditation breaks, virtual fitness, and access to Headspace. We have continuous learning opportunities available with LinkedIn Learning and more.
EEO - Our philosophy: Our goal is to ensure every employee feels like they belong and are operating in a judgment-free zone regardless of gender, race, ethnicity, age, and lifestyle preference, among others. We value and celebrate diversity and fervently believe every employee matters and should be respected and heard. We believe we are stronger when we belong because collectively, we're more innovative, creative, and successful.
Sprinklr is proud to be an equal-opportunity workplace and is an affirmative-action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. See also Sprinklr’s EEO Policy and EEO is the Law.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Designation: ML / MLOps Engineer
Location: Noida (Sector 132)
Key Responsibilities:
• Model Development & Algorithm Optimization: Design, implement, and optimize ML models and algorithms using libraries and frameworks such as TensorFlow, PyTorch, and scikit-learn to solve complex business problems.
• Training & Evaluation: Train and evaluate models using historical data, ensuring accuracy, scalability, and efficiency while fine-tuning hyperparameters.
• Data Preprocessing & Cleaning: Clean, preprocess, and transform raw data into a suitable format for model training and evaluation, applying industry best practices to ensure data quality.
• Feature Engineering: Conduct feature engineering to extract meaningful features from data that enhance model performance and improve predictive capabilities.
• Model Deployment & Pipelines: Build end-to-end pipelines and workflows for deploying machine learning models into production environments, leveraging Azure Machine Learning and containerization technologies like Docker and Kubernetes.
• Production Deployment: Develop and deploy machine learning models to production environments, ensuring scalability and reliability using tools such as Azure Kubernetes Service (AKS).
• End-to-End ML Lifecycle Automation: Automate the end-to-end machine learning lifecycle, including data ingestion, model training, deployment, and monitoring, ensuring seamless operations and faster model iteration.
• Performance Optimization: Monitor and improve inference speed and latency to meet real-time processing requirements, ensuring efficient and scalable solutions.
• NLP, CV, GenAI Programming: Work on machine learning projects involving Natural Language Processing (NLP), Computer Vision (CV), and Generative AI (GenAI), applying state-of-the-art techniques and frameworks to improve model performance.
• Collaboration & CI/CD Integration: Collaborate with data scientists and engineers to integrate ML models into production workflows, building and maintaining continuous integration/continuous deployment (CI/CD) pipelines using tools like Azure DevOps, Git, and Jenkins.
• Monitoring & Optimization: Continuously monitor the performance of deployed models, adjusting parameters and optimizing algorithms to improve accuracy and efficiency.
• Security & Compliance: Ensure all machine learning models and processes adhere to industry security standards and compliance protocols, such as GDPR and HIPAA.
• Documentation & Reporting: Document machine learning processes, models, and results to ensure reproducibility and effective communication with stakeholders.
Required Qualifications:
• Bachelor’s or Master’s degree in Computer Science, Engineering, Data Science, or a related field.
• 3+ years of experience in machine learning operations (MLOps), cloud engineering, or similar roles.
• Proficiency in Python, with hands-on experience using libraries such as TensorFlow, PyTorch, scikit-learn, Pandas, and NumPy.
• Strong experience with Azure Machine Learning services, including Azure ML Studio, Azure Databricks, and Azure Kubernetes Service (AKS).
• Knowledge and experience in building end-to-end ML pipelines, deploying models, and automating the machine learning lifecycle.
• Expertise in Docker, Kubernetes, and container orchestration for deploying machine learning models at scale.
• Experience in data engineering practices and familiarity with cloud storage solutions like Azure Blob Storage and Azure Data Lake.
• Strong understanding of NLP, CV, or GenAI programming, along with the ability to apply these techniques to real-world business problems.
• Experience with Git, Azure DevOps, or similar tools to manage version control and CI/CD pipelines.
• Solid experience in machine learning algorithms, model training, evaluation, and hyperparameter tuning.
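The training-and-evaluation responsibility above boils down to selecting hyperparameters against a held-out validation split. Below is a minimal, library-free sketch with made-up numbers; a real workflow would use scikit-learn's search utilities or an Azure ML sweep job over far richer models.

```python
def fit_ridge_1d(xs, ys, lam):
    """Closed-form 1D ridge regression, no intercept: w = sum(x*y) / (sum(x^2) + lam)."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

def mse(w, xs, ys):
    """Mean squared error of predictions w*x against targets y."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Hypothetical data, roughly y = 2x, split into train and validation sets.
train_x, train_y = [1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]
val_x, val_y = [5, 6], [10.1, 11.9]

best_lam, best_err = None, float("inf")
for lam in [0.0, 0.1, 1.0, 10.0]:           # the hyperparameter grid
    w = fit_ridge_1d(train_x, train_y, lam)
    err = mse(w, val_x, val_y)              # score on held-out data, never on train
    if err < best_err:
        best_lam, best_err = lam, err
```

The key discipline is the same at any scale: fit on the training split, compare candidates only on the validation split.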

Posted 2 weeks ago

Apply

4.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


We go beyond the obvious, using intelligence, passion and creativity to inspire new thinking and shape the world we live in. To start a career that is out of the ordinary, please apply...
Job Details
KANTAR is the world's leading insights, consulting, and analytics company. We understand how people think, feel, shop, share, vote, and view more than anybody else. With over 25,000 people, we combine the best of human understanding with advanced technologies to help the world's leading organizations succeed and grow. (For more details, visit www.kantar.com)
The Global Data Science and Innovation (DSI) unit of KANTAR, nested within its Analytics Practice (https://www.kantar.com/expertise/analytics), is a fast-growing team of niche, elite data scientists responsible for all data science led innovation within KANTAR. The unit has a strong internal and external reputation with global stakeholders and clients for handling sophisticated, cutting-edge analytics, using state-of-the-art techniques and deep mathematical/statistical rigor. The unit is responsible for most AI and Gen AI related initiatives within KANTAR (https://www.kantar.com/campaigns/artificial-intelligence), including bringing in the latest developments in the field of Machine Learning, Generative AI, Deep Learning, Computer Vision, NLP, Optimization, etc. to solve complex business problems in marketing analytics and consulting and to build products that empower our colleagues around the world.
Job profile
We are looking for a Senior AI Engineer to be part of our Global Data Science and Innovation team. As part of a high-profile team, the position offers a unique opportunity to work first-hand on some of the most exciting and challenging AI-led projects within the organization, and to be part of a fast-paced, entrepreneurial environment.
As a senior member of the team, you will be responsible for working with your leadership team to build a world-class portfolio of AI-led solutions within KANTAR, leveraging the latest developments in AI/ML. You will be part of several initiatives to productionize multiple R&D PoCs and pilots that leverage a variety of AI/ML algorithms and technologies, particularly (but not restricted to) Generative AI. As an experienced AI engineer, you will hold yourself accountable for the entire process of developing, scaling, and commercializing these enterprise-grade products and solutions. You will work hands-on as well as with a highly talented, cross-functional, geography-agnostic team of data scientists and AI engineers. As part of the global data science and innovation team, you will be a representative and ambassador for data science/AI/ML led solutions with internal and external stakeholders.
Job Description
The candidate will be responsible for the following:
Develop and maintain scalable and efficient AI pipelines and infrastructure for deployment.
Deploy AI models and solutions into production environments, ensuring stability and performance.
Monitor and maintain deployed AI systems to ensure they operate effectively and efficiently.
Troubleshoot and resolve issues related to AI deployment, including performance bottlenecks and system failures.
Optimize deployment processes to reduce latency and improve the scalability of AI solutions.
Implement robust version control and model management practices to track AI model changes and updates.
Ensure the security and compliance of deployed AI systems with industry standards and regulations.
Provide technical support and guidance for deployment-related queries and issues.
Qualification, Experience, And Skills
Advanced degree from top-tier technical institutes in a relevant discipline
4 to 10 years’ experience, with at least the past few years working in Generative AI
Prior firsthand work experience in building and deploying applications on cloud platforms like Azure/AWS/Google Cloud using serverless architecture
Proficiency in tools such as Azure Machine Learning service, Amazon SageMaker, Google Cloud AI
Prior experience with containerization tools (for ex., Docker, Kubernetes), databases (for ex., MySQL, MongoDB), deployment tools (for ex., Azure DevOps), big data tools (for ex., Spark)
Ability to develop and integrate APIs. Experience with RESTful services.
Experience with continuous integration/continuous deployment (CI/CD) pipelines
Knowledge of Agile working methodologies for product development
Knowledge of (and potentially working experience with) LLMs and Foundation models from OpenAI, Google, Anthropic and others
Hands-on coding experience in Python
Desired Skills That Would Be a Distinct Advantage
Preference given to past experience in developing/maintaining live deployments.
Comfortable working in global set-ups with diverse cross-geography teams and cultures.
Energetic, self-driven, curious, and entrepreneurial.
Excellent (English) communication skills to address both technical audiences and business stakeholders.
Meticulous and deep attention to detail. Able to straddle the ‘big picture’ and the ‘details’ with ease.
Location
Chennai, Teynampet, Anna Salai, India
Kantar Rewards Statement
At Kantar we have an integrated way of rewarding our people based around a simple, clear and consistent set of principles. Our approach helps to ensure we are market competitive and also to support a pay-for-performance culture, where your reward and career progression opportunities are linked to what you deliver.
We go beyond the obvious, using intelligence, passion and creativity to inspire new thinking and shape the world we live in.
Apply for a career that’s out of the ordinary and join us. We want to create an equality of opportunity in a fair and supportive working environment where people feel included, accepted and are allowed to flourish in a space where their mental health and well being is taken into consideration. We want to create a more diverse community to expand our talent pool, be locally representative, drive diversity of thinking and better commercial outcomes. Kantar is the world’s leading data, insights and consulting company. We understand more about how people think, feel, shop, share, vote and view than anyone else. Combining our expertise in human understanding with advanced technologies, Kantar’s 30,000 people help the world’s leading organisations succeed and grow.
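One concrete slice of the "robust version control and model management" responsibility above is a model registry that assigns content-addressed versions and supports promotion and rollback. The class and method names below are hypothetical, and the whole thing is an in-memory sketch: in production this is what MLflow's Model Registry or Azure ML's model registry provide.

```python
import hashlib
import json

class ModelRegistry:
    """Minimal in-memory model registry: content-hashed versions, promote/rollback."""

    def __init__(self):
        self._versions = {}      # version id -> model metadata
        self._production = None  # version id currently serving

    def register(self, name, params):
        """Derive the version id from the model's content, so re-registering
        identical metadata yields the same id (idempotent)."""
        payload = json.dumps({"name": name, "params": params}, sort_keys=True)
        version = hashlib.sha256(payload.encode()).hexdigest()[:8]
        self._versions[version] = {"name": name, "params": params}
        return version

    def promote(self, version):
        """Point production at `version`; return the previous id so a failed
        deployment can be rolled back."""
        if version not in self._versions:
            raise KeyError(version)
        previous, self._production = self._production, version
        return previous

    def production(self):
        return self._production

registry = ModelRegistry()
v1 = registry.register("churn-model", {"lr": 0.1})
v2 = registry.register("churn-model", {"lr": 0.01})
registry.promote(v1)
old = registry.promote(v2)  # old == v1, kept around in case the new model misbehaves
```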

Posted 2 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Description:
Developing models: Building and improving mathematical models for trading, forecasting, or data processing.
Conducting research: Analysing large datasets to identify trends and insights, and conducting research on financial markets and economic conditions.
Coding skills: Strong coding skills in Python and other languages.
Using unconventional data: Using unconventional data sources to drive innovation.
Translating algorithms: Translating mathematical models into algorithms.
Managing risk: Building mathematical models to help manage the risk of a product or strategy.
Predictive modelling: Building predictive models from historical data using Monte Carlo methods.
Pseudo code: Ability to write pseudocode and Python code to test a mathematical model.
Systems: Good understanding of trading systems and the trade-execution process to limit latency and slippage.
Market understanding: Good comprehension of the financial and capital markets.
Other skills: Oversee the development and implementation of investment strategies using mathematical and statistical tools. Good knowledge of AI, ML, and LLMs to use advanced technologies for efficiency and productivity. Proven skills in applying data science, maths, or statistics to the development or improvement of investment strategies.
Compliance and risk management: Ensure the firm’s practices comply with SEBI regulations and other relevant legal requirements. Identify mathematical and statistical models for risk management of strategies and products. Implement and monitor internal controls to manage and mitigate risks effectively.
Qualifications:
Bachelor’s degree in Maths required; Master’s preferred, or pursuing a PhD in Maths
Relevant experience in mathematics, statistics, computer science, or finance preferred
Minimum of 3-5 years of experience in mathematical modelling and data sciences
Proven ability to develop and execute mathematical models in live market strategies
High ethical standards and a commitment to acting in clients' best interests.
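A Monte Carlo predictive model of the kind this description mentions can be sketched in a few lines: simulate geometric-Brownian-motion terminal prices, then read a risk figure off the empirical distribution. The parameters below are purely illustrative, not a real strategy.

```python
import math
import random

def simulate_terminal_prices(s0, mu, sigma, t, n_paths, seed=42):
    """Monte Carlo draws of a geometric-Brownian-motion terminal price:
    S_T = S_0 * exp((mu - sigma^2 / 2) * t + sigma * sqrt(t) * Z), Z ~ N(0, 1)."""
    rng = random.Random(seed)  # fixed seed so simulations are reproducible
    drift = (mu - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    return [s0 * math.exp(drift + vol * rng.gauss(0, 1)) for _ in range(n_paths)]

def value_at_risk(prices, s0, level=0.95):
    """Empirical VaR: the loss that is not exceeded with probability `level`."""
    losses = sorted(s0 - p for p in prices)
    return losses[int(level * len(losses))]

# Illustrative parameters: spot 100, 5% drift, 20% vol, 1-year horizon.
paths = simulate_terminal_prices(s0=100.0, mu=0.05, sigma=0.2, t=1.0, n_paths=10_000)
var95 = value_at_risk(paths, s0=100.0, level=0.95)
```

The same skeleton extends to path-dependent payoffs or portfolio P&L by simulating full paths instead of terminal prices only.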

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Greater Bengaluru Area

On-site


Job Title: Senior Data Scientist (SDS 2)
Experience: 4+ years
Location: Bengaluru (Hybrid)
Company Overview:
Akaike Technologies is a dynamic and innovative AI-driven company dedicated to building impactful solutions across various domains. Our mission is to empower businesses by harnessing the power of data and AI to drive growth, efficiency, and value. We foster a culture of collaboration, creativity, and continuous learning, where every team member is encouraged to take initiative and contribute to groundbreaking projects. We value diversity, integrity, and a strong commitment to excellence in all our endeavors.
Job Description:
We are seeking an experienced and highly skilled Senior Data Scientist to join our team in Bengaluru. This role focuses on driving innovative solutions using cutting-edge classical machine learning, deep learning, and Generative AI. The ideal candidate will possess a blend of deep technical expertise, strong business acumen, effective communication skills, and a sense of ownership. During the interview, we look for a proven track record in designing, developing, and deploying scalable ML/DL solutions in a fast-paced, collaborative environment.
Key Responsibilities:
ML/DL Solution Development & Deployment:
Design, implement, and deploy end-to-end ML/DL and GenAI solutions, writing modular, scalable, and production-ready code.
Develop and implement scalable deployment pipelines using Docker and AWS services (ECR, Lambda, Step Functions).
Design and implement custom models and loss functions to address data nuances and specific labeling challenges.
Model across different marketing scenarios of a product life cycle (targeting, segmenting, messaging, content recommendation, budget optimisation, customer scoring, risk, and churn) and data limitations (sparse or incomplete labels, single-class learning).
Large-Scale Data Handling & Processing:
Efficiently handle and model billions of data points using multi-cluster data processing frameworks (e.g., Spark SQL, PySpark).
Generative AI & Large Language Models (LLMs):
Leverage an in-depth understanding of transformer architectures and the principles of Large and Small Language Models.
Practical experience in building LLM-ready data management layers for large-scale structured and unstructured data.
Apply a foundational understanding of LLM agents, multi-agent systems (e.g., Agent-Critique, ReAct, agent collaboration), advanced prompting techniques, LLM evaluation methodologies, confidence grading, and human-in-the-loop systems.
Experimentation, Analysis & System Design:
Design and conduct experiments to test hypotheses and perform Exploratory Data Analysis (EDA) aligned with business requirements.
Apply system design concepts and engineering principles to create low-latency solutions capable of serving simultaneous users in real time.
Collaboration, Communication & Mentorship:
Create clear solution outlines and effectively communicate complex technical concepts to stakeholders and team members.
Mentor junior team members, providing guidance and bridging the gap between business problems and data science solutions.
Work closely with cross-functional teams and clients to deliver impactful solutions.
Prototyping & Impact Measurement:
Comfortable with rapid prototyping and meeting high productivity expectations in a fast-paced development environment.
Set up measurement pipelines to study the impact of solutions in different market scenarios.
Must-Have Skills:
Core Machine Learning & Deep Learning:
In-depth knowledge of Artificial Neural Networks (ANNs); 1D, 2D, and 3D Convolutional Neural Networks (ConvNets); LSTMs; and Transformer models.
Expertise in modeling techniques such as promo mix modeling (MMM), PU learning, Customer Lifetime Value (CLV), multi-dimensional time series modeling, and demand forecasting in supply chain and simulation.
Strong proficiency in PU learning, single-class learning, and representation learning, alongside traditional machine learning approaches.
Advanced understanding and application of model explainability techniques.
Data Analysis & Processing:
Proficiency in Python and its data science ecosystem, including libraries like NumPy, Pandas, Dask, and PySpark for large-scale data processing and analysis.
Ability to perform effective feature engineering by understanding business objectives.
ML/DL Frameworks & Tools:
Hands-on experience with ML/DL libraries such as scikit-learn, TensorFlow/Keras, and PyTorch for developing and deploying models.
Natural Language Processing (NLP):
Expertise in traditional and advanced NLP techniques, including Transformers (BERT, T5, GPT), Word2Vec, Named Entity Recognition (NER), topic modeling, and contrastive learning.
Cloud & MLOps:
Experience with the AWS ML stack or equivalent cloud platforms.
Proficiency in developing scalable deployment pipelines using Docker and AWS services (ECR, Lambda, Step Functions).
Problem Solving & Research:
Strong logical and reasoning skills.
Good understanding of the Python ecosystem and experience implementing research papers.
Collaboration & Prototyping:
Ability to thrive in a fast-paced development and rapid-prototyping environment.
Relevant to Have:
Expertise in claims data and a background in the pharmaceutical industry.
Awareness of best software design practices.
Understanding of backend frameworks like Flask.
Knowledge of Recommender Systems, representation learning, and PU learning.
Benefits and Perks:
Competitive ESOP grants.
Opportunity to work with Fortune 500 companies and world-class teams.
Support for publishing papers and attending academic/industry conferences.
Access to networking events, conferences, and seminars.
Visibility across all functions at Akaike, including sales, pre-sales, lead generation, marketing, and hiring.
Appendix: Technical Skills (Must-Haves)
A deep understanding of the following:
Data Processing:
Wrangling: Some understanding of querying databases (MySQL, PostgresDB, etc.); very fluent in the use of libraries such as Pandas, NumPy, Statsmodels, etc.
Visualization: Exposure to Matplotlib, Plotly, Altair, etc.
Machine Learning Exposure:
Machine learning fundamentals, e.g., PCA, correlations, statistical tests, etc.
Time series models, e.g., ARIMA, Prophet, etc.
Tree-based models, e.g., Random Forest, XGBoost, etc.
Deep learning models, e.g., understanding and experience of ConvNets, ResNets, UNets, etc.
GenAI-Based Models: Experience utilizing large-scale language models such as GPT-4 or other open-source alternatives (such as Mistral, Llama, Claude) through prompt engineering and custom finetuning.
Code Versioning Systems: GitHub, Git
If you're interested in the job opening, please apply through the Keka link provided here: https://akaike.keka.com/careers/jobdetails/26215
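One of the modeling techniques listed, Customer Lifetime Value (CLV), has a compact closed form worth knowing: with per-period margin m, retention rate r, and discount rate d, the infinite-horizon CLV is m * r / (1 + d - r). A sketch with hypothetical numbers:

```python
def customer_lifetime_value(margin_per_period, retention, discount):
    """Infinite-horizon CLV: the geometric sum of retained, discounted margins,
    m * sum_{t>=1} (r / (1 + d))^t  =  m * r / (1 + d - r)."""
    assert 0 <= retention < 1 and discount > 0
    return margin_per_period * retention / (1 + discount - retention)

# Hypothetical customer: 100 margin per period, 80% retention, 10% discount rate.
clv = customer_lifetime_value(margin_per_period=100.0, retention=0.8, discount=0.1)
```

Real CLV models generalize this by predicting per-customer retention and margin rather than assuming constants.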

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


At eBay, we're more than a global ecommerce leader — we're changing the way the world shops and sells. Our platform empowers millions of buyers and sellers in more than 190 markets around the world. We're committed to pushing boundaries and leaving our mark as we reinvent the future of ecommerce for enthusiasts. Our customers are our compass, authenticity thrives, bold ideas are welcome, and everyone can bring their unique selves to work — every day. We're in this together, sustaining the future of our customers, our company, and our planet. Join a team of passionate thinkers, innovators, and dreamers — and help us connect people and build communities to create economic opportunity for all.
Do you love Big Data? Deploying Machine Learning models? Challenging optimization problems? Knowledgeable, collaborative co-workers? Come work at eBay and help us redefine global, online commerce!
Who Are We?
The Product Knowledge team is at the epicenter of eBay's Tech-driven, Customer-centric overhaul. Our team is entrusted with creating and using eBay's Product Knowledge - a vast Big Data system which is built up of listings, transactions, products, knowledge graphs, and more. Our team has a mix of highly proficient people from multiple fields such as Machine Learning, Data Science, Software Engineering, Operations, and Big Data Analytics. We have a strong culture of collaboration, and plenty of opportunity to learn, make an impact, and grow!
What Will You Do
We are looking for exceptional engineering tech leaders and architects, who take pride in creating simple solutions to apparently-complex problems. Our engineering tasks typically involve at least one of the following:
Architect scalable data pipelines processing billions of items, integrating ML models.
Design low-latency search/IR services on datasets of hundreds of millions of items.
Craft API designs and drive integration between data layers and customer-facing applications.
Design and run A/B tests in production to measure new functionality impact.
If you love a good challenge, and are good at handling complexity - we'd love to hear from you!
eBay is an amazing company to work for. Being on the team, you can expect to benefit from:
A competitive salary - including stock grants and a yearly bonus
A healthy work culture that promotes business impact and at the same time highly values your personal well-being
Being part of a force for good in this world - eBay truly cares about its employees, its customers, and the world's population, and takes every opportunity to make this clearly apparent
Job Responsibilities
Architect and drive strategic evolution of data pipelines, ML frameworks, and service infrastructure.
Define and lead performance optimization strategies for critical systems.
Collaborate on project scope and define long-term architectural vision.
Develop and champion technical strategies aligned with business objectives.
Lead cross-functional architectural initiatives, ensuring coherent solutions.
Establish and champion organization-wide knowledge sharing and best practices.
Minimum Qualifications
Passion and commitment for technical excellence
B.Sc. or M.Sc. in Computer Science or an equivalent professional experience
8+ years of software design, architecture, and development experience, tackling complex problems in backend services and/or data pipelines
Solid foundation in Data Structures, Algorithms, Object-Oriented Programming, and Software Design
Architectural expertise in production-grade systems using Java, Python/Scala.
Strategic design and operational leadership of large-scale Big Data processing pipelines (Hadoop, Spark).
Proven ability to resolve complex architectural challenges in production software systems.
Executive-level communication and collaboration skills for influencing technical direction.
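Measuring "new functionality impact" in an A/B test typically reduces to comparing two conversion rates with a two-proportion z-test. A self-contained sketch with hypothetical traffic numbers (real experimentation platforms add sequential testing, variance reduction, and guardrail metrics on top of this):

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """z-statistic for comparing two conversion rates, using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: 10% control conversion vs 12% treatment conversion.
z = two_proportion_ztest(conv_a=1000, n_a=10_000, conv_b=1200, n_b=10_000)
significant = abs(z) > 1.96  # ~95% two-sided significance threshold
```

With these made-up numbers the lift is large relative to the standard error, so the test flags it as significant.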
Please see the Talent Privacy Notice for information regarding how eBay handles your personal data collected when you use the eBay Careers website or apply for a job with eBay. eBay is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, veteran status, and disability, or other legally protected status. If you have a need that requires accommodation, please contact us at talent@ebay.com. We will make every effort to respond to your request for accommodation as soon as possible. View our accessibility statement to learn more about eBay's commitment to ensuring digital accessibility for people with disabilities. The eBay Jobs website uses cookies to enhance your experience. By continuing to browse the site, you agree to our use of cookies. Visit our Privacy Center for more information.

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


At eBay, we're more than a global ecommerce leader — we're changing the way the world shops and sells. Our platform empowers millions of buyers and sellers in more than 190 markets around the world. We're committed to pushing boundaries and leaving our mark as we reinvent the future of ecommerce for enthusiasts. Our customers are our compass, authenticity thrives, bold ideas are welcome, and everyone can bring their unique selves to work — every day. We're in this together, sustaining the future of our customers, our company, and our planet. Join a team of passionate thinkers, innovators, and dreamers — and help us connect people and build communities to create economic opportunity for all.
Machine Learning Engineer Tech Leader (T26), Product Knowledge
Do you love Big Data? Deploying Machine Learning models? Challenging optimization problems? Knowledgeable, collaborative co-workers? Come work at eBay and help us redefine global, online commerce!
Who Are We?
The Product Knowledge team is at the epicenter of eBay's Tech-driven, Customer-centric overhaul. Our team is entrusted with creating and using eBay's Product Knowledge - a vast Big Data system which is built up of listings, transactions, products, knowledge graphs, and more. Our team has a mix of highly proficient people from multiple fields such as Machine Learning, Data Science, Software Engineering, Operations, and Big Data Analytics. We have a strong culture of collaboration, and plenty of opportunity to learn, make an impact, and grow!
What Will You Do
We are looking for exceptional engineers who take pride in creating simple solutions to apparently complex problems.
Our Engineering Tasks Typically Involve At Least One Of The Following Building a pipeline that processes up to billions of items, frequently employing ML models on these datasets Creating services that provide Search or other Information Retrieval capabilities at low latency on datasets of hundreds of millions of items Crafting sound API design and driving integration between our Data layers and Customer-facing applications and components Designing and running A/B tests in Production experiences in order to vet and measure the impact of any new or improved functionality If you love a good challenge, and are good at handling complexity - we’d love to hear from you! eBay is an amazing company to work for. Being on the team, you can expect to benefit from: A competitive salary - including stock grants and a yearly bonus A healthy work culture that promotes business impact and at the same time highly values your personal well-being Being part of a force for good in this world - eBay truly cares about its employees, its customers, and the world’s population, and takes every opportunity to make this clearly apparent Job Responsibilities Design, deliver, and maintain significant features in data pipelines, ML processing, and / or service infrastructure Optimize software performance to achieve the required throughput and / or latency Work with your manager, peers, and Product Managers to scope projects and features Come up with a sound technical strategy, taking into consideration the project goals, timelines, and expected impact Take point on some cross-team efforts, taking ownership of a business problem and ensuring the different teams are in sync and working towards a coherent technical solution Take active part in knowledge sharing across the organization - both teaching and learning from others Minimum Qualifications Passion and commitment for technical excellence B.Sc. or M.Sc. 
in Computer Science or an equivalent professional experience 7+ years of software design and development experience, tackling non-trivial problems in backend services and / or data pipelines A solid foundation in Data Structures, Algorithms, Object-Oriented Programming, Software Design/architecture, and core Statistics knowledge Experience in production-grade coding in Java, and Python/Scala Experience in the close examination of data, computation of statistic, and data insights Experience in designing and operating Big Data processing pipelines, such as: Hadoop and Spark Excellent verbal and written communication and collaboration skills Please see the Talent Privacy Notice for information regarding how eBay handles your personal data collected when you use the eBay Careers website or apply for a job with eBay. eBay is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, sex, sexual orientation, gender identity, veteran status, and disability, or other legally protected status. If you have a need that requires accommodation, please contact us at talent@ebay.com. We will make every effort to respond to your request for accommodation as soon as possible. View our accessibility statement to learn more about eBay's commitment to ensuring digital accessibility for people with disabilities. The eBay Jobs website uses cookies to enhance your experience. By continuing to browse the site, you agree to our use of cookies. Visit our Privacy Center for more information. Show more Show less

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Do you love Big Data? Deploying Machine Learning models? Challenging optimization problems? Knowledgeable, collaborative co-workers? Come work at eBay and help us redefine global, online commerce!

Who Are We?

The Product Knowledge team is at the epicenter of eBay’s Tech-driven, Customer-centric overhaul. Our team is entrusted with creating and using eBay’s Product Knowledge - a vast Big Data system built up of listings, transactions, products, knowledge graphs, and more. Our team has a mix of highly proficient people from multiple fields such as Machine Learning, Data Science, Software Engineering, Operations, and Big Data Analytics. We have a strong culture of collaboration, and plenty of opportunity to learn, make an impact, and grow!

What Will You Do

We are looking for exceptional engineers who take pride in creating simple solutions to apparently complex problems. Our engineering tasks typically involve at least one of the following:

Building a pipeline that processes up to billions of items, frequently employing ML models on these datasets
Creating services that provide Search or other Information Retrieval capabilities at low latency on datasets of hundreds of millions of items
Crafting sound API design and driving integration between our Data layers and Customer-facing applications and components
Designing and running A/B tests in Production experiences in order to vet and measure the impact of any new or improved functionality

If you love a good challenge and are good at handling complexity - we’d love to hear from you!

eBay is an amazing company to work for. Being on the team, you can expect to benefit from:

A competitive salary - including stock grants and a yearly bonus
A healthy work culture that promotes business impact while highly valuing your personal well-being
Being part of a force for good in this world - eBay truly cares about its employees, its customers, and the world’s population, and takes every opportunity to make this clearly apparent

Job Responsibilities

Design, deliver, and maintain significant features in data pipelines, ML processing, and/or service infrastructure
Optimize software performance to achieve the required throughput and/or latency
Work with your manager, peers, and Product Managers to scope projects and features
Come up with a sound technical strategy, taking into consideration the project goals, timelines, and expected impact
Take point on some cross-team efforts, taking ownership of a business problem and ensuring the different teams are in sync and working towards a coherent technical solution
Take an active part in knowledge sharing across the organization - both teaching and learning from others

Minimum Qualifications

Passion and commitment to technical excellence
B.Sc. or M.Sc. in Computer Science, or equivalent professional experience
4+ years of software design and development experience, tackling non-trivial problems in backend services and/or data pipelines
A solid foundation in Data Structures, Algorithms, Object-Oriented Programming, Software Design, and core Statistics
Experience in production-grade coding in Java and Python/Scala
Experience in the close examination of data and computation of statistics
Experience in designing and operating Big Data processing pipelines, such as Hadoop and Spark
Excellent verbal and written communication and collaboration skills

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies