7.0 years
0 Lacs
Bengaluru
On-site
Teamwork makes the stream work. Roku is changing how the world watches TV. Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.

About the team
At Roku, we believe great search is the front door to great entertainment. Our Search Platform sits at the heart of the Roku experience, powering voice, text, and visual discovery across 100M+ active accounts and every Roku-powered device worldwide. We own the entire stack, from ingesting and enriching a multi-million-title knowledge graph to low-latency retrieval services and large-scale machine-learning systems that personalize results in real time. Our work doesn't just help users find shows; it drives core product surfaces (home-screen rows, browse hubs, Roku Voice, mobile app search) and fuels partner monetization. If you enjoy turning cutting-edge research into products used billions of times a day, you'll thrive here.

About the role
Roku's footprint has more than quadrupled in the past five years, and user expectations have leaped just as fast: think LLM-based query understanding, vector-DB retrieval, on-device models, and multimodal search (voice, text, image). We're now rebuilding our relevance stack for the next decade, blending classic IR with generative-AI techniques. You will be a technical leader spearheading that transformation.
What you'll be doing
* Apply state-of-the-art ML to search, using techniques from deep learning, bandits, transformers, LLMs, causal inference, and optimization, to make our users more delighted and engaged on the platform
* Run online A/B tests and analyze them against critical business KPIs
* Collaborate with US engineering teams as well as cross-functional teams to translate business requirements into technical specifications
* Nurture our ML ecosystem so it withstands scale, developer velocity, and future business shifts
* Provide technical leadership to drive the technical and ML roadmap for search ranking and monetization
* Help recruit new engineers: interview, train, and mentor new team members

We're excited if you have
* 7+ years of experience (or a PhD with 5 years of experience) applying machine learning to concrete problems at large scale in domains like recommendation, search, or ads
* Strong CS fundamentals; able to convert ideas to code with ease
* A good understanding of machine learning fundamentals such as classification, deep neural nets, and sequence-based models; familiarity with the modern NLP stack and multi-modal representation learning is a plus
* Experience with big data systems (Spark, S3, and Airflow) and programming skill in Java, Scala, or Python
* A good understanding of system architecture
* Experience with big data technologies, streaming architectures, data pipelines, etc.
* An MS in Computer Science, Statistics, or a related field; a PhD in CS or a related field is preferred

Benefits
Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources. Local benefits include statutory and voluntary benefits which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension).
Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter.

The Roku Culture
Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a small number of very talented people can do more, at lower cost, than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast, and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV. We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet. By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.
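The ranking work above leans on bandit algorithms and online A/B testing. As a purely illustrative sketch (not Roku's actual system; the arm names and click-through rates below are invented), an epsilon-greedy bandit trades off exploring candidate rankers against exploiting the best one observed so far:

```python
import random

def run_bandit(true_ctr, steps=10000, epsilon=0.1, seed=42):
    """Epsilon-greedy bandit over a set of 'arms' (e.g., candidate rankers).

    true_ctr maps arm name -> simulated click-through rate. With probability
    epsilon we explore a random arm; otherwise we exploit the arm with the
    best empirical CTR so far.
    """
    rng = random.Random(seed)
    arms = list(true_ctr)
    clicks = {a: 0 for a in arms}   # observed successes per arm
    pulls = {a: 0 for a in arms}    # times each arm was shown

    def empirical_ctr(a):
        return clicks[a] / pulls[a] if pulls[a] else 0.0

    for _ in range(steps):
        if rng.random() < epsilon:                  # explore
            arm = rng.choice(arms)
        else:                                       # exploit
            arm = max(arms, key=empirical_ctr)
        pulls[arm] += 1
        if rng.random() < true_ctr[arm]:            # simulated user click
            clicks[arm] += 1
    return max(arms, key=empirical_ctr), pulls

best, pulls = run_bandit({"ranker_a": 0.05, "ranker_b": 0.08, "ranker_c": 0.03})
```

In production these decisions would feed an A/B framework with guardrail KPIs; the sketch only shows the explore/exploit loop itself.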
Posted 3 weeks ago
12.0 years
0 Lacs
Bengaluru
Remote
Company Overview
Docusign brings agreements to life. Over 1.5 million customers and more than a billion people in over 180 countries use Docusign solutions to accelerate the process of doing business and simplify people's lives. With intelligent agreement management, Docusign unleashes business-critical data that is trapped inside of documents. Until now, these were disconnected from business systems of record, costing businesses time, money, and opportunity. Using Docusign's Intelligent Agreement Management platform, companies can create, commit to, and manage agreements with solutions created by the #1 company in e-signature and contract lifecycle management (CLM).

What You'll Do
The Docusign AI team is seeking an experienced and driven engineering leader to build and enhance its machine learning platform and infrastructure. This role is crucial for developing industry-leading Document AI products that leverage extensive text and image data, along with advanced computer vision and natural language models. You'll support the entire machine learning lifecycle, from research to scalable deployment, empowering a global team of machine learning engineers and data scientists. Join us in pioneering the Smart Agreement Cloud, simplifying contract processes and providing automated expertise to customers worldwide. This position is part of the Core AI platform team, collaborating with colleagues across the Bay Area, Seattle, and Europe. It is a people-manager role reporting to the Director of Machine Learning Engineering.
Responsibilities
* Be the engineering leader responsible for executing on our product roadmap using agile practices, and champion the culture, processes, and tools required to maintain a frictionless, high-quality development environment
* Design distributed systems at scale with enterprise-grade security and reliability
* Drive testing, releases, and monitoring of machine-learning-based products
* Enable team productivity through the right processes, automation, and reviews to enforce high-quality, maintainable code across the platform
* Collaborate with Applied Science and Product teams to establish requirements and timelines for successful product delivery
* Work collaboratively across different time zones and geographies
* Develop highly scalable and reliable machine learning infrastructure that supports efficient model inference, training, and data telemetry collection
* Build vital tools and infrastructure to monitor and understand the performance of complex systems

Job Designation
Hybrid: Employee divides their time between in-office and remote work. Access to an office location is required. (Frequency: minimum 2 days per week; may vary by team, but there will be a weekly in-office expectation.) Positions at Docusign are assigned a job designation of either In Office, Hybrid, or Remote and are specific to the role/job. Preferred job designations are not guaranteed when changing positions within Docusign. Docusign reserves the right to change a position's job designation depending on business needs and as permitted by local law.
What You Bring
Basic
* 12+ years of combined software development and management experience across at least 3 languages or paradigms and frameworks, or equivalent experience
* 8+ years of hands-on software development or equivalent experience
* Experience designing, building, and consuming RESTful and gRPC-based web services
* Experience with CI/CD build pipelines, integration testing, unit testing, and test-driven development
* Familiarity with cloud deployment technologies, such as Kubernetes or Docker containers
* Experience with highly scalable, high-volume distributed systems and services with top-tier resiliency, availability, and performance

Preferred
* Experience building machine learning products, data pipelines, or machine learning training and deployment systems
* Experience with deployment and monitoring of machine learning models
* Ability and desire to move across technology stacks fluently and easily
* Strong people-management skills, including coaching, mentoring, and leading technical teams

Life At Docusign
Working here
Docusign is committed to building trust and making the world more agreeable for our employees, customers, and the communities in which we live and work. You can count on us to listen, be honest, and try our best to do what's right, every day. At Docusign, everything is equal. We each have a responsibility to ensure every team member has an equal opportunity to succeed, to be heard, to exchange ideas openly, to build lasting relationships, and to do the work of their life. Best of all, you will be able to feel deep pride in the work you do, because your contribution helps us make the world better than we found it. And for that, you'll be loved by us, our customers, and the world in which we live.

Accommodation
Docusign is committed to providing reasonable accommodations for qualified individuals with disabilities in our job application procedures.
If you need such an accommodation, or a religious accommodation, during the application process, please contact us at accommodations@docusign.com. If you experience any issues, concerns, or technical difficulties during the application process, please get in touch with our Talent organization at taops@docusign.com for assistance.

Our global benefits
* Paid time off: Take time to unwind with earned days off, plus paid company holidays based on your region.
* Paid parental leave: Take up to six months off with your child after birth, adoption, or foster care placement.
* Full health benefits: Options for 100% employer-paid health plans from day one of employment.
* Retirement plans: Select retirement and pension programs with potential for employer contributions.
* Learning & development: Grow your career with coaching, online courses, and education reimbursements.
* Compassionate care leave: Paid time off following the loss of a loved one and other life-changing events.
Posted 3 weeks ago
3.0 years
0 Lacs
Bengaluru
On-site
Imagine what you could do here. At Apple, new ideas have a way of becoming phenomenal products, services, and customer experiences very quickly. Every single day, people do amazing things at Apple. Do you want to impact the future of manufacturing here at Apple through cutting-edge ML techniques? This position involves a wide variety of skills and innovation, and is a rare opportunity to work on groundbreaking new applications of machine learning, from research through implementation. Ultimately, your work would have a huge impact on billions of users across the globe. You can help inspire change by using your skills to influence the supply chain of globally recognized products. The goal of Apple's Manufacturing & Operations team is to take a vision of a product and turn it into a reality. Through the use of statistics, the scientific process, and machine learning, the team recommends and implements solutions to the most challenging problems. We're looking for experienced machine learning professionals to help us revolutionize how we manufacture Apple's amazing products. Put your experience to work in this highly visible role.

Description
The Operations Advanced Analytics team is looking for creative and motivated hands-on individual contributors who thrive in a dynamic environment and enjoy working with multi-functional teams. As a member of our team, you will apply machine-learning algorithms to problems in classification, regression, clustering, optimization, and related areas to impact and optimize Apple's supply chain and manufacturing processes. As part of this role, you will work with the team to build end-to-end machine learning systems and modules and deploy the models to our factories. You'll collaborate with Software Engineers, Machine Learning Engineers, Operations, and Hardware Engineering teams across the company.
Minimum Qualifications
* 3+ years of experience in machine learning algorithms, software engineering, and data mining models, with an emphasis on large language models (LLMs) or large multimodal models (LMMs)
* Master's degree in Machine Learning, Artificial Intelligence, Computer Science, Statistics, Operations Research, Physics, Mechanical Engineering, Electrical Engineering, or a related field

Preferred Qualifications
* Proven experience in LLM and LMM development, fine-tuning, and application building; experience with agents and agentic workflows is a major plus
* Experience with modern LLM serving and inference frameworks, including vLLM, for efficient model inference and serving
* Hands-on experience with LangChain and LlamaIndex, enabling RAG applications and LLM orchestration
* Strong software development skills with proficiency in Python; experienced user of ML and data science libraries such as PyTorch, TensorFlow, Hugging Face Transformers, and scikit-learn
* Familiarity with distributed computing, cloud infrastructure, and orchestration tools such as Kubernetes, Apache Airflow (DAGs), Docker, Conductor, and Ray for LLM training and inference at scale is a plus
* Deep understanding of transformer-based architectures (e.g., BERT, GPT, LLaMA) and their optimization for low-latency inference
* Ability to present the results of analyses in a clear and impactful manner, breaking down complex ML/LLM concepts for non-technical audiences
* Experience applying ML techniques to manufacturing, testing, or hardware optimization is a major plus
* Proven experience leading and mentoring teams is a plus
Posted 3 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
As a Python Developer Intern at Arcitech AI, you will play a crucial role in our advancements in software development, AI, and integrative solutions. This entry-level position offers the opportunity to work on cutting-edge projects and contribute to the growth of the company. You will be challenged to develop Python applications, collaborate with a dynamic team, and optimize code performance, all while gaining valuable experience in the industry.

Responsibilities
* Assist in designing, developing, and maintaining Python applications focused on backend and AI/ML components under senior engineer guidance.
* Help build and consume RESTful or GraphQL APIs integrating AI models and backend services, following established best practices.
* Containerize microservices (including AI workloads) using Docker and support Kubernetes deployment and management tasks.
* Implement and monitor background jobs with Celery (e.g., data processing, model training/inference), including retries and basic alerting.
* Integrate third-party services and AI tools via webhooks and APIs (e.g., Stripe, Razorpay, external AI providers) in collaboration with the team.
* Set up simple WebSocket consumers using Django Channels for real-time AI-driven and backend features.
* Aid in configuring AWS cloud infrastructure (EC2, S3, RDS) as code; assist with backups and monitoring via CloudWatch, and support AI workload deployments.
* Write unit and integration tests using pytest or unittest to maintain ≥ 80% coverage across backend and AI codebases.
* Follow Git branching strategies and contribute to CI/CD pipeline maintenance and automation for backend and AI services.
* Participate actively in daily tech talks, knowledge-sharing sessions, code reviews, and team collaboration focused on backend and AI development.
* Assist with implementing AI agent workflows and document retrieval pipelines using LangChain and LlamaIndex (GPT Index) frameworks.
* Maintain clear and up-to-date documentation of code, experiments, and processes.
* Participate in Agile practices including sprint planning, stand-ups, and retrospectives.
* Demonstrate basic debugging and troubleshooting skills using Python tools and log analysis.
* Handle simple data manipulation tasks involving CSV, JSON, or similar formats.
* Follow secure coding best practices and be mindful of data privacy and compliance.
* Exhibit strong communication skills, a proactive learning mindset, and openness to feedback.

Required Qualifications
* Currently pursuing a Bachelor's degree in Computer Science, Engineering, Data Science, or a related scientific field.
* Solid foundation in Python programming with familiarity in common libraries (NumPy, pandas, etc.).
* Basic understanding of RESTful/GraphQL API design and consumption.
* Exposure to Docker and at least one cloud platform (AWS preferred).
* Experience with, or willingness to learn, test-driven development using pytest or unittest.
* Comfortable with Git workflows and CI/CD tools.
* Strong problem-solving aptitude and effective communication skills.

Preferred (But Not Required)
* Hands-on experience or coursework with AI/ML frameworks such as TensorFlow, PyTorch, or Keras.
* Prior exposure to the Django web framework and real-time WebSocket development (Django Channels).
* Familiarity with LangChain and LlamaIndex (GPT Index) for building AI agents and retrieval-augmented generation workflows.
* Understanding of machine learning fundamentals (neural networks, computer vision, NLP).
* Background in data analysis, statistics, or applied mathematics.
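The Celery bullet above (background jobs with retries) boils down to a retry-with-backoff loop. A minimal plain-Python sketch of that control flow, with no Celery dependency (the `flaky` task and its failure pattern are invented for the example; in Celery itself this is what `autoretry_for`/`retry_backoff` give you):

```python
import time

def with_retries(fn, attempts=4, base_delay=0.01, sleep=time.sleep):
    """Run fn(), retrying on failure with exponential backoff.

    Delays double each attempt: base_delay, 2*base_delay, 4*base_delay, ...
    The final failure is re-raised so callers (or an alerting hook) see it.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                       # out of retries: surface the error
            sleep(base_delay * (2 ** attempt))

# A task that fails twice, then succeeds -- simulating a transient outage.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = with_retries(flaky, sleep=lambda s: None)  # skip real sleeping in the demo
```

The injectable `sleep` keeps the demo instant and makes the backoff schedule unit-testable.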
Posted 3 weeks ago
7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the Company:
We are one of India's premier integrated political consulting firms, specializing in building data-driven 360-degree election campaigns. We help our clients with strategic advice and implementation, bringing together data-backed insights and in-depth ground intelligence into a holistic electoral campaign. We are passionate about our democracy and the politics that shape the world around us. We draw on some of the sharpest minds from distinguished institutions and diverse professional backgrounds to help us achieve our goal. The team brings 7 years of experience in building electoral strategies that spark conversations, effect change, and help shape electoral and legislative ecosystems in our country.

Job Summary:
We are looking for a motivated and detail-oriented Statistics Intern to join our team. This internship offers an excellent opportunity to apply academic knowledge of statistics and data analysis in a real-world setting. The intern will assist in data cleaning, statistical modeling, visualization, and research reporting across various projects.

Key Responsibilities:
1. Assist in collecting, organizing, and cleaning datasets for analysis
2. Conduct basic statistical analyses (e.g., descriptive stats, cross-tabulations, hypothesis tests)
3. Support the development of charts, graphs, and summary reports
4. Help build and/or validate statistical models under supervision (e.g., regression, classification)
5. Collaborate with team members to interpret results and draw meaningful insights
6. Document methods and maintain organized records of code and findings

Required/Minimum Qualifications:
1. Currently pursuing a Master's, or recently completed a Bachelor's degree, in Statistics, Mathematics, Economics, Data Science, or a related field
2. Basic understanding of statistical concepts (probability and statistics, Bayesian inference, hypothesis testing, etc.) and data structures; query-writing skills and data automation
3. Familiarity with statistical software such as R, SPSS, Stata, etc.
4. Demonstrated ability to code using numerical/statistical/ML libraries (NumPy, statsmodels, pandas, etc.); Python is a must
5. Ability to work with datasets, conduct exploratory data analysis, and interpret output
6. Strong attention to detail and problem-solving abilities
7. Good written and verbal communication skills

Good-to-have Skills:
1. Experience with data visualization tools or packages (e.g., ggplot2, matplotlib, Tableau)
2. Knowledge of survey data, experimental design, or basic machine learning techniques such as KNN and NLMs
3. Ability to write clean, reproducible code (e.g., using data automation tools in Excel such as VBA, or Python scripts)

Location: BLR - 4th Floor, VK Kalyani Commercial Complex, Opp. to BDA, Sankey Road, Bangalore, 560021
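The hypothesis-testing work listed above can be made concrete. Below is a sketch of Welch's two-sample t statistic in plain Python; the survey numbers are invented for illustration, and in practice you would reach for scipy.stats.ttest_ind(..., equal_var=False) rather than hand-rolling it:

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t statistic and degrees of freedom for two independent samples.

    Unlike Student's t, Welch's test does not assume equal variances,
    which is the safer default for survey data.
    """
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    # Unbiased sample variances.
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    se = math.sqrt(va / na + vb / nb)
    t = (ma - mb) / se
    # Welch-Satterthwaite approximation for degrees of freedom.
    df = (va / na + vb / nb) ** 2 / (
        (va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1)
    )
    return t, df

# Invented example: mean responses from two constituencies.
t, df = welch_t([5.1, 4.8, 5.6, 5.0, 5.3], [4.2, 4.5, 4.1, 4.6, 4.3])
```

The t value is then compared against the t distribution with `df` degrees of freedom to get a p-value.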
Posted 3 weeks ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
Remote
We're hiring for a market-leading edge computing startup that's building AI infrastructure for remote and low-connectivity environments. Their mission is to power real-time, on-premise intelligence across industries like mining, agriculture, energy, and defense. They're looking for experienced AI engineers who enjoy tackling real-world ML challenges at the edge.

Location: Full-time, Onsite – Thiruvananthapuram, Kerala

You'll be a good fit if:
* You enjoy solving practical AI problems in real-time, low-latency settings
* You can deploy ML models end-to-end, from training to production
* You've worked with transformers, computer vision, or autonomous systems
* You thrive in fast-paced, high-impact environments

Key Responsibilities:
* Build, train, and deploy ML models (transformers, CV)
* Containerize and deploy models to production (cloud or edge)
* Collaborate with internal teams and customers
* Build datasets, pipelines, and benchmarking workflows
* Implement transfer learning or online learning methods

Required Skills:
* Strong Python; experience with PyTorch or TensorFlow
* 5+ years in ML or software engineering
* Familiarity with CNNs, transformers, BERT, GANs, etc.
* Understanding of supervised, unsupervised, and transfer learning
* Experience deploying models using containers or microservices

Preferred:
* Experience with real-time ML for robotics, drones, or control systems
* Knowledge of on-device inference or model optimization
* Exposure to Kubernetes or edge deployment setups
* Clear communicator and team player
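The model-optimization line above often means quantization in practice: shrinking weights to 8-bit integers so models fit on edge hardware. A framework-free sketch of symmetric int8 quantization (the weight values are invented; real toolchains such as PyTorch's quantization APIs do this per-tensor or per-channel with calibration):

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats to integers in [-127, 127].

    The scale is chosen so the largest-magnitude weight maps to +/-127;
    every stored value becomes round(w / scale).
    """
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats; error per weight is at most scale/2."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.03, 1.0]          # invented fp32 weights
q, scale = quantize_int8(w)          # ints, 4x smaller than fp32
restored = dequantize(q, scale)      # close to w, within quantization error
```

Storing `q` plus one `scale` per tensor cuts memory 4x versus fp32 and enables integer arithmetic on-device, at the cost of bounded rounding error.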
Posted 3 weeks ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
OUR STORY:
At ContractPodAi, we're pioneering the future of legal with Leah, the operating system for legal. Leah Agentic AI coordinates specialized AI agents across Leah's suite of solutions, including industry-leading Contract Lifecycle Management (CLM), to transform how legal teams work and create value. Leah doesn't just automate tasks; it uncovers hidden opportunities and transforms legal knowledge into business advantage. Our platform breaks down silos between legal, business, and executive teams, helping organizations discover revenue opportunities, minimize risks, and turn legal insights into strategic decisions. We know innovation happens when great people come together to solve business problems. ContractPodAi is a fast-growing team of innovators spread across London, New York, Glasgow, San Francisco, Toronto, Dubai, Sydney, Mumbai, Pune, and beyond. Here, you'll:
• Pioneer the future of legal AI and business transformation
• Make real impact by helping organizations unlock hidden value
• Collaborate with talented colleagues across continents
If you're excited by cutting-edge technology, thrive in a fast-paced environment, and want to help build something revolutionary, we want to hear from you.

THE OPPORTUNITY
We are seeking an experienced AI Engineer to join our growing team at ContractPodAi. In this role, you will design, develop, and deploy intelligent systems that power next-generation features in our contract lifecycle management (CLM) platform. You will work at the intersection of machine learning, software engineering, and agentic AI to create autonomous, goal-driven agents capable of reasoning, learning, and acting in dynamic environments.
This is your opportunity to play a pivotal role in advancing the capabilities of legal tech with powerful agent-based systems built on the latest advancements in large language models, reinforcement learning, and autonomous AI frameworks.

WHAT YOU WILL DO:
* Architect and implement scalable agentic AI systems that autonomously execute complex workflows and reason over legal data.
* Research, prototype, and productionize ML/DL models, especially in natural language processing and understanding (NLP/NLU).
* Build and deploy intelligent legal agents that can interpret documents, make decisions, and collaborate with users or other agents to complete multi-step tasks.
* Utilize modern frameworks and platforms (e.g., LangChain, AutoGen, OpenAI function calling, Semantic Kernel) to build multi-agent workflows.
* Fine-tune and integrate large language models (LLMs) using PEFT, LoRA, and RAG techniques tailored to legal-domain challenges.
* Design and implement robust infrastructure for managing the AI lifecycle, including training, inference, monitoring, and continuous learning.
* Collaborate with legal experts, product managers, and engineering teams to create explainable and trustworthy AI systems.
* Contribute to the development of our AI strategy for agent-based automation within legal operations and contract management.

WHAT YOU WILL NEED:
* 4+ years of experience and a strong background in computer science, software engineering, or data science, with a deep focus on machine learning and NLP.
* Demonstrated experience building or integrating agentic AI systems (e.g., AutoGPT-style agents, goal-oriented LLM pipelines, multi-agent frameworks).
* Proficiency in Python and ML/NLP libraries such as Hugging Face Transformers, LangChain, PyTorch, TensorFlow, and spaCy.
* Experience developing and scaling ML models (including LSTMs, BERT, and Transformers) for real-world applications.
* Understanding of LLM training (e.g., OpenAI, LLaMA, Falcon), embeddings, and prompt engineering.
* Hands-on experience with reinforcement learning (e.g., PPO, RLHF, RLAIF).
* Experience extracting text and semantic information from structured and unstructured documents (PDFs, images, etc.).
* Comfort working in Agile/Scrum environments and collaborating across cross-functional teams.
* Passion for innovation in AI and a strong desire to build autonomous systems that solve complex, real-world problems.

BENEFITS:
* Competitive salary
* Opportunity to work in a fast-moving, high-growth SaaS company
* Paid time off
* Generous employee referral program

At ContractPodAi we believe in creating a diverse and inclusive workplace where everyone feels heard and valued. We are proud to be an Equal Opportunity Employer. We do not discriminate in employment on the basis of race, color, religion, sex, national origin, political affiliation, sexual orientation, marital status, disability, genetic information, age, membership in an employee organization, retaliation, parental status, military service, or other non-merit factors.
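The document-interpretation and RAG work described above begins with a retrieval step: rank stored contract chunks against a query, then hand the best match to an LLM as context. A deliberately tiny sketch using bag-of-words cosine similarity in place of real embeddings (the clause texts are invented):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_chunk(query, chunks):
    """Return the chunk most similar to the query (the 'R' in RAG)."""
    qv = Counter(query.lower().split())
    scores = [cosine(qv, Counter(c.lower().split())) for c in chunks]
    return chunks[scores.index(max(scores))]

chunks = [
    "termination clause allows either party to exit with 30 days notice",
    "payment terms are net 45 from invoice date",
    "governing law is the state of new york",
]
best = top_chunk("what are the payment terms", chunks)
```

A production system would swap the word counts for dense embeddings in a vector store and pass the retrieved chunk into the LLM prompt, but the retrieve-then-generate shape is the same.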
Posted 3 weeks ago
6.0 years
20 - 25 Lacs
Bengaluru, Karnataka, India
Remote
Job Title: Machine Learning Engineer – 2
Location: Onsite – Bengaluru, Karnataka, India
Experience Required: 3 – 6 Years
Compensation: ₹20 – ₹25 LPA
Employment Type: Full-Time
Work Mode: Onsite Only (No Remote)

About the Company:
A fast-growing Y Combinator-backed SaaS startup is revolutionizing underwriting in the insurance space through AI and generative AI. Their platform empowers insurance carriers in the U.S. to make faster, more accurate decisions by automating key processes and enhancing risk assessment. As they expand their AI capabilities, they're seeking a Machine Learning Engineer – 2 to build scalable ML solutions using NLP, computer vision, and LLM technologies.

Role Overview:
As a Machine Learning Engineer – 2, you'll take ownership of designing, developing, and deploying ML systems that power critical features across the platform. You'll lead end-to-end ML workflows, working with cross-functional teams to deliver real-world AI solutions that directly impact business outcomes.
Key Responsibilities:
* Design and develop robust AI product features aligned with user and business needs
* Maintain and enhance existing ML/AI systems
* Build and manage ML pipelines for training, deployment, monitoring, and experimentation
* Deploy scalable inference APIs and conduct A/B testing
* Optimize GPU architectures and fine-tune transformer/LLM models
* Build and deploy LLM applications tailored to real-world use cases
* Implement DevOps/MLOps best practices with tools like Docker and Kubernetes

Tech Stack & Tools
* Machine Learning & LLMs: GPT, LLaMA, Gemini, Claude, Hugging Face Transformers; PyTorch, TensorFlow, scikit-learn
* LLMOps & MLOps: LangChain, LangGraph, LangFlow, Langfuse; MLflow, SageMaker, LlamaIndex, AWS Bedrock, Azure AI
* Cloud & Infrastructure: AWS, Azure; Kubernetes, Docker
* Databases: MongoDB, PostgreSQL, Pinecone, ChromaDB
* Languages: Python, SQL, JavaScript

What You'll Do
* Collaborate with product, research, and engineering teams to build scalable AI solutions
* Implement advanced NLP and generative AI models (e.g., RAG, Transformers)
* Monitor and optimize model performance and deployment pipelines
* Build efficient, scalable data and feature pipelines
* Stay updated on industry trends and contribute to internal innovation
* Present key insights and ML solutions to technical and business stakeholders

Requirements
Must-Have:
* 3–6 years of experience in machine learning and software/data engineering
* Master's degree (or equivalent) in ML, AI, or related technical fields
* Strong hands-on experience with Python, PyTorch/TensorFlow, and scikit-learn
* Familiarity with MLOps, model deployment, and production pipelines
* Experience working with LLMs and modern NLP techniques
* Ability to work collaboratively in a fast-paced, product-driven environment
* Strong problem-solving and communication skills

Bonus certifications:
* AWS Machine Learning Specialty
* AWS Solutions Architect – Professional
* Azure Solutions Architect Expert

Why Apply
* Work directly with a high-caliber founding team
* Help shape the future of AI in the insurance space
* Gain ownership and visibility in a product-focused engineering role
* Opportunity to innovate with state-of-the-art AI/LLM tech
* Be part of a fast-moving team with real market traction

📍 Note: This is an onsite-only role based in Bengaluru. Remote work is not available.

Skills: software/data engineering, machine learning, AI, SQL, computer vision, TensorFlow, MLOps, NLP, Kubernetes, MongoDB, LLMs and modern NLP techniques, Python, PostgreSQL, Docker, Azure, scikit-learn, JavaScript, AWS, LLM technologies, PyTorch
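The A/B testing responsibility above rests on a simple statistic. A hedged sketch of a two-proportion z-test in plain Python (the conversion counts are invented; a real analysis would also check power, sample-ratio mismatch, and multiple-testing corrections):

```python
import math

def ab_z_score(conv_a, n_a, conv_b, n_b):
    """z statistic comparing conversion rates of variants A and B.

    Under the null hypothesis (no difference), z is approximately
    standard normal, so |z| > 1.96 is significant at the 5% level.
    """
    pa, pb = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)              # pooled rate under H0
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (pb - pa) / se

# Invented experiment: variant B converts 6.5% vs A's 5.0%.
z = ab_z_score(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
```

With these made-up counts the lift clears the 1.96 threshold, which is exactly the kind of check an inference-API experiment would automate.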
Posted 3 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Company Description
It all started in sunny San Diego, California in 2004 when a visionary engineer, Fred Luddy, saw the potential to transform how we work. Fast forward to today — ServiceNow stands as a global market leader, bringing innovative AI-enhanced technology to over 8,100 customers, including 85% of the Fortune 500®. Our intelligent cloud-based platform seamlessly connects people, systems, and processes to empower organizations to find smarter, faster, and better ways to work. But this is just the beginning of our journey. Join us as we pursue our purpose to make the world work better for everyone.

Job Description
Team Overview: Join our pioneering Core-LLM platform team, dedicated to pushing the boundaries of Generative AI. We focus on developing robust, scalable, and safe machine learning models, particularly LLMs, SLMs, Large Reasoning Models (LRMs), and SRMs, that power cutting-edge ServiceNow products and features. As a Senior Manager, you will lead a talented team of machine learning engineers, shaping the future of our AI capabilities and ensuring the ethical and effective deployment of our technology.

What You Get To Do In This Role
- Generate and evaluate synthetic data tailored to improve the robustness, performance, and safety of machine learning models, particularly large language models (LLMs).
- Train and fine-tune models using curated datasets, optimizing for performance, reliability, and scalability.
- Design and implement evaluation metrics to rigorously measure and monitor model quality, safety, and effectiveness.
- Conduct experiments to validate model behavior and improve generalization across diverse use cases.
- Collaborate with engineering and research teams to identify risks and recommend AI safety mitigation strategies.
- Participate in the development, deployment, and continuous improvement of end-to-end AI solutions.
- Contribute to architectural and technology decisions related to AI infrastructure, frameworks, and tooling.
Promote modern engineering practices including continuous integration, continuous delivery, and containerized workflows.

Qualifications
Key qualifications:
- 5+ years of experience in machine learning, deep learning, and AI systems
- Experience with methods of training and fine-tuning large language models, such as distillation, supervised fine-tuning, and policy optimization
- Proficiency in Python and frameworks like PyTorch, TensorFlow, and NumPy
- Experience in synthetic data generation, model training, and evaluation in real-world environments
- Solid understanding of LLM fine-tuning, prompting, and robustness techniques
- Knowledge of AI safety principles and experience identifying and mitigating model risks
- Hands-on experience deploying and optimizing models using platforms such as Triton Inference Server
- Familiarity with CI/CD, automated testing, and container orchestration tools like Docker and Kubernetes
- Experience with prompt engineering: ability to craft, test, and optimize prompts for task accuracy and efficiency
- Experience using AI productivity tools such as Cursor or Windsurf is a plus

Additional Information
Work Personas
We approach our distributed world of work with flexibility and trust. Work personas (flexible, remote, or required in office) are categories that are assigned to ServiceNow employees depending on the nature of their work and their assigned work location. To determine eligibility for a work persona, ServiceNow may confirm the distance between your primary residence and the closest ServiceNow office using a third-party service.

Equal Opportunity Employer
ServiceNow is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, national origin or nationality, ancestry, age, disability, gender identity or expression, marital status, veteran status, or any other category protected by law.
In addition, all qualified applicants with arrest or conviction records will be considered for employment in accordance with legal requirements. Accommodations We strive to create an accessible and inclusive experience for all candidates. If you require a reasonable accommodation to complete any part of the application process, or are unable to use this online application and need an alternative method to apply, please contact globaltalentss@servicenow.com for assistance. Export Control Regulations For positions requiring access to controlled technology subject to export control regulations, including the U.S. Export Administration Regulations (EAR), ServiceNow may be required to obtain export control approval from government authorities for certain individuals. All employment is contingent upon ServiceNow obtaining any export license or other approval that may be required by relevant export control authorities. From Fortune. ©2025 Fortune Media IP Limited. All rights reserved. Used under license.
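The synthetic-data and evaluation responsibilities above can be illustrated with a deliberately simple sketch: template-filled QA pairs plus an exact-match metric. The entities, templates, and metric here are hypothetical stand-ins; a real pipeline would use far richer generation (e.g., LLM-based) and broader evaluation suites.

```python
import random

def generate_synthetic_qa(entities, templates, n, seed=0):
    # Generate simple synthetic QA pairs by filling templates with known
    # entity/answer pairs. A fixed seed keeps the dataset reproducible.
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        entity, answer = rng.choice(sorted(entities.items()))
        template = rng.choice(templates)
        pairs.append({"question": template.format(entity=entity), "answer": answer})
    return pairs

def exact_match(predictions, references):
    # Fraction of predictions that exactly match the reference answers
    # (case- and whitespace-insensitive) -- a basic model-quality metric.
    hits = sum(p.strip().lower() == r.strip().lower()
               for p, r in zip(predictions, references))
    return hits / len(references)

entities = {"France": "Paris", "Japan": "Tokyo"}
templates = ["What is the capital of {entity}?",
             "Name the capital city of {entity}."]
data = generate_synthetic_qa(entities, templates, n=4)
score = exact_match(["paris", "Kyoto"], ["Paris", "Tokyo"])
```

The same exact-match shape generalizes to larger eval harnesses; safety evaluations would add adversarial templates and refusal checks.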
Posted 3 weeks ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
🚀 We're Hiring: Senior Data Engineer – Cloud & Database Deployment 📊☁️
Location: Gurugram | Job Type: Full-Time

Are you passionate about building scalable data infrastructure in cloud environments? Do you have expertise in PostgreSQL, MySQL, MongoDB, and deploying data solutions on cloud platforms? If so, we want to hear from you!

🔍 About the Role:
We’re seeking a motivated Senior Data Engineer with 7–8+ years of experience to design, deploy, and maintain robust data pipelines and cloud-native data solutions. Your work will support cutting-edge AI/ML initiatives, optimize data systems, and ensure seamless data flow for advanced analytics.

🎯 Key Responsibilities:
- Develop and maintain scalable data pipelines for batch and real-time processing
- Deploy and manage cloud-based data solutions on AWS, Azure, or Google Cloud
- Implement and optimize CI/CD pipelines for automated deployment and infrastructure management
- Manage relational and NoSQL databases, ensuring performance, backup, and recovery
- Oversee server provisioning, deployment, and monitoring of data infrastructure
- Collaborate with AI/ML teams to streamline data workflows for model training and inference
- Maintain security, data integrity, and compliance standards
- Monitor system performance and troubleshoot issues proactively

💡 Ideal Candidate Profile:
- 7–8+ years of experience in data engineering and infrastructure deployment
- Strong expertise in PostgreSQL, MySQL, and MongoDB management and optimization
- Hands-on experience with cloud platforms (AWS, GCP, Azure) and infrastructure-as-code
- Proficiency with CI/CD tools like Jenkins, GitLab CI/CD, or GitHub Actions
- Solid understanding of server/database administration in cloud/hybrid environments
- Familiarity with AI/ML data workflows and MLOps is a plus
- Strong scripting skills in Python
- Excellent communication and problem-solving skills

Join us to be part of a dynamic team pioneering data solutions in a cloud-first world!
🚀 Share your CV at shilpi.vohra@petonic.in
#DataEngineering #CloudComputing #AI #ML #DatabaseManagement #GurugramJobs #HiringNow
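To make the batch-pipeline responsibility above concrete, here is a minimal, hypothetical extract-transform-load sketch in plain Python. The field names and records are invented; the upsert-by-key load step is one common way to keep pipeline reruns idempotent, which matters when schedulers retry failed jobs.

```python
def extract(rows):
    # Extract: yield raw records (stand-in for reading from a source table or file).
    yield from rows

def transform(record):
    # Transform: normalize fields and drop invalid rows.
    if record.get("amount") is None:
        return None
    return {
        "user": record["user"].strip().lower(),
        "amount": round(float(record["amount"]), 2),
    }

def load(records, sink):
    # Load: upsert keyed by user, so re-running the pipeline is idempotent.
    for r in records:
        sink[r["user"]] = r
    return sink

raw = [
    {"user": " Alice ", "amount": "10.456"},
    {"user": "bob", "amount": None},      # dropped by the transform step
    {"user": "alice", "amount": "3.1"},   # overwrites the earlier alice row (upsert)
]
sink = {}
cleaned = [t for t in (transform(r) for r in extract(raw)) if t is not None]
load(cleaned, sink)
```

In a real stack the same three stages would be orchestrated by a scheduler (e.g., Airflow) with the sink being a warehouse table rather than a dict.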
Posted 3 weeks ago
0.0 - 1.0 years
4 - 5 Lacs
Kanchipuram, Tamil Nadu
On-site
Job Title: Cloud Operations Engineer
Experience: 0–1 year (Entry Level)
Location: Kanchipuram, Tamil Nadu
Employment Type: Full-time (Rotational Shifts – 24x7)

About the Role
We’re hiring a Cloud Operations Engineer to join our growing infrastructure team. In this role, you’ll be responsible for monitoring, maintaining, and responding to incidents in our production cloud environment. You will play a key role in ensuring uptime, performance, and reliability of cloud-based systems across compute, networking, and storage. This is an ideal opportunity for candidates interested in the operational side of cloud infrastructure, incident response, and systems reliability, especially those with a passion for Linux, monitoring tools, and automation.

Your Responsibilities:
● Monitor health and performance of cloud infrastructure using tools like Prometheus, Grafana, ELK, and Zabbix.
● Perform L1–L2 troubleshooting of compute, network, and storage issues.
● Respond to infrastructure alerts and incidents with a sense of urgency and ownership.
● Execute standard operating procedures (SOPs) for issue mitigation and escalation.
● Contribute to writing and improving incident response playbooks and runbooks.
● Participate in root cause analysis (RCA) and post-incident reviews.
● Automate routine operations using scripting and Infrastructure-as-Code (IaC) tools.

Technical Skills – Nice to Have (Not All Required)
We don’t expect you to have experience in every area. If you’re eager to learn and have a solid foundation in Linux or cloud, you're encouraged to apply — even if you're still gaining experience in some areas below:
● Operating Systems: Linux (Debian/Ubuntu/CentOS/Rocky Linux)
● Monitoring & Logging: Prometheus, Grafana, ELK, Zabbix, Nagios
● Infrastructure Troubleshooting Tools: top, htop, netstat, iostat, tcpdump
● Networking: DNS, NAT, VPN, Load Balancers
● Cloud Services: VM provisioning, disk management, firewall rules
● Automation & Scripting: Bash, Python, Git
● IaC Tools: Ansible, Terraform (good to have)
● Incident Response & RCA: Familiarity with escalation procedures and documentation best practices

You Should Be Someone Who:
● Pays strong attention to detail and can respond under pressure
● Has solid analytical and troubleshooting skills
● Is comfortable working in shifts and taking ownership of incidents
● Communicates clearly and collaborates well with cross-functional teams
● Is eager to learn cloud automation, reliability, and monitoring practices

What You’ll Gain
● Hands-on experience in live cloud infrastructure operations
● Expertise in monitoring tools, alert handling, and system troubleshooting
● Real-world experience with DevOps practices, SOPs, and RCA processes
● Exposure to automation and Infrastructure-as-Code workflows

About the Company
E2E Networks Ltd. (https://www.e2enetworks.com/)
E2E Networks Limited is among India’s fastest-growing pure-play SSD cloud players and the 6th largest IaaS platform in India. The E2E Networks high-performance cloud platform can be accessed via the self-service portal at https://myaccount.e2enetworks.com, where you can provision, manage, and monitor Linux/Windows/GPU cloud machines with high-performance CPUs, large memory (RAM), or Smart Dedicated Compute featuring dedicated CPU cores. We began in 2009 as a contractless computing player targeting the value-conscious segment of customers, especially startups.
Before there were hyperscalers in India, we were the premier choice of many of today’s soonicorns, unicorns, and well-established businesses for cloud infrastructure. E2E Networks Cloud was used by many successfully scaled-up startups, such as Zomato, CarDekho, Cars24, Healthkart, Junglee Games, 1mg, Team-BHP, Instant Pay, WishFin, Algolia, Intrcity (RailYatri), Clovia, Groupon India (later Crazeal/Nearbuy), Jabong, Tapzo, and many more, during a significant part of their journey from startup stage to multi-million DAUs (Daily Active Users). In 2018, E2E Networks Ltd issued its IPO through NSE Emerge; the IPO was oversubscribed 70 times, making it a huge success. Today, E2E Networks is the largest NSE-listed cloud provider, having served more than 10,000 customers, with thousands of active customers. Our self-service public cloud platform enables rapid deployment of compute workloads. We provide cloud solutions via control panel or API, including CDN, Load Balancers, Firewalls, VPC, DBaaS, Reserved IPv4, Object Storage, DNS/rDNS, Continuous Data Protection, One Click Installations, and many more features. This lowers project delivery costs by cutting down delivery timelines. Our collaboration with NVIDIA allows us to play a significant role in helping our customers run their AI/ML training/inference, data science, NLP, and computer vision workload pipelines.

Job Type: Full-time
Pay: ₹404,243.54 - ₹567,396.46 per year
Schedule: Rotational shift
Application Question(s): Do you have any basic hands-on experience or knowledge of Cloud Computing and Linux?
Work Location: In person
Speak with the employer: +91 9354505633
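The alert-response and runbook-automation duties described above often come down to small scripts. Below is a minimal, hypothetical sketch of one such task: computing an error rate from log lines and deciding whether to escalate. The log format and the 5% paging threshold are invented for illustration; in practice alerting would be driven by Prometheus or Zabbix rules rather than ad-hoc parsing.

```python
def error_rate(log_lines, window=None):
    # Fraction of ERROR lines, optionally restricted to the last `window` lines.
    lines = log_lines[-window:] if window else log_lines
    if not lines:
        return 0.0
    errors = sum(1 for line in lines if " ERROR " in line)
    return errors / len(lines)

def should_page(rate, threshold=0.05):
    # Escalate (page the on-call engineer) when the error rate crosses
    # the threshold -- a stand-in for an SOP's escalation criterion.
    return rate >= threshold

logs = [
    "2024-05-01T10:00:00 INFO  request ok",
    "2024-05-01T10:00:01 ERROR upstream timeout",
    "2024-05-01T10:00:02 INFO  request ok",
    "2024-05-01T10:00:03 INFO  request ok",
]
rate = error_rate(logs)
```

A runbook would then document what to check (upstream health, recent deploys) before escalating further.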
Posted 3 weeks ago
5.0 - 8.0 years
0 Lacs
India
Remote
Solutions Architect – GenAI, AI/ML & AWS Cloud Architect the Future of AI with goML At goML, we design and build cutting-edge Generative AI, AI/ML, and Data Engineering solutions that help businesses unlock the full potential of their data, drive intelligent automation, and create transformative AI-powered experiences. Our mission is to bridge the gap between state-of-the-art AI research and real-world enterprise applications - helping organizations innovate faster, make smarter decisions, and scale AI solutions seamlessly. We’re looking for a Solutions Architect with deep expertise in designing AI/ML and GenAI architectures on AWS. In this role, you’ll be responsible for crafting scalable, high-performance, and cost-effective AI solutions - leveraging AWS AI/ML services, modern data infrastructure, and cloud-native best practices. If you thrive in architecting intelligent, data-driven systems and love solving complex technical challenges, we’d love to hear from you! Why You? Why Now? Generative AI is reshaping industries, and businesses are looking for scalable, cost-efficient, and production-ready AI/ML solutions. This role is perfect for someone who loves solutioning AI/ML workloads, optimizing cloud infrastructure, and working directly with clients to drive real-world impact. 
At goML, you will:
- Own the architecture and solutioning of AI/ML and GenAI applications
- Work with sales & engineering leaders to scope customer needs & build proposals
- Design scalable architectures using AWS services like SageMaker, Bedrock, Lambda, and Redshift
- Influence high-impact AI/ML decisions while working closely with the co-founders

What You’ll Do (Key Responsibilities)

First 30 Days: Foundation & Orientation
- Deep dive into goML’s AI/ML & GenAI solutions, architecture frameworks, and customer engagements
- Familiarize yourself with goML and AWS partnership workflows
- Work alongside sales leaders to understand customer pain points & proposal workflows
- Review and refine best practices for deploying AI/ML workloads on AWS
- Start contributing to solution architectures, and lead client discussions

First 60 Days: Execution & Impact
- Own customer AI/ML solutioning, including LLMOps, inference optimization, and MLOps pipelines
- Collaborate with engineering teams to develop reference architectures & POCs for AI workloads
- Build strategies to optimize AI/ML model deployment, GPU utilization, and cost-efficiency in AWS
- Assist in sizing and optimizing AWS infrastructure for AI/ML-heavy workloads
- Work closely with customers to translate GenAI and AI/ML requirements into scalable architectures

First 180 Days: Ownership & Transformation
- Lead AI/ML architectural decisions for complex enterprise-scale AI projects
- Optimize multi-cloud and hybrid AI/ML deployments across AWS, Azure, and GCP
- Mentor team members on best practices for GenAI & cloud AI deployments
- Define long-term strategies for AI-driven data platforms, model lifecycle management, and cloud AI acceleration
- Represent goML in technical conferences, blogs, and AI/ML meetups

What You Bring (Qualifications & Skills)

Must-Have:
- 5-8 years of experience designing AI/ML and data-driven architectures on AWS
- At least 2 years of hands-on experience in GenAI, LLMOps, or advanced AI/ML workloads
- Deep expertise in AWS AI/ML services (SageMaker, Bedrock, Lambda, Inferentia, Trainium)
- Strong knowledge of AWS Data Services (S3, Redshift, Glue, Lake Formation, DynamoDB)
- Experience in optimizing AI/ML inference, GPU utilization, and MLOps pipelines
- Excellent client-facing communication skills with experience in proposal writing

Nice-to-Have:
- Familiarity with Azure ML, GCP Vertex AI, and NVIDIA AI/ML services
- Experience in LangChain, RAG architectures, and multi-modal AI models
- Knowledge of MLOps automation, CI/CD for AI models, and scaling inference workloads

Meet Your Hiring Manager
You’ll report to Prashanna Hanumantha Rao, who runs practice teams, engineering, delivery, and operations for goML. Expect a high-autonomy, high-impact environment, working closely with the co-founders and senior leadership to drive AI/ML innovation at goML.

Why Work With Us?
- Remote-first, with offices in Coimbatore for in-person collaboration
- Work on cutting-edge GenAI & AI/ML challenges at scale
- Direct impact on enterprise AI solutioning & technical strategy
- Competitive salary, leadership growth opportunities, and ESOPs down the line
Posted 3 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Driffle: Driffle is a global digital goods marketplace specializing in digital gaming products, including games, gift cards, DLCs and more across 140 countries. We offer a convenient and diverse selection, from the newest release to timeless classics, all in one place. About the Role : We are seeking a highly analytical and detail-oriented Data Analyst to join our team. In this role, you will be instrumental in analysing our vast data sets to derive insights that drive decision-making across various departments including Business, Marketing, and Operations. Key Responsibilities: Collaborate with the tech and product team to define key performance indicators (KPIs) for the product. Design and implement robust metric tracking systems, ensuring consistent and accurate data collection. Design A/B tests and other experiments to validate product hypotheses. Analyse results to provide insights on product changes and their impact on user behaviour. Generate product insights from data analyses to suggest new features or improvements. Bridge the gap between data and product by translating findings into actionable product strategies. Dive deep into the data to determine the root causes of observed patterns or anomalies. Identify key drivers influencing metrics and advise on potential interventions. Conduct ad-hoc analyses to answer specific business questions or to inform product strategy. Create clear and impactful visualizations that convey complex data insights. Collaborate with stakeholders to communicate findings, provide recommendations, and influence product direction. Required Skills: Strong proficiency in Python or R. Demonstrated experience in SQL (experience with other data storage technologies is a plus). Mastery of statistical hypothesis testing, experiment design, and causal inference techniques. Hands-on experience with A/B and multivariate testing methodologies. 
Familiarity with behavioural data and experimentation platforms such as Mixpanel and GrowthBook. Strong visual, written, and verbal communication skills and the ability to convey complex analytical results to non-technical audiences. Preferred Requirements: Familiarity with ML algorithms and techniques, including but not limited to regression models, clustering, and classification. Expertise in building, evaluating, and deploying ML models. Experience in e-commerce and marketplace business models; proficiency in digital marketing.
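For the A/B-testing skills listed above, a common first analysis is a two-proportion z-test on conversion counts. The sketch below is a minimal illustration with made-up numbers; a real analysis would also cover sample-size planning, test power, and multiple-testing corrections (or lean on a platform such as GrowthBook).

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    # Two-sided z-test for the difference between two conversion rates,
    # using the pooled proportion for the standard error.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via the error function).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: control converts 200/4000, variant 260/4000.
z, p = two_proportion_ztest(conv_a=200, n_a=4000, conv_b=260, n_b=4000)
```

Here the variant's 6.5% rate vs. the control's 5.0% yields a p-value well under 0.05, so the lift would be called statistically significant at the conventional level.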
Posted 3 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Position: Quantitative Analyst (Neo Markets)

Role Overview:
We are seeking highly motivated Quantitative Researchers to join our team. The ideal candidate will be responsible for identifying trading alphas and conducting market analytics for algorithmic trading, with a primary focus on Futures and Options (F&O) markets, including high-frequency trading (HFT). The role involves analyzing both Indian and global market data to develop and implement data-driven trading strategies in F&O markets. The candidate should have a strong foundation in quantitative research, statistical modeling, and financial market dynamics, particularly in derivatives. Additionally, proficiency in programming languages such as Python, R, C++, or C, along with SQL, is essential for handling large datasets, developing robust trading models, and ensuring efficient execution. Our focus is on individuals who are eager to develop and implement data-driven trading strategies, identify market inefficiencies, and leverage statistical models to optimize risk and performance. While you may not be expected to know everything listed here, we value a strong commitment to learning and mastering all aspects of the role. We are looking for critical thinkers with robust quantitative skills and a collaborative mindset to tackle complex trading challenges. Your contributions will play a key role in driving our vision of becoming the leading quant-driven trading desk in the country.

Job Responsibilities:
The position holder shall be responsible for:
• Develop and refine trading strategies using quantitative research and statistical techniques.
• Analyze market data to identify inefficiencies and trading opportunities.
• Design, implement, and optimize systematic trading models and execution algorithms.
• Backtest and validate models to ensure performance and robustness.
• Apply machine learning and predictive modeling to enhance strategy development.
• Work with high-dimensional datasets and advanced data analysis techniques.
• Conduct hypothesis testing and statistical inference to validate strategies.
• Build and maintain real-time algorithmic trading platforms, focusing on low-latency optimization.
• Develop and improve execution algorithms for market-making and arbitrage strategies.
• Collaborate with senior management, traders, and researchers to refine trading models.
• Oversee systematic trading infrastructure testing, including automation QA.
• Monitor global financial events and analyze their impact on market movements.
• Track corporate actions and related investment opportunities.
• Assist in portfolio construction, risk management, and strategy execution.
• Develop risk monitoring tools and statistical models for real-time decision-making.
• Prepare reports and presentations for internal discussions.

Required Skills & Qualifications:
• Proficiency in Python, R, C++, or C.
• Strong SQL skills for data manipulation.
• Solid understanding of factor models, portfolio optimization, and risk management.
• Strong background in time series analysis.
• Experience with ML, deep learning, and predictive analytics.
• Ability to work with large datasets.
• Strong analytical and problem-solving skills.
• Eagerness to learn algorithmic trading and quantitative finance.
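As a toy illustration of the backtesting responsibility above, here is a minimal moving-average crossover backtest in plain Python. The price series, window lengths, and frictionless execution are hypothetical simplifications; a real backtest must account for transaction costs, slippage, and look-ahead bias (note the signal at each step uses only the previous bar's averages).

```python
def sma(prices, window):
    # Simple moving average; None until enough history exists.
    out = []
    for i in range(len(prices)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(prices[i + 1 - window:i + 1]) / window)
    return out

def backtest_crossover(prices, fast=2, slow=3):
    # Long when the fast SMA is above the slow SMA, flat otherwise.
    # The position for bar i is decided from bar i-1's averages (no look-ahead).
    f, s = sma(prices, fast), sma(prices, slow)
    equity = 1.0
    for i in range(1, len(prices)):
        in_market = (f[i - 1] is not None and s[i - 1] is not None
                     and f[i - 1] > s[i - 1])
        if in_market:
            equity *= prices[i] / prices[i - 1]
    return equity

prices = [100, 101, 103, 106, 104, 107]  # invented sample series
final = backtest_crossover(prices)
```

Validation would then compare the strategy's equity curve against buy-and-hold and stress-test it across regimes.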
Posted 3 weeks ago
2.0 - 3.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
If solving business challenges drives you, this is the place to be. Fornax is a team of cross-functional individuals who solve critical business challenges using core concepts of analytics and critical thinking. We are seeking a skilled Data Scientist who has worked in the marketing domain. The ideal candidate will possess a strong blend of statistical expertise and business acumen, particularly in Marketing Mix Modeling (MMM), causality analysis, and marketing incrementality, along with a good understanding of the entire marketing value chain and measurement strategies. The Data Scientist will play a critical role in developing advanced analytical solutions to measure marketing effectiveness, optimize marketing spend, and drive data-driven decision making. This role involves working closely with marketing teams, analysts, and business stakeholders to deliver actionable insights through statistical modeling and experimentation. The ideal candidate has a strong background in statistical analysis, causal inference, and marketing analytics.
Responsibilities:

Modeling & Analysis (70%):
Develop and maintain Marketing Mix Models (MMM) to measure the effectiveness of marketing channels and campaigns
Design and implement causal inference methodologies to identify the true incremental impact of marketing activities
Build attribution models to understand customer journey and touchpoint effectiveness
Conduct advanced statistical analysis including regression, time series, and Bayesian methods
Develop predictive models for customer behavior, campaign performance, and ROI optimization
Create experimental designs for A/B testing and incrementality studies
Perform promotional analysis to measure lift, cannibalization, and optimal discount strategies across products and channels

Stakeholder Management & Collaboration (30%):
Partner with business teams to understand business objectives and analytical needs
Translate complex statistical findings into actionable business recommendations
Present analytical insights and model results to non-technical stakeholders
Collaborate with data engineers to ensure data quality and availability for modeling
Work with business teams to design and implement measurement strategies
Create documentation and knowledge transfer materials for analytical methodologies

Key Qualifications
Education: Bachelor's degree in Statistics, Economics, Mathematics, Computer Science, or a related quantitative field. Master's degree preferred.
Experience: 2-3 years of experience as a Data Scientist with a focus on marketing analytics.
Technical Skills:
Strong proficiency in Python or R for statistical analysis
Expertise in statistical modeling techniques (regression, time series, Bayesian methods)
Experience with Marketing Mix Modeling (MMM) frameworks and tools
Knowledge of causal inference methods (DiD, IV, RDD, Synthetic Controls)
Proficiency in SQL for data manipulation and analysis
Understanding of machine learning algorithms and their applications
Deep understanding of marketing channels and measurement strategies
Familiarity with marketing metrics (CAC, LTV, ROAS, etc.)
Understanding of media planning and optimization concepts
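Two building blocks of the MMM work described above are adstock (carryover of past spend into later periods) and saturation (diminishing returns to spend). The sketch below shows both in their simplest form; the decay rate and half-saturation constant are purely illustrative, and a real MMM would estimate such parameters from data via regression or Bayesian fitting.

```python
def adstock(spend, decay=0.5):
    # Geometric adstock: each period carries over a fraction `decay`
    # of the accumulated effect of past spend.
    carried, out = 0.0, []
    for s in spend:
        carried = s + decay * carried
        out.append(carried)
    return out

def saturate(x, half_sat=100.0):
    # Simple saturation curve: response approaches 1 as adstocked spend
    # grows, reaching 0.5 at `half_sat` (diminishing returns).
    return [v / (v + half_sat) for v in x]

# Hypothetical weekly spend: a burst, two dark weeks, then a smaller burst.
spend = [100.0, 0.0, 0.0, 50.0]
effect = saturate(adstock(spend, decay=0.5))
```

The transformed `effect` series, not raw spend, is what enters the MMM regression against sales, which is how the model captures lagged and saturating media impact.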
Posted 3 weeks ago
5.0 - 31.0 years
17 Lacs
Hyderabad
On-site
Job Title: Senior Machine Learning Engineer Experience Required: 7–10 years Employment Type: Full-Time Education: Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related quantitative field Role Overview: We are looking for a highly skilled and innovative Senior Machine Learning Engineer to join our team. This role involves designing, developing, and maintaining scalable machine learning models and solutions, while collaborating with cross-functional teams to ensure alignment with business and technical objectives. Key Responsibilities: Design, develop, test, and deploy scalable machine learning models and pipelines for high-impact business use cases. Collaborate with data scientists, product managers, software engineers, and ML engineers to ensure successful implementation and integration of ML solutions. Perform research, experimentation, customization, and evaluation of ML algorithms; conduct training, tuning, testing, and deployment. Work with large-scale structured and unstructured data to build and continuously improve advanced ML models. Maintain, update, and monitor existing ML systems for performance and relevance. Lead development efforts for complex products and platforms, including architecture, analysis, coding, testing, and deployment of robust cloud-native solutions. Develop and maintain ML pipelines that support both batch and real-time inference. Drive innovation through research and evaluation of emerging tools, technologies, and frameworks in machine learning and data engineering. Mentor and guide junior engineers and team members. Represent the engineering team in internal and external technical forums. Provide critical feedback on product design and testing strategies to ensure best-in-class delivery. Ensure code quality and compliance with cloud and software engineering standards. Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. 
7–10 years of hands-on experience in ML engineering, with a proven track record of production-level deployments. Expertise in Python; working knowledge of Golang is a plus. Experience building and deploying ML models in real-world applications and integrating them with enterprise systems. Strong understanding of microservices architecture, containerization, and orchestration platforms like Kubernetes. Solid experience with cloud-based development and deployment practices (e.g., AWS, Azure, GCP). Knowledge of distributed systems and cloud subsystems design. Skills & Competencies: Deep understanding of machine learning principles, statistical modeling, and data processing workflows. Experience with ML frameworks (TensorFlow, PyTorch, Scikit-learn) and data pipeline tools (e.g., Airflow, Kafka). Experience with containerization (Docker) and orchestration (Kubernetes). Strong problem-solving and analytical skills. Ability to write clean, efficient, and testable code. Excellent verbal and written communication skills. Strong leadership, mentoring, and collaboration abilities. If you're passionate about solving complex problems at scale using machine learning, and thrive in a dynamic, fast-paced environment—this is the opportunity for you.
Posted 3 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Internship Summary: We are offering an exciting Full Stack Engineer Internship for aspiring developers who want to build impactful applications that contribute to modernizing e-governance in India. This role provides a unique opportunity to gain practical experience in both front-end and back-end development, contributing to the creation of a user-friendly and multilingual platform. You will work on real-world projects, integrating advanced AI capabilities into the overall system architecture to deliver seamless public services. What You will Do: As a Full Stack Engineer Intern, you will work closely with our engineering team, contributing to the development of our integrated AI platform. Your responsibilities may include: Multilingual User Interface Development: Assist in developing and optimizing dynamic, interactive, and bilingual user interface components using modern web frameworks (e.g., React, Vue, Angular). Backend API Development: Contribute to developing and maintaining backend APIs using Python frameworks (e.g., FastAPI, Flask) that integrate with our core AI components. Data Management & Integration: Help with database management (e.g., PostgreSQL) for storing digitized text, embeddings, metadata, and application data. Support the integration of various system components, ensuring efficient data exchange with existing government software (HRMS, budgeting, case management) via custom APIs. System Performance & Responsiveness: Learn about optimizing retrieval and LLM inference for quick demonstrations, ensuring the pilot system feels responsive. Deployment Support: Participate in the deployment process, including containerization (Docker) for consistent environments and cloud-based deployments Collaboration & Documentation: Actively participate in team discussions, contribute to architectural discussions, and maintain clear documentation of development processes and system functionalities. 
Who Can Apply:
Currently pursuing or recently completed a Bachelor's or Master's degree in Computer Science, Data Science, Artificial Intelligence, or Machine Learning from a Tier-1 or Tier-2 college / autonomous institution. Strong foundational understanding of web development concepts, including front-end and back-end technologies. Proficiency in at least one programming language is mandatory; Python is preferred. Familiarity with modern web frameworks (e.g., React, Vue, Angular) is a plus. Basic understanding of databases (e.g., PostgreSQL) is preferred. Eagerness to learn and a proactive approach to tasks, with an "MVP mindset" to prioritize core functionality. Strong problem-solving abilities and attention to detail. Ability to work independently and as part of a collaborative team. A strong desire to learn, adapt, and contribute in a fast-paced environment focused on innovation.
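To illustrate the data-management work above (storing digitized text, embeddings, and metadata), here is a minimal, hypothetical sketch using SQLite as a stand-in for PostgreSQL, with embeddings serialized as JSON. The table and column names are invented; a production system might instead use pgvector or a dedicated vector store for similarity search.

```python
import json
import sqlite3

# In-memory database standing in for the real PostgreSQL instance.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE documents ("
    "  id INTEGER PRIMARY KEY,"
    "  text TEXT NOT NULL,"
    "  lang TEXT NOT NULL,"       # language tag for the multilingual UI
    "  embedding TEXT NOT NULL"   # embedding vector serialized as JSON
    ")"
)

def add_document(text, lang, embedding):
    # Parameterized insert (avoids SQL injection), embedding stored as JSON.
    conn.execute(
        "INSERT INTO documents (text, lang, embedding) VALUES (?, ?, ?)",
        (text, lang, json.dumps(embedding)),
    )

def get_documents(lang):
    # Fetch documents for one language, deserializing embeddings back to lists.
    rows = conn.execute(
        "SELECT text, embedding FROM documents WHERE lang = ?", (lang,)
    ).fetchall()
    return [(text, json.loads(emb)) for text, emb in rows]

add_document("Apply for a birth certificate", "en", [0.1, 0.9])
add_document("जन्म प्रमाण पत्र के लिए आवेदन करें", "hi", [0.2, 0.8])
docs_en = get_documents("en")
```

A backend API (e.g., a FastAPI route) would wrap `add_document`/`get_documents` behind HTTP endpoints.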
Posted 3 weeks ago
0 years
12 - 20 Lacs
Mumbai Metropolitan Region
On-site
Role Overview: As a Backend Developer at LearnTube.ai, you will ship the backbone that powers 2.3 million learners in 64 countries, owning APIs that crunch 1 billion learning events and the AI that supports it with <200 ms latency.

What You'll Do: At LearnTube, we're pushing the boundaries of Generative AI to revolutionise how the world learns. As a Backend Engineer, you will be building the backend for an AI system and working directly on AI. Your roles and responsibilities will include:
- Ship Micro-services: Build FastAPI services that handle ≈800 req/s today and will triple within a year (sub-200 ms p95).
- Power Real-Time Learning: Drive the quiz-scoring and AI-tutor engines that crunch millions of events daily.
- Design for Scale & Safety: Model data (Postgres, Mongo, Redis, SQS) and craft modular, secure back-end components from scratch.
- Deploy Globally: Roll out Dockerised services behind NGINX on AWS (EC2, S3, SQS) and GCP (GKE) via Kubernetes.
- Automate Releases: GitLab CI/CD + blue-green/canary = multiple safe prod deploys each week.
- Own Reliability: Instrument with Prometheus/Grafana, chase 99.9% uptime, trim infra spend.
- Expose Gen-AI at Scale: Publish LLM inference and vector-search endpoints in partnership with the AI team.
- Ship Fast, Learn Fast: Work with founders, PMs, and designers in weekly ship rooms; take a feature from Figma to prod.

What makes you a great fit?
Must-Haves:
- 2+ yrs Python back-end experience (FastAPI)
- Strong with Docker & container orchestration
- Hands-on with GitLab CI/CD, and AWS (EC2, S3, SQS) or GCP (GKE/Compute) in production
- SQL/NoSQL (Postgres, MongoDB)
- You've built systems from scratch & have solid system-design fundamentals

Nice-to-Haves:
- k8s at scale, Terraform
- Experience with AI/ML inference services (LLMs, vector DBs)
- Go/Rust for high-perf services
- Observability: Prometheus, Grafana, OpenTelemetry

About Us: At LearnTube, we're on a mission to make learning accessible, affordable, and engaging for millions of learners globally. Using Generative AI, we transform scattered internet content into dynamic, goal-driven courses with AI-powered tutors that teach live, solve doubts in real time, and provide instant feedback, and seamless delivery through WhatsApp, mobile apps, and the web, with over 1.4 million learners across 64 countries.

Meet The Founders: LearnTube was founded by Shronit Ladhani and Gargi Ruparelia, who bring deep expertise in product development and ed-tech innovation. Shronit, a TEDx speaker, is an advocate for disrupting traditional learning, while Gargi's focus on scalable AI solutions drives our mission to build an AI-first company that empowers learners to achieve career outcomes. We're proud to be recognised by Google as a Top 20 AI Startup and are part of their 2024 Startups Accelerator: AI First Program, giving us access to cutting-edge technology, credits, and mentorship from industry leaders.

Why Work With Us? At LearnTube, we believe in creating a work environment that's as transformative as the products we build. Here's why this role is an incredible opportunity:
- Cutting-Edge Technology: You'll work on state-of-the-art generative AI applications, leveraging the latest advancements in LLMs, multimodal AI, and real-time systems.
- Autonomy and Ownership: Experience unparalleled flexibility and independence in a role where you'll own high-impact projects from ideation to deployment.
- Rapid Growth: Accelerate your career by working on impactful projects that pack three years of learning and growth into one.
- Founder and Advisor Access: Collaborate directly with founders and industry experts, including the CTO of Inflection AI, to build transformative solutions.
- Team Culture: Join a close-knit team of high-performing engineers and innovators, where every voice matters and Monday morning meetings are something to look forward to.
- Mission-Driven Impact: Be part of a company that's redefining education for millions of learners and making AI accessible to everyone.

Skills: Python, FastAPI, Amazon Web Services (AWS), MongoDB, CI/CD, Docker and Kubernetes
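The sub-200 ms p95 figure in the role description is a latency percentile. A minimal sketch of the nearest-rank percentile computation over hypothetical request timings (production systems typically derive this from Prometheus histograms rather than raw samples):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: smallest value with at least pct% of samples at or below it."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[rank - 1]

# Hypothetical request latencies in milliseconds.
latencies_ms = [120, 95, 180, 210, 130, 110, 160, 140, 105, 190]
p95 = percentile(latencies_ms, 95)   # 210 ms for this sample, so it would miss a 200 ms budget
```

The point of tracking p95 rather than the mean is that it bounds the experience of the slowest 1-in-20 requests, which averages hide.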
Posted 3 weeks ago
4.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About The Opportunity: Our client is expanding the software product engineering team for its partner, a US-based SaaS platform company specializing in autonomous security solutions. Their partner's platform leverages advanced AI to detect, mitigate, and respond to cyber threats across enterprise infrastructures. By offering comprehensive visibility, deep cognition, effective detection, thorough root cause analysis, and high-precision control, they aim to transform traditional governance, risk, and compliance (GRC) workflows into fast and scalable AI-native processes.

Responsibilities:
- Model Development and Integration: Design, implement, test, integrate, and deploy scalable machine learning models, integrating them into production systems and APIs to support existing and new customers.
- Experimentation and Optimization: Lead the design of experiments and hypothesis testing for product feature development; monitor and analyze model performance and data accuracy, making improvements as needed.
- Cross-Functional Collaboration: Work closely with cross-functional teams across India and the US to identify opportunities, deploy impactful solutions, and effectively communicate findings to both technical and non-technical stakeholders.
- Mentorship and Continuous Learning: Mentor junior team members, contribute to knowledge sharing, and stay current with best practices in data science, machine learning, and AI.

Requirements & Qualifications:
- Bachelor's or Master's in Statistics, Mathematics, Computer Science, Engineering, or a related quantitative field.
- 4-7 years of experience building and deploying machine learning models.
- Strong problem-solving skills with an emphasis on product development.
- Experience operating and troubleshooting scalable machine learning systems in the cloud.
Technical Skills:
- Programming and Frameworks: Proficient in Python with experience in TensorFlow, PyTorch, scikit-learn, and pandas; familiarity with Golang is a plus; proficient with Git and collaborative workflows.
- Software Engineering: Strong understanding of data structures, algorithms, and system design principles; experience in designing scalable, reliable, and maintainable systems.
- Machine Learning Expertise: Extensive experience in AI and machine learning model development, including large language models, transformers, sequence models, causal inference, unsupervised clustering, and reinforcement learning. Knowledge of prompting techniques, embedding models, and RAG.
- Innovation in Machine Learning: Ability to design and conceive novel ways of problem solving using new machine learning models.
- Integration, Deployment, and Cloud Services: Experience integrating machine learning models into backend systems and APIs; familiarity with Docker, Kubernetes, CI/CD tools, and cloud services like AWS/Azure/GCP for efficient deployment.
- Data Management and Security: Proficient with SQL and experience with PostgreSQL; knowledge of NoSQL databases; understanding of application security and data protection principles.

Methodologies and Tools:
- Agile/Scrum Practices: Experience with Agile/Scrum methodologies.
- Project Management Tools: Proficiency with Jira, Notion, or similar tools.

Soft Skills:
- Excellent communication and problem-solving abilities.
- Ability to work independently and collaboratively.
- Strong organizational and time management skills.
- High degree of accountability and ownership.

Nice-to-Haves:
- Experience with big data tools like Hadoop or Spark.
- Familiarity with infrastructure management and operations lifecycle concepts.
- Experience working in a startup environment.
- Contributions to open-source projects or a strong GitHub portfolio.

Benefits:
- Comprehensive Insurance (Life, Health, Accident).
- Flexible Work Model.
- Accelerated learning & non-linear growth.
- Flat organization structure driven by ownership and accountability.
- Opportunity to own and be a part of some of the most innovative and promising AI/SaaS product companies in North America and around the world.
- Accomplished global peers: working with some of the best engineers and professionals globally, from the likes of Apple, Amazon, IBM Research, Adobe, and other innovative product companies.
- Ability to make a global impact with your work, leading innovations in Conversational AI, Energy/Utilities, ESG, HealthTech, IoT, Risk/Compliance, CyberSecurity, PLM, and more.

Skills: api, azure, jira, machine learning models, aws, golang, docker, product development, machine learning, git, postgresql, python, github, scikit-learn, tensorflow, nosql, ci/cd, kubernetes, gcp, pytorch, pandas, spark, sql, hadoop
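Among the technical skills above, RAG retrieval reduces to ranking stored passages by embedding similarity to the query. A minimal pure-Python sketch with hypothetical 3-dimensional toy vectors (real systems use model-generated embeddings and a vector database):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_emb, corpus, k=2):
    """Return the k passage texts whose embeddings are most similar to the query."""
    ranked = sorted(corpus, key=lambda item: cosine(query_emb, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Hypothetical 3-d embeddings standing in for real embedding-model output.
corpus = [
    ("reset your password", [0.9, 0.1, 0.0]),
    ("quarterly revenue report", [0.0, 0.8, 0.6]),
    ("change account password", [0.8, 0.2, 0.1]),
]
top = retrieve([1.0, 0.0, 0.0], corpus, k=2)
# Both password passages rank above the revenue passage for this query vector.
```

The retrieved passages are then placed into the LLM prompt, which is the "augmented generation" half of RAG.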
Posted 3 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company: Qualcomm India Private Limited
Job Area: Engineering Group > Software Engineering

General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.

Minimum Qualifications:
- Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 3+ years of Software Engineering or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Engineering or related work experience; OR PhD in Engineering, Information Systems, Computer Science, or related field and 1+ year of Software Engineering or related work experience.
- 2+ years of academic or work experience with a programming language such as C, C++, Java, Python, etc.

Job Location: Hyderabad

About the team: Join the growing team at Qualcomm focused on advancing the state of the art in Machine Learning. The team uses Qualcomm chips' extensive heterogeneous computing capabilities to allow inference of trained neural networks on-device without a need for connection to the cloud. Our inference engine is designed to help developers run neural network models trained in a variety of frameworks on Snapdragon platforms at blazing speeds while still sipping the smallest amount of power. See your work directly impact billions of devices around the world.
Responsibilities: In this position, you will be responsible for the development and commercialization of ML solutions such as the Snapdragon Neural Processing Engine (SNPE) SDK on Qualcomm SoCs. You will develop various SW features in our ML stack, port AI/ML solutions to various platforms, and optimize their performance on multiple hardware accelerators (CPU/GPU/NPU). You will have expert knowledge of the deployment aspects of large C/C++ software dependency stacks using best practices. You will also keep up with the fast-paced development happening in industry and academia to continuously enhance our solution from both a software engineering and a machine learning standpoint.

Work Experience:
- 7-9 years of relevant work experience in software development.
- Live and breathe quality software development, with excellent analytical and debugging skills.
- Strong understanding of processor architecture and system design fundamentals.
- Experience with embedded systems development or equivalent.
- Strong development skills in C and C++.
- Excellent communication skills (verbal, presentation, written).
- Ability to collaborate across a globally diverse team and multiple interests.

Preferred Qualifications:
- Experience in embedded system development.
- Experience in C, C++, OOPS, and design patterns.
- Experience in Linux kernel or driver development is a plus.
- Strong OS concepts.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities.
(Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries). Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies : Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers. 3077875
Posted 3 weeks ago
2.0 years
1 - 8 Lacs
Hyderābād
On-site
Company: Qualcomm India Private Limited
Job Area: Engineering Group > Software Engineering

Job Description: Join the exciting Generative AI team at Qualcomm focused on integrating cutting-edge GenAI models on Qualcomm chipsets. The team uses Qualcomm chips' extensive heterogeneous computing capabilities to allow inference of GenAI models on-device without a need for connection to the cloud. Our inference engine is designed to help developers run neural network models trained in a variety of frameworks on Snapdragon platforms at blazing speeds while still sipping the smallest amount of power. Utilize this power-efficient hardware and software stack to run Large Language Models (LLMs) and Large Vision Models (LVMs) at near-GPU speeds!

Responsibilities: In this role, you will spearhead the development and commercialization of the Qualcomm AI Runtime (QAIRT) SDK on Qualcomm SoCs. As an AI inferencing expert, you'll push the limits of performance from large models. Your mastery in deploying large C/C++ software stacks using best practices will be essential. You'll stay on the cutting edge of GenAI advancements, understanding LLMs/Transformers and the nuances of edge-based GenAI deployment. Most importantly, your passion for the role of edge in AI's evolution will be your driving force.

Requirements:
- Master's/Bachelor's degree in computer science or equivalent.
- 2-4 years of relevant work experience in software development.
- Strong understanding of Generative AI models (LLMs, LVMs, LMMs) and their building blocks (self-attention, cross-attention, KV caching, etc.).
- Understanding of floating-point and fixed-point representations and quantization concepts.
- Experience optimizing algorithms for AI hardware accelerators (CPU/GPU/NPU).
- Strong C/C++ programming, design patterns, and OS concepts.
- Good scripting skills in Python.
- Excellent analytical and debugging skills.
- Good communication skills (verbal, presentation, written).
- Ability to collaborate across a globally diverse team and multiple interests.

Preferred Qualifications:
- Strong understanding of SIMD processor architecture and system design.
- Proficiency in object-oriented software development.
- Familiarity with Linux and Windows environments.
- Strong background in kernel development for SIMD architectures.
- Familiarity with frameworks like llama.cpp, MLX, and MLC is a plus.
- Good knowledge of PyTorch, TFLite, and ONNX Runtime is preferred.
- Experience with parallel computing systems and languages like OpenCL and CUDA is a plus.

Minimum Qualifications:
- Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Engineering or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 1+ year of Software Engineering or related work experience; OR PhD in Engineering, Information Systems, Computer Science, or related field.
- 2+ years of academic or work experience with a programming language such as C, C++, Java, Python, etc.
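The floating-point/fixed-point and quantization concepts listed in the requirements can be illustrated with symmetric per-tensor int8 quantization. This is a conceptual sketch only; production toolchains (e.g., those shipped with on-device AI SDKs) use calibration data, per-channel scales, and hardware-specific formats.

```python
def quantize_int8(values):
    """Symmetric per-tensor int8 quantization: map floats into [-127, 127] via one scale."""
    scale = max(abs(v) for v in values) / 127.0
    q = [max(-127, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float values from the int8 codes."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.0, 0.9]      # hypothetical layer weights
q, scale = quantize_int8(weights)     # scale = 1.27 / 127 = 0.01
restored = dequantize(q, scale)       # close to the originals, up to rounding error
```

The trade-off quantization buys is 4x smaller weights and cheap integer arithmetic on the accelerator, at the cost of the rounding error visible in `restored`.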
Posted 3 weeks ago
4.0 - 8.0 years
2 - 3 Lacs
Chennai
On-site
The Applications Development Intermediate Programmer Analyst is an intermediate-level position responsible for participation in the establishment and implementation of new or revised application systems and programs, in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities:
- Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas, to identify and define necessary system enhancements, including using script tools and analyzing/interpreting code.
- Consult with users, clients, and other technology groups on issues; recommend programming solutions; and install and support customer exposure systems.
- Apply fundamental knowledge of programming languages for design specifications.
- Analyze applications to identify vulnerabilities and security issues, and conduct testing and debugging.
- Serve as advisor or coach to new or lower-level analysts.
- Identify problems, analyze information, and make evaluative judgements to recommend and implement solutions.
- Resolve issues by identifying and selecting solutions through the application of acquired technical experience, guided by precedents.
- Operate with a limited level of direct supervision, exercising independence of judgement and autonomy.
- Act as SME to senior stakeholders and/or other team members.
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.
Key Responsibilities:
- Design and implement ETL pipelines using PySpark and Big Data tools on platforms like Hadoop, Hive, HDFS, etc.
- Write scalable Python code for machine learning preprocessing tasks, working with libraries such as pandas, scikit-learn, etc.
- Develop data pipelines to support model training, evaluation, and inference.

Skills:
- Proficiency in Python programming, with experience in PySpark for large-scale data processing.
- Hands-on experience with Big Data technologies: Hadoop, Hive, HDFS, etc.
- Exposure to machine learning workflows, the model lifecycle, and data preparation.
- Experience with ML libraries: scikit-learn, XGBoost, TensorFlow, PyTorch, etc.
- Exposure to cloud platforms (AWS/GCP) for data and AI workloads.

Qualifications:
- 4-8 years of relevant experience in the financial services industry
- Intermediate-level experience in an Applications Development role
- Consistently demonstrates clear and concise written and verbal communication
- Demonstrated problem-solving and decision-making skills
- Ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements

Education:
- Bachelor's degree/University degree or equivalent experience

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
- Job Family Group: Technology
- Job Family: Applications Development
- Time Type: Full time
- Most Relevant Skills: Please see the requirements listed above.
- Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law.
If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi . View Citi’s EEO Policy Statement and the Know Your Rights poster.
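The ML preprocessing work described above often starts with standardizing numeric features before training. A pure-Python sketch of z-score scaling on hypothetical values (scikit-learn's StandardScaler and PySpark's StandardScaler do the equivalent at scale):

```python
import math

def standardize(column):
    """Z-score a numeric column: subtract the mean, divide by the population std dev."""
    mean = sum(column) / len(column)
    var = sum((x - mean) ** 2 for x in column) / len(column)
    std = math.sqrt(var)
    return [(x - mean) / std for x in column]

amounts = [10.0, 20.0, 30.0, 40.0]   # hypothetical transaction amounts
scaled = standardize(amounts)         # zero mean, unit variance
```

Standardizing keeps features on comparable scales so that distance-based and gradient-based models are not dominated by whichever raw column happens to have the largest units.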
Posted 3 weeks ago
4.0 years
6 - 9 Lacs
Bengaluru
On-site
JOB DESCRIPTION
Join us to enhance credit card acquisition strategies and drive sustainable growth. As a Quant Analytics Associate within the Card Data and Analytics team, you will leverage your expertise in data engineering, analysis, and modeling to advance our credit card acquisition strategies. You will develop predictive models, create data assets, and refine processes to support acquisitions forecasts and offers. This individual contributor role is integral to driving the growth of our credit card portfolio through quantitative methods and data exploration.

Job Responsibilities:
- Provide tactical support and strategic oversight to Product, Marketing, Finance, and Risk teams for credit card acquisitions.
- Develop actionable data-driven insights for marketing campaigns.
- Leverage and develop data assets to improve acquisitions forecast quality.
- Support business goals by developing reports for senior leaders to monitor key performance metrics.
- Enhance efficiency and effectiveness by identifying and closing gaps in processes and systems.
- Ensure business continuity by adopting standards and best practices.
- Stay current with industry trends and emerging technologies.

Required Qualifications, Capabilities, and Skills:
- Degree in a quantitative discipline (e.g., engineering, mathematics, computer science).
- 4+ years of experience in data/decision science, forecasting, data management/engineering, and business intelligence.
- Proficiency in data ETL, analysis, visualization, and change management using tools like Snowflake, SAS, Python, R, Alteryx, Tableau, GitHub, Excel, PowerPoint.
- Ability to communicate clearly to audiences of varying technical levels.

Preferred Qualifications, Capabilities, and Skills:
- Experience with causal inference and machine learning techniques, including developing and deploying quantitative models.
- Professional experience in consumer banking, lending, or similarly regulated industries.
ABOUT US JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. ABOUT THE TEAM Our Consumer & Community Banking division serves our Chase customers through a range of financial services, including personal banking, credit cards, mortgages, auto financing, investment advice, small business loans and payment processing. We’re proud to lead the U.S. in credit card sales and deposit growth and have the most-used digital solutions – all while ranking first in customer satisfaction. The CCB Data & Analytics team responsibly leverages data across Chase to build competitive advantages for the businesses while providing value and protection for customers. The team encompasses a variety of disciplines from data governance and strategy to reporting, data science and machine learning. 
We have a strong partnership with Technology, which provides cutting edge data and analytics infrastructure. The team powers Chase with insights to create the best customer and business outcomes.
Posted 3 weeks ago
2.0 years
30 Lacs
Vellore, Tamil Nadu, India
Remote
Experience: 2.00+ years
Salary: INR 3000000.00 / year (based on experience)
Expected Notice Period: 30 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by: Yugen AI)
(*Note: This is a requirement for one of Uplers' clients - Yugen AI)

What do you need for this opportunity?
Must-have skills: LLM, Kubernetes, Machine Learning, Python, LLM finetuning, Deployment, Machine Learning Framework

Yugen AI is looking for: We are looking for a talented LLMOps Engineer to design, deploy, and operationalise agentic solutions for fraud investigations. This is critical to reducing fraud-investigation TAT (turn-around time) by more than 70%. In this role, you will work directly with our CTO, Soumanta Das, as well as a team of 5 engineers (Backend Engineers, Data Engineers, Platform Engineers).

Responsibilities:
- Deploy and scale LLM inference workloads on Kubernetes (K8s) with 99.9% uptime.
- Build agentic tools and services for fraud investigations with complex reasoning capabilities.
- Work with Platform Engineers to set up monitoring and observability (e.g., Prometheus, Grafana) to track model performance and system health.
- Fine-tune open-source LLMs using TRL or similar libraries.
- Use Terraform for infrastructure-as-code to support scalable ML deployments.
- Contribute to tech blogs, especially technical deep dives into the latest research in the field of reasoning.

Requirements:
- Strong programming skills (Python, etc.) and problem-solving abilities.
- Hands-on experience with open-source LLM inference and serving frameworks such as vLLM.
- Deep expertise in Kubernetes (K8s) for orchestrating LLM workloads.
- Some familiarity with fine-tuning and deploying open-source LLMs using GRPO, TRL, or similar frameworks.
- Familiarity with high-availability systems.

How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
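The 99.9% uptime target in the responsibilities above maps to a concrete downtime budget; a quick sketch of the arithmetic:

```python
def downtime_budget_minutes(availability, period_days=30):
    """Minutes of allowed downtime per period at a given availability target."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - availability)

# At "three nines" over a 30-day month: 43,200 total minutes,
# of which 0.1% may be spent down.
budget = downtime_budget_minutes(0.999)   # roughly 43.2 minutes per month
```

That budget is what the Prometheus/Grafana alerting mentioned above is meant to protect: once a rolling window has burned most of it, releases pause.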
Posted 3 weeks ago
2.0 years
30 Lacs
Coimbatore, Tamil Nadu, India
Remote
Experience: 2.00+ years
Salary: INR 3000000.00 / year (based on experience)
Expected Notice Period: 30 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by: Yugen AI)
(*Note: This is a requirement for one of Uplers' clients - Yugen AI)

What do you need for this opportunity?
Must-have skills: LLM, Kubernetes, Machine Learning, Python, LLM finetuning, Deployment, Machine Learning Framework

Yugen AI is looking for: We are looking for a talented LLMOps Engineer to design, deploy, and operationalise agentic solutions for fraud investigations. This is critical to reducing fraud-investigation TAT (turn-around time) by more than 70%. In this role, you will work directly with our CTO, Soumanta Das, as well as a team of 5 engineers (Backend Engineers, Data Engineers, Platform Engineers).

Responsibilities:
- Deploy and scale LLM inference workloads on Kubernetes (K8s) with 99.9% uptime.
- Build agentic tools and services for fraud investigations with complex reasoning capabilities.
- Work with Platform Engineers to set up monitoring and observability (e.g., Prometheus, Grafana) to track model performance and system health.
- Fine-tune open-source LLMs using TRL or similar libraries.
- Use Terraform for infrastructure-as-code to support scalable ML deployments.
- Contribute to tech blogs, especially technical deep dives into the latest research in the field of reasoning.

Requirements:
- Strong programming skills (Python, etc.) and problem-solving abilities.
- Hands-on experience with open-source LLM inference and serving frameworks such as vLLM.
- Deep expertise in Kubernetes (K8s) for orchestrating LLM workloads.
- Some familiarity with fine-tuning and deploying open-source LLMs using GRPO, TRL, or similar frameworks.
- Familiarity with high-availability systems.

How to apply for this opportunity?
Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 3 weeks ago