5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About The Role: Senior Artificial Intelligence Engineer - Gen AI
We are looking for a high-energy, tech-savvy individual for the role of Senior AI Engineer - Gen AI at Eucloid. The candidate will solve complex problems, pick up new technologies, and work at the cutting edge of innovation. The candidate is expected to lead the following work-streams: Develop and optimize Python applications for AI-driven products Build and integrate LLM-based applications with structured and unstructured data Work with large datasets, AI pipelines, and model inference APIs Collaborate with AI researchers and engineers to translate ideas into production-ready solutions Stay ahead of emerging AI technologies and contribute to continuous innovation
The ideal candidate will have the following background and skills: Undergraduate degree in a quantitative discipline such as engineering, science, or economics from a top-tier institution 5+ years of experience in Python-based product development Strong foundation in data structures, algorithms, and system design Proven problem-solving ability: you love tackling complex technical challenges Hands-on experience with APIs, backend development, and cloud platforms (AWS/GCP/Azure) Experience in AI, ML, or NLP is a bonus, but a strong aptitude to learn AI frameworks quickly is what matters most Exposure to LangChain, OpenAI APIs, or vector databases (advantageous, but not mandatory)
The role offers very attractive compensation (Base + Bonus) and is based out of Gurugram. Please reach out to chandershekhar.verma@eucloid.com if you want to apply.
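The "LLM-based applications with structured data" work-stream above usually starts with grounding: retrieve the relevant record, then inline it into the prompt before any model call. A minimal pure-Python sketch of that pattern; all names and records here are illustrative, not Eucloid's actual stack:

```python
# Hypothetical grounding step: look up a structured record and embed it in
# the prompt so the LLM answers from data rather than from its own memory.
RECORDS = [
    {"id": 1, "region": "North", "revenue": 120000},
    {"id": 2, "region": "South", "revenue": 95000},
]

def fetch_context(region: str) -> dict:
    """Return the structured record for a region (illustrative lookup)."""
    for row in RECORDS:
        if row["region"] == region:
            return row
    raise KeyError(region)

def build_prompt(question: str, region: str) -> str:
    """Assemble a prompt that carries the retrieved record as context."""
    ctx = fetch_context(region)
    return (
        f"Context: region={ctx['region']}, revenue={ctx['revenue']}\n"
        f"Question: {question}\n"
        "Answer using only the context."
    )

prompt = build_prompt("What was the revenue?", "North")
```

In a real system the prompt would then be sent to a model inference API; the retrieval step is the part that makes the answer auditable.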
Posted 2 weeks ago
4.0 years
4 - 8 Lacs
Gurgaon
On-site
About Us We turn customer challenges into growth opportunities. Material is a global strategy partner to the world’s most recognizable brands and innovative companies. Our people around the globe thrive by helping organizations design and deliver rewarding customer experiences. We use deep human insights, design innovation and data to create experiences powered by modern technology. Our approaches speed engagement and growth for the companies we work with and transform relationships between businesses and the people they serve. Srijan, a Material company, is a renowned global digital engineering firm with a reputation for solving complex technology problems using its deep technology expertise and leveraging strategic partnerships with top-tier technology partners.
Job Title: Senior/Lead Data Scientist Experience Required: 4+ Years
About the Role: We are seeking a skilled and innovative Machine Learning Engineer with 4+ years of experience to join our AI/ML team. The ideal candidate will have strong expertise in Computer Vision, Generative AI (GenAI), and Deep Learning, with a proven track record of deploying models in production environments using Python, MLOps best practices, and cloud platforms like Azure ML. Key Responsibilities: Design, develop, and deploy AI/ML models for Computer Vision and GenAI use cases Build, fine-tune, and evaluate deep learning architectures (CNNs, Transformers, Diffusion models, etc.) 
Collaborate with product and engineering teams to integrate models into scalable pipelines and applications Manage the complete ML lifecycle using MLOps practices (versioning, CI/CD, monitoring, retraining) Develop reusable Python modules and maintain high-quality, production-grade ML code Work with Azure Machine Learning Services for training, inference, and model management Analyze large-scale datasets, extract insights, and prepare them for model training and validation Document technical designs, experiments, and decision-making processes
Required Skills & Experience: 4–5 years of hands-on experience in Machine Learning and Deep Learning Strong experience in Computer Vision tasks such as object detection, image segmentation, OCR, etc. Practical knowledge and implementation experience in Generative AI (LLMs, diffusion models, embeddings) Solid programming skills in Python, with experience using frameworks like PyTorch, TensorFlow, OpenCV, and Transformers (Hugging Face) Good understanding of MLOps concepts, model deployment, and lifecycle management Experience with cloud platforms, preferably Azure ML, for scalable model training and deployment Familiarity with data labeling tools, synthetic data generation, and model interpretability Strong problem-solving, debugging, and communication skills
Good to Have: Experience with NLP, multimodal learning, or 3D computer vision Familiarity with containerization tools (Docker, Kubernetes) Experience in building end-to-end ML pipelines using MLflow, DVC, or similar tools Exposure to CI/CD pipelines for ML projects and working in agile development environments
Education: Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Data Science, or a related field
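The "versioning" piece of the MLOps lifecycle above can be pictured as an append-only model registry that promotes whichever version scores best on a validation metric. This is a hypothetical pure-Python sketch of the idea, not the Azure ML registry API:

```python
# Illustrative model registry: each register() call records an immutable
# version with a content digest and its validation metrics.
import hashlib

class ModelRegistry:
    def __init__(self):
        self._versions = []  # append-only version history

    def register(self, weights: bytes, metrics: dict) -> int:
        """Store a new model version; the digest identifies the exact weights."""
        digest = hashlib.sha256(weights).hexdigest()[:12]
        version = len(self._versions) + 1
        self._versions.append(
            {"version": version, "digest": digest, "metrics": metrics}
        )
        return version

    def best(self, metric: str) -> dict:
        """Pick the version to promote: highest validation score wins."""
        return max(self._versions, key=lambda v: v["metrics"][metric])

reg = ModelRegistry()
reg.register(b"weights-v1", {"f1": 0.81})
reg.register(b"weights-v2", {"f1": 0.86})
```

Real registries (MLflow, Azure ML) add artifact storage and stage transitions, but the promote-by-metric decision is the same shape.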
Posted 2 weeks ago
5.0 years
2 - 9 Lacs
Noida
On-site
Posted On: 14 Jul 2025 Location: Noida, UP, India Company: Iris Software Why Join Us? Are you inspired to grow your career at one of India’s Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest-growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It’s happening right here at Iris Software. About Iris Software At Iris Software, our vision is to be our clients’ most trusted technology partner, and the first choice for the industry’s top professionals to realize their full potential. With over 4,300 associates across India, the U.S.A., and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation. Working at Iris Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about “Being Your Best” – as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We’re a place where everyone can discover and be their best version. Job Description We are seeking a Data Science Engineer to design, build, and optimize scalable data and machine learning systems. This role requires strong software engineering skills, a deep understanding of data science workflows, and the ability to work cross-functionally to translate business problems into production-level data solutions. 
Key Responsibilities: Design, implement, and maintain data science pipelines from data ingestion to model deployment. Collaborate with data scientists to operationalize ML models and algorithms in production environments. Develop robust APIs and services for ML model inference and integration. Build and optimize large-scale data processing systems using Spark, Pandas, or similar tools. Ensure data quality and pipeline reliability through rigorous testing, validation, and monitoring. Work with cloud infrastructure (AWS) for scalable ML deployment. Manage model versioning, feature engineering workflows, and experiment tracking. Optimize performance of models and pipelines for latency, cost, and throughput. Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field. 5+ years of experience in a data science, ML engineering, or software engineering role. Proficiency in Python (preferred) and SQL; knowledge of Java, Scala, or C++ is a plus. Experience with data science libraries like Scikit-learn, XGBoost, TensorFlow, or PyTorch. Familiarity with ML deployment tools such as MLflow, SageMaker, or Vertex AI. Solid understanding of data structures, algorithms, and software engineering best practices. Experience working with databases (SQL, NoSQL) and data lakes (e.g., Delta Lake, BigQuery). Preferred Qualifications: Experience with containerization and orchestration (Docker, Kubernetes). Experience working in Agile or cross-functional teams. Familiarity with streaming data platforms (Kafka, Spark Streaming, Flink). Soft Skills: Strong communication skills to bridge technical and business teams. Excellent problem-solving and analytical thinking. Self-motivated and capable of working independently or within a team. Passion for data and a curiosity-driven mindset. 
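The "data quality and pipeline reliability" responsibility above comes down to validating between stages so bad rows never reach model training. A toy sketch of an ingest → validate → transform pipeline; the stage names and records are made up:

```python
# Illustrative three-stage pipeline: each stage hands validated output to the next.
def ingest() -> list[dict]:
    """Stand-in for reading raw rows from a source system."""
    return [{"user": "a", "amount": 10.0}, {"user": "b", "amount": None}]

def validate(rows: list[dict]) -> list[dict]:
    """Drop rows that fail quality checks instead of letting them poison training."""
    return [r for r in rows if r["amount"] is not None and r["amount"] >= 0]

def transform(rows: list[dict]) -> list[dict]:
    """Derive a simple feature from each clean row (threshold is arbitrary)."""
    return [{**r, "high_value": r["amount"] > 5} for r in rows]

clean = transform(validate(ingest()))
```

Production pipelines express the same stages in Spark or Pandas and attach monitoring to the validate step, but the hand-off contract is identical.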
Mandatory Competencies Data Science and Machine Learning - Data Science and Machine Learning - AI/ML Database - Database Programming - SQL Programming Language - Python - Pandas Cloud - GCP - Cloud Data Fusion, Dataproc, BigQuery, Cloud Composer, Cloud Bigtable Database - NoSQL - Elasticsearch, Solr, Lucene, etc. Programming Language - Other Programming Language - Scala Agile - Agile - SCRUM DevOps/Configuration Mgmt - DevOps/Configuration Mgmt - Containerization (Docker, Kubernetes) Cloud - AWS - TensorFlow on AWS, AWS Glue, AWS EMR, Amazon Data Pipeline, AWS Redshift Behavioral - Communication and collaboration Perks and Benefits for Irisians At Iris Software, we offer world-class benefits designed to support the financial, health and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
Posted 2 weeks ago
0 years
0 Lacs
India
On-site
About Us The name Interview Kickstart might have given you a clue. But here's the 30-second elevator pitch - Interviews can be challenging. And when it comes to the top tech companies like Google, Apple, Facebook, Netflix, etc., they can be downright brutal. Most candidates don't make it simply because they don't prepare well enough. Interview Kickstart (IK) helps candidates nail the most challenging tech interviews. To keep up with the upcoming trends, we are launching our new Agentic AI course. Requirements Technical Expertise in at least one of the following topics: Fundamentals of Agentic AI Key AI Agent Frameworks: AutoGen, LangChain, CrewAI, LangGraph Understanding Multi-Agent Systems: Reflection, Planning, and Task Automation Introduction to the ReAct (Reasoning + Action) Framework Prompt Engineering & Function Calling Building a simple AI Agent (Code-based for SWEs / Low Code for other Tech Domains) Develop a modular AI agent using LangGraph or CrewAI, capable of reasoning, decision-making, and tool usage Understand the role of graph-based agent workflows (LangGraph) and multi-agent collaboration (CrewAI) Deploy an interactive AI assistant that can execute tasks autonomously Building Applications with LLMs & Agents (Advanced) AI Agent Memory & Long-Term Context Multi-Agent Collaboration & Orchestration Deployment (LLMOps, LangChain, LlamaIndex, etc.) Emerging trends: LLMOps, guardrails, and multi-agent systems Evaluating & Optimizing AI Agents: Performance & Cost Efficiency AI Agent Performance Monitoring & Logging Optimizing Inference Speed & Model Costs Fine-Tuning vs. 
Prompt Engineering Trade-offs Evaluating Agent Effectiveness with Human Feedback Designing Robust and Scalable AI Systems for Modern Applications (For SWEs) Introduction to AI system design: scalability, reliability, performance, and cost optimization Common design patterns for AI applications: pipeline, event-driven, and microservices System architecture for LLM applications: inference engine, data pipeline, API layer, and frontend integration AI-specific challenges: managing large datasets, optimizing latency, and handling model updates Advanced topics: LLMOps, multi-model orchestration, and AI system security Evaluating AI systems: throughput, reliability, accuracy, and cost-efficiency Building Advanced Agents (Code-based for SWEs) Build an Advanced Horizontal Multi-Agent System for use cases like: A multi-agent system that automates DevOps workflows, including CI/CD monitoring, infrastructure scaling, and system health diagnostics A multi-agent AI healthcare assistant to automate medical FAQs, appointment scheduling, and patient history retrieval An agentic system that analyzes application security risks, detects vulnerabilities, and suggests fixes An AI-powered multi-agent system that analyzes, summarizes, and extracts key insights from legal documents An AI-driven multi-agent system for supply chain automation, handling inventory management, demand forecasting, and logistics tracking Agentic AI For PMs Build Agentic AI systems using low code / no code tools for use cases relevant to PMs such as: AI-Powered Feature Prioritization Tool Customer Sentiment Analysis & Roadmap Alignment AI-Driven Competitive Landscape Analysis Agentic AI For TPMs Build Agentic AI systems using low code / no code tools for use cases relevant to TPMs such as: AI-Powered Stakeholder Management Bot Multi-Agent AI System for Program Risk Management AI-Driven Engineering Capacity & Resource Allocation Agent Agentic AI For EMs Build Agentic AI systems using low code / no code tools for use 
cases relevant to EMs such as: Multi-Agent System for Engineering Productivity & Burnout Monitoring Multi-Agent AI System for Engineering Roadmap & Strategy Planning AI Agent for Cloud Cost Optimization in Engineering Workloads Preferred Qualifications: Prior experience building and deploying LLM or agent-based applications in real-world settings Strong proficiency with agent frameworks like LangGraph, CrewAI, or LangChain (code-based for SWEs and low-code or no-code for other tech domains) Strong understanding of system design principles, especially in AI/ML-based architectures Demonstrated ability to explain complex technical topics to diverse audiences Experience teaching, mentoring, or creating content for working professionals in tech Excellent communication and collaboration skills, with a learner-first mindset Bonus: Contributions to open-source AI projects, publications, or prior experience with AI upskilling programs Responsibilities: Instruction Delivery: Conduct lectures, workshops, and interactive sessions to teach machine learning principles, algorithms, and methodologies. Instructors may use various teaching methods, including lectures, demonstrations, hands-on exercises, and group discussions Industry Engagement: Staying current with the latest trends and advancements in machine learning and related fields, engaging with industry professionals, and collaborating on projects or internships to provide students with real-world experiences Research and Development: Conducting research in machine learning and contributing to developing new techniques, models, or applications Constantly improve the session flow and delivery by working with other instructors, subject matter experts, and the IK team Help the IK team in onboarding and training other instructors and coaches Have regular discussions with IK's curriculum team on evolving the curriculum Should be willing to work on weekends/evenings and be available as per the Pacific time zone.
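The ReAct (Reasoning + Action) framework named in the curriculum above can be shown from scratch: the agent loops between a model "thought" and a tool call, feeding each observation back into the next step. The model here is a deterministic stub that emits a fixed plan; a real agent would call an LLM, and the tool set is illustrative:

```python
# From-scratch ReAct loop: alternate model steps with tool calls until the
# model emits a final answer. No agent framework is used.
TOOLS = {"calc": lambda expr: str(eval(expr, {"__builtins__": {}}))}

def fake_model(question: str, history: list[str]) -> str:
    """Stub standing in for an LLM: first plan a tool call, then answer."""
    if not history:
        return "Action: calc[2 + 3]"
    return f"Final Answer: {history[-1]}"

def react(question: str, max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):
        step = fake_model(question, history)
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer: ").strip()
        # Parse "Action: tool[argument]" and run the tool.
        tool, arg = step.removeprefix("Action: ").rstrip("]").split("[", 1)
        history.append(TOOLS[tool](arg))  # observation fed back next step
    raise RuntimeError("agent did not terminate")
```

Frameworks like LangGraph and CrewAI wrap this loop in graph and role abstractions, but the think/act/observe cycle underneath is the same.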
Posted 2 weeks ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About the Role We’re looking for a Senior AI Engineer to bring our AI research to life. You’ll be the bridge between paper and product—scaling prototypes into production systems, integrating models into real-world workflows, and powering domain intelligence for use cases like personalized business insights over structured enterprise data. You’ll work closely with researchers, designers, and product teams to ship features that are fast, reliable, and deeply useful. ⸻ Key Responsibilities • Turn research prototypes into robust, scalable APIs and services. • Build end-to-end AI features that simplify UX and maximize utility. • Collaborate with research and frontend teams to deliver seamless AI-powered workflows. • Own engineering quality: write clean, testable code, build CI/CD pipelines, and implement monitoring and observability. • Contribute to applied research modules—LLM customization, reliability upgrades, and GenAI system integrations. • Mentor junior engineers, share best practices, and help build a culture of engineering excellence. ⸻ Qualifications • 3+ years of experience building and shipping AI systems in production. • Strong fundamentals in data structures, system design, and clean code practices. • Hands-on experience with: • Model serving and inference (e.g., Triton, vLLM, FastAPI) • Retrieval systems (vector DBs, key-value stores) • Relational databases and structured data integration • Logging, monitoring, and debugging for ML pipelines • Familiar with GenAI building blocks: prompt engineering, model APIs, agent workflows. • Skilled with dev tooling (Docker, GitHub Actions, etc.) and cloud platforms (AWS/GCP). • You’ve either been at a Tier 1 college—or built something others now rely on. • [Bonus]: DM Ayush with the keyword “gen-engine” to prove you’re detail-oriented. ⸻ Why Join Genloop? Genloop is a research-first AI company building the next generation of customized, continuously learning AI systems. 
We specialize in adapting open-weight LLMs to enterprise domains—embedding them with domain memory and self-improving intelligence through feedback loops. Our team includes talent from Stanford, Apple, IITs, and top-tier AI firms—united by a shared goal of building production-grade, high-impact AI. ⸻ Compensation & Benefits We offer competitive salaries, meaningful equity, and benefits designed to support great work. Compensation is based on experience, expertise, and location. ⸻ Diversity & Inclusion Genloop is an Equal Opportunity Employer. We’re committed to building a diverse and inclusive workplace where everyone has a voice—and where great ideas can come from anywhere.
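The "retrieval systems (vector DBs)" qualification in the listing above reduces, at its core, to nearest-neighbor search over embeddings. A brute-force cosine top-k sketch with made-up two-dimensional embeddings; production systems use vector databases with approximate-nearest-neighbor indexes instead of a linear scan:

```python
# Minimal in-memory vector retrieval: rank documents by cosine similarity.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query: list[float], index: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the k document ids most similar to the query vector."""
    return sorted(index, key=lambda doc: cosine(query, index[doc]), reverse=True)[:k]

# Toy 2-d "embeddings"; real ones come from an embedding model.
index = {"pricing": [1.0, 0.0], "returns": [0.0, 1.0], "shipping": [0.7, 0.7]}
hits = top_k([1.0, 0.1], index)
```

The retrieved ids would then feed a prompt-assembly step, which is the usual bridge between a vector store and an LLM feature.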
Posted 2 weeks ago
6.0 years
20 - 25 Lacs
Bengaluru, Karnataka, India
Remote
Job Title: Machine Learning Engineer – 2 Location: Onsite – Bengaluru, Karnataka, India Experience Required: 3 – 6 Years Compensation: ₹20 – ₹25 LPA Employment Type: Full-Time Work Mode: Onsite Only (No Remote) About the Company:- A fast-growing Y Combinator-backed SaaS startup is revolutionizing underwriting in the insurance space through AI and Generative AI. Their platform empowers insurance carriers in the U.S. to make faster, more accurate decisions by automating key processes and enhancing risk assessment. As they expand their AI capabilities, they’re seeking a Machine Learning Engineer – 2 to build scalable ML solutions using NLP, Computer Vision, and LLM technologies. Role Overview:- As a Machine Learning Engineer – 2, you'll take ownership of designing, developing, and deploying ML systems that power critical features across the platform. You'll lead end-to-end ML workflows, working with cross-functional teams to deliver real-world AI solutions that directly impact business outcomes. 
Key Responsibilities:- Design and develop robust AI product features aligned with user and business needs Maintain and enhance existing ML/AI systems Build and manage ML pipelines for training, deployment, monitoring, and experimentation Deploy scalable inference APIs and conduct A/B testing Optimize GPU architectures and fine-tune transformer/LLM models Build and deploy LLM applications tailored to real-world use cases Implement DevOps/MLOps best practices with tools like Docker and Kubernetes Tech Stack & Tools Machine Learning & LLMs GPT, LLaMA, Gemini, Claude, Hugging Face Transformers PyTorch, TensorFlow, Scikit-learn LLMOps & MLOps LangChain, LangGraph, LangFlow, Langfuse MLflow, SageMaker, LlamaIndex, AWS Bedrock, Azure AI Cloud & Infrastructure AWS, Azure Kubernetes, Docker Databases MongoDB, PostgreSQL, Pinecone, ChromaDB Languages Python, SQL, JavaScript What You’ll Do Collaborate with product, research, and engineering teams to build scalable AI solutions Implement advanced NLP and Generative AI models (e.g., RAG, Transformers) Monitor and optimize model performance and deployment pipelines Build efficient, scalable data and feature pipelines Stay updated on industry trends and contribute to internal innovation Present key insights and ML solutions to technical and business stakeholders Requirements Must-Have:- 3–6 years of experience in Machine Learning and software/data engineering Master’s degree (or equivalent) in ML, AI, or related technical fields Strong hands-on experience with Python, PyTorch/TensorFlow, and Scikit-learn Familiarity with MLOps, model deployment, and production pipelines Experience working with LLMs and modern NLP techniques Ability to work collaboratively in a fast-paced, product-driven environment Strong problem-solving and communication skills Bonus Certifications such as: AWS Machine Learning Specialty AWS Solution Architect – Professional Azure Solutions Architect Expert Why Apply Work directly with a high-caliber founding 
team Help shape the future of AI in the insurance space Gain ownership and visibility in a product-focused engineering role Opportunity to innovate with state-of-the-art AI/LLM tech Be part of a fast-moving team with real market traction 📍 Note: This is an onsite-only role based in Bengaluru. Remote work is not available. Skills: PostgreSQL, Docker, LLMs and modern NLP techniques, machine learning, computer vision, TensorFlow, Scikit-learn, PyTorch, Python, NLP, AWS, ML/AI, SQL, MLOps, Azure, JavaScript, software/data engineering, Kubernetes, MongoDB
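The "deploy scalable inference APIs and conduct A/B testing" responsibility above typically relies on deterministic bucketing: hashing the user id together with the experiment name so a user's assignment is stable across sessions and independent between experiments. A hedged sketch; the function name and 50/50 split are illustrative:

```python
# Deterministic A/B assignment: the same (experiment, user) pair always
# hashes to the same bucket, with no assignment table to store.
import hashlib

def ab_bucket(user_id: str, experiment: str, treat_fraction: float = 0.5) -> str:
    """Map a user to 'treatment' or 'control' for a given experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    point = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "treatment" if point < treat_fraction else "control"

b1 = ab_bucket("user-42", "new-ranker")
```

An inference API would branch on the returned bucket to pick the candidate model, then log the bucket alongside outcomes for the later significance test.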
Posted 3 weeks ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Minimum qualifications: Master's degree in Statistics or Economics, a related field, or equivalent practical experience. 8 years of work experience using analytics to solve product or business problems, coding (e.g., Python, R, SQL), querying databases or statistical analysis, or 6 years of work experience with a PhD degree. Experience with statistical data analysis such as linear models, multivariate analysis, causal inference, or sampling methods. Experience with statistical software (e.g., SQL, R, Python, MATLAB, pandas) and database languages along with Statistical Analysis, Modeling and Inference. Preferred qualifications: Experience translating analysis results into business recommendations. Experience understanding potential outcomes framework and with causal inference methods (e.g., split-testing, instrumental variables, difference-in-difference methods, fixed effects regression, panel data models, regression discontinuity, matching estimators). Experience selecting tools to solve data analysis issues. Experience articulating business questions and using data to find a solution. Knowledge of structural econometric methods. About the job At Google, data drives all of our decision-making. Quantitative Analysts work all across the organization to help shape Google's business and technical strategies by processing, analyzing and interpreting huge data sets. Using analytical excellence and statistical methods, you mine through data to identify opportunities for Google and our clients to operate more efficiently, from enhancing advertising efficacy to network infrastructure optimization to studying user behavior. As an analyst, you do more than just crunch the numbers. You work with Engineers, Product Managers, Sales Associates and Marketing teams to adjust Google's practices according to your findings. Identifying the problem is only half the job; you also figure out the solution. 
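The difference-in-difference method listed in the preferred qualifications estimates a treatment effect as the treated group's pre/post change minus the control group's change, netting out the shared trend. A worked sketch with illustrative numbers:

```python
# Difference-in-differences: subtract the control group's drift from the
# treated group's change to isolate the treatment effect.
def diff_in_diff(treat_pre: float, treat_post: float,
                 ctrl_pre: float, ctrl_post: float) -> float:
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Control drifts +2 on its own; the treated group moves +7, so the
# estimated effect is +5 (illustrative figures only).
effect = diff_in_diff(treat_pre=10.0, treat_post=17.0,
                      ctrl_pre=11.0, ctrl_post=13.0)
```

The estimate is only credible under the parallel-trends assumption, which is why the qualification pairs it with panel-data and matching methods.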
Responsibilities Interact cross-functionally with a variety of leaders and teams, and work with Engineers and Product Managers to identify opportunities for design and to assess improvements for advertising measurement products. Collaborate with teams to define questions about advertising effectiveness, incrementality assessment, the impact of privacy, user behavior, brand building, bidding etc., and develop and implement quantitative methods to answer those questions. Work with large, complex data sets. Solve difficult, non-routine analysis problems, applying advanced analytical methods as needed. Conduct analyses that include data gathering and requirements specification, exploratory data analysis (EDA), model development, and delivery of results to business partners and executives. Build and prototype analysis pipelines iteratively to provide insights at scale. Develop knowledge of Google data structures and metrics, advocating for changes where needed for product development. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Posted 3 weeks ago
12.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Position Summary... About Team The Conversational AI team at Walmart builds and deploys core AI assistant experiences across Walmart. The team builds the core AI platform, which powers multiple conversational assistants for shopping, customer care and employee assistance across Walmart’s US and International markets. With tens of millions of active users across multiple countries, these are among the largest vertical AI assistant experiences in the industry. The team is part of a larger “Emerging Tech” org, which is focused on utilizing emerging technologies like Conversational AI, Extended Reality, Spatial awareness, etc. to reimagine and build intuitive and immersive experiences for our customers, sellers and associates. We are looking for a Principal Data Scientist to lead the next evolution of the AI assistant platform by defining and building highly scalable Generative AI systems and infrastructure. This will be a hands-on technical leadership role which requires expertise at the intersection of machine learning, LLMs, ASR, large-scale distributed systems, and more. Our AI assistants are rapidly incorporating LLMs, agent-based architecture with custom fine-tuned models, and multiple data modalities. You can see more about the work our team does in these articles: https://medium.com/walmartglobaltech/tagged/voice-assistant https://www.forbes.com/sites/bernardmarr/2024/02/15/the-amazing-ways-walmart-is-using-generative-ai What you'll do... In this role, as a Principal Data Scientist you will own the technical roadmap and architecture for multiple initiatives within the Conversational AI platform. The responsibilities include: Partner with key business stakeholders and be a thought leader in the Conversational AI space for driving the development and planning of POCs and production AI solutions. Translate business requirements into strategies, initiatives, and projects. Align these with business strategy and objectives. Drive the execution of the deliverables. Design, build, test and deploy cutting-edge AI solutions at scale, impacting millions of customers worldwide. Lead multiple initiatives within the platform, with a focus on efficiency, innovation, and leverage. Collaborate with applied scientists, ML engineers, software engineers and product managers to develop the next generation of AI assistant experiences. Stay up to date on industry trends in the latest Generative AI, speech processing and AI assistant architecture patterns (e.g. agent chaining, CoT, RAG, LLM guardrails, etc.) Provide technical leadership, guidance and mentorship to the highly skilled and motivated data scientists in the team. 
Lead innovation and efficiency through the complete problem-solving cycle, from approach to methods to development and results. Partner and engage with associates in other regions for delivering the best services to customers around the globe. Proactively participate in the external community to strengthen Walmart's brand and gain insights into industry practices. Drive innovation in the charter and publish research in Rank A+ AI conferences like ICML, AAAI, NIPS, ACL, etc. What You Will Bring Master's with 12+ years or Ph.D. with 10+ years of relevant experience. Educational qualification should be in Computer Science, Statistics, Mathematics, or a related area. Strong track record in a data science tech lead role (5+ years), with deployment of large-scale AI services. Extensive experience in the design, development, and delivery of AI products with a large customer base, preferably in conversational AI, speech, vision or machine learning-based systems. Strong experience in machine learning: Gen AI, NLP, speech processing, image processing, classification models, regression models, forecasting, unsupervised models, optimization, graph ML, causal inference, causal ML, statistical learning, and experimentation. Deep and demonstrated personal interest in the generative AI space, including awareness of the latest architectural advancements in building generative AI applications. Excellent decision-making skills with the ability to balance conflicting interests in a complex and fast-paced environment. Deep experience in simultaneously leading multiple data science initiatives end to end – from translating business needs to analytical asks, leading the process of building solutions and the eventual act of deployment and maintenance. Thorough understanding of distributed technologies, public cloud technologies, scalable platforms, ML platforms and Operational Excellence. Experience working with geographically distributed teams. 
Business acumen: combining technical vision with business insights. Research acumen: papers published in top-tier AI conferences like AAAI, NIPS, ICML, KDD, etc. Strong experience with big data platforms – Hadoop (Hive, Map Reduce, HQL, Scala). Strong experience in Python. About Walmart Global Tech Imagine working in an environment where one line of code can make life easier for hundreds of millions of people. That’s what we do at Walmart Global Tech. We’re a team of software engineers, data scientists, cybersecurity experts and service professionals within the world’s leading retailer who make an epic impact and are at the forefront of the next retail disruption. People are why we innovate, and people power our innovations. We are people-led and tech-empowered. We train our team in the skillsets of the future and bring in experts like you to help us grow. We have roles for those chasing their first opportunity as well as those looking for the opportunity that will define their career. Here, you can kickstart a great career in tech, gain new skills and experience for virtually every industry, or leverage your expertise to innovate at scale, impact millions and reimagine the future of retail. Flexible, hybrid work We use a hybrid way of working, with primary in-office presence coupled with an optimal mix of virtual presence. We use our campuses to collaborate and be together in person, as business needs require and for development and networking opportunities. This approach helps us make quicker decisions, remove location barriers across our global team, and be more flexible in our personal lives. Benefits Beyond our great compensation package, you can receive incentive awards for your performance. Other great perks include a host of best-in-class benefits: maternity and parental leave, PTO, health benefits, and much more. Belonging We aim to create a culture where every associate feels valued for who they are, rooted in respect for the individual. 
Our goal is to foster a sense of belonging, to create opportunities for all our associates, customers and suppliers, and to be a Walmart for everyone. At Walmart, our vision is "everyone included." By fostering a workplace culture where everyone is—and feels—included, everyone wins. Our associates and customers reflect the makeup of all 19 countries where we operate. By making Walmart a welcoming place where all people feel like they belong, we’re able to engage associates, strengthen our business, improve our ability to serve customers, and support the communities where we operate. Minimum Qualifications... Outlined below are the required minimum qualifications for this position. If none are listed, there are no minimum qualifications. Minimum Qualifications: Option 1: Bachelor's degree in Statistics, Economics, Analytics, Mathematics, Computer Science, Information Technology or related field and 5 years' experience in an analytics related field. Option 2: Master's degree in Statistics, Economics, Analytics, Mathematics, Computer Science, Information Technology or related field and 3 years' experience in an analytics related field. Option 3: 7 years' experience in an analytics or related field. Preferred Qualifications... Outlined below are the optional preferred qualifications for this position. If none are listed, there are no preferred qualifications. Primary Location... 4, 5, 6, 7 Floor, Building 10, SEZ, Cessna Business Park, Kadubeesanahalli Village, Varthur Hobli, India R-2187100
Posted 3 weeks ago
6.0 years
0 Lacs
India
On-site
About Tricket: Tricket, a division of Helpen.In Enterprises Private Limited, recognised by Startup Bihar, is a dynamic and rapidly growing skill-gaming platform. As we continue to expand, we are seeking an experienced Backend Developer on a full-time/part-time basis to contribute to our exciting projects. Role Overview: As a Backend Developer, you will play a crucial role in the development and optimisation of Tricket's backend systems. We are looking for a candidate with a strong background in Node.js, JavaScript, REST APIs, Machine Learning, LLMs and Generative AI, and experience with cloud platforms such as AWS and Firebase. Responsibilities: Design, develop, and maintain scalable backend systems using Node.js and JavaScript. Implement and optimise REST APIs for seamless integration with front-end components. Work closely with the data science team to integrate Machine Learning models into production environments. Contribute to data pipeline design, model inference APIs, and continuous ML model improvement strategies. Utilise cloud services such as Azure, AWS, and Firebase to enhance system performance and reliability. Qualifications: 6+ years of experience in backend development with a focus on Node.js and JavaScript. Proficiency in designing and implementing RESTful APIs. Experience with cloud platforms such as AWS and Firebase. Strong problem-solving and communication skills. How to Apply: If you are a seasoned Backend Developer with expertise in Node.js, JavaScript, and cloud services, we invite you to apply. Please submit your resume and a detailed cover letter highlighting your relevant experience to team@tricket.in, or apply directly here. Helpen.In Enterprises Private Limited is an equal opportunity employer and values diversity in the workplace. Join us in revolutionising the gaming experience at Tricket!
Posted 3 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Responsibilities: Deploy, scale, and manage large language models (LLMs) in production environments, ensuring optimal resource usage and performance. Design, implement, and manage CI/CD pipelines to automate the software delivery process, ensuring fast and reliable deployments. Monitor and analyze model performance in real-time, addressing issues like model drift, latency, and accuracy degradation, and initiating model retraining or adjustments when necessary. Manage cloud environments (AWS, GCP, Azure, etc.) to provision and scale infrastructure to meet the needs of training, fine-tuning, and inference for large models. Collaborate with development teams to integrate CI/CD pipelines into the development workflow, promoting continuous integration and delivery best practices. Implement infrastructure as code (IaC) using tools such as Terraform, Ansible, or CloudFormation to automate the provisioning and management of infrastructure. Manage and maintain cloud infrastructure on platforms such as AWS, Azure, or Google Cloud, ensuring scalability, security, and reliability. Develop and implement monitoring, logging, and alerting solutions to ensure the health and performance of applications and infrastructure. Work closely with security teams to integrate security practices into the CI/CD pipelines, ensuring compliance with industry standards and regulations. Optimize build and release processes to improve efficiency and reduce deployment times, implementing strategies such as parallel builds and incremental deployments. Automate testing processes within the CI/CD pipelines to ensure high-quality software releases, including unit tests, integration tests, and performance tests. Manage and monitor version control systems, such as Git, to ensure code integrity and facilitate collaboration among development teams. Provide technical support and troubleshooting for CI/CD-related issues, ensuring timely resolution and minimal disruption to development workflows.
Develop and maintain documentation for CI/CD pipelines, infrastructure configurations, and best practices, ensuring clarity and accessibility for team members. Stay updated on the latest trends and advancements in DevOps, CI/CD, and cloud computing, and incorporate new tools and practices into the organization's workflows. Lead and participate in code reviews and technical discussions, providing insights and recommendations for continuous improvement. Conduct training sessions and workshops for internal teams to promote knowledge sharing and best practices in DevOps and CI/CD. Collaborate with IT and development teams to implement and manage containerization solutions using Docker and orchestration platforms such as Kubernetes. Implement and manage configuration management solutions to maintain consistency and manage changes across environments. Develop and implement disaster recovery and business continuity plans to ensure the resilience and availability of applications and infrastructure. Optimize resource utilization and cost management for cloud infrastructure, implementing strategies such as auto-scaling and resource tagging. Facilitate communication between development, operations, and business stakeholders to ensure alignment on DevOps goals and practices. Participate in the evaluation and selection of DevOps tools and technologies that align with organizational goals and improve software delivery processes. Manage and monitor application performance, implementing strategies to optimize performance and resolve bottlenecks. Ensure compliance with organizational policies and industry regulations related to software development and deployment. Required Skills: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Extensive experience in DevOps practices and CI/CD implementation. Strong proficiency in CI/CD tools such as Jenkins, GitLab CI, or CircleCI. Experience with cloud platforms such as AWS, Azure, or Google Cloud. 
Proficiency in infrastructure as code (IaC) tools such as Terraform, Ansible, or CloudFormation. Strong understanding of containerization and orchestration platforms such as Docker and Kubernetes. Experience with monitoring, logging, and alerting tools such as Prometheus, Grafana, ELK Stack, or Datadog. Proficiency in scripting languages such as Python, Bash, or PowerShell. Strong understanding of version control systems such as Git. Excellent problem-solving and analytical skills, with the ability to troubleshoot and resolve technical issues. Strong communication and collaboration skills, with the ability to work effectively with cross-functional teams and stakeholders. Preferred Skills: Certification in DevOps or cloud platforms (e.g., AWS Certified DevOps Engineer, Azure DevOps Engineer Expert).
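A CI/CD pipeline of the kind described above is, at bottom, a dependency graph of stages executed in topological order. A minimal stdlib-only sketch of that ordering logic (the stage names here are hypothetical, not from any particular tool):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical pipeline: each stage maps to the set of stages
# that must complete before it may run.
stages = {
    "build": set(),
    "unit_tests": {"build"},
    "integration_tests": {"build"},
    "deploy_staging": {"unit_tests", "integration_tests"},
    "deploy_prod": {"deploy_staging"},
}

def run_order(graph):
    """Return one valid execution order for the pipeline stages."""
    return list(TopologicalSorter(graph).static_order())

order = run_order(stages)
print(order)  # "build" first, "deploy_prod" last
```

Real engines (Jenkins, GitLab CI) add parallelism within each ready batch, which is exactly how the "parallel builds" optimization mentioned above falls out of the same graph.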
Posted 3 weeks ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Computer Vision & Machine Learning Lead Engineer We are seeking a Computer Vision & Machine Learning Engineer with strong software development expertise who can architect and develop ML-based solutions for computer vision applications and deploy them at scale. Minimum qualifications: Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, Information Systems, or a related field. 7+ years of extensive software development experience in Python and PyTorch, including reading/debugging code in Python, C++ & Shell 4+ years of experience directly working on ML-based solutions, preferably convolutional neural networks applied to computer vision problems Proficiency in software design and architecture and object-oriented programming. Experience working with Docker or similar containerization frameworks along with container orchestration Experience with Linux/Unix or similar systems, from the kernel to the shell, file systems, and client-server protocols. Experience troubleshooting and resolving technical issues in an application tech stack including AI/ML. Solid understanding of common SQL and NoSQL databases Experience working with AWS or similar platforms Strong communication skills and ability to work effectively in a team. Preferred qualifications: Experience working with distributed clusters and multi-node environments.
Familiarity with the basics of web technologies and computer networking AWS certifications or similar Formal academic background in Machine Learning Experience working with large image datasets (100K+ images) Responsibilities Architect and develop machine-learning-based computer vision algorithms for various applications Responsible for delivering software and solutions while meeting all quality standards Design, implement and optimize machine learning training & inference pipelines and algorithms on cloud or on-prem hardware Understand functional and non-functional requirements of features and break down tasks for the team Take ownership of delivery for self as well as the team Collaborate closely with product owners and domain/technology experts to integrate and validate software within a larger system. Engage with internal teams and provide support to teams located in North America & Europe Base skillsets: Python, PyTorch, one of the cloud platforms (AWS / GCP / Azure), Linux, Docker, databases. Optional skillsets: Databricks, MLOps, CI/CD
Posted 3 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description: We are seeking a talented Computer Vision Engineer with strong expertise in microservice deployment architecture to join our team. In this role, you will be responsible for developing and deploying computer vision models to analyze retail surveillance footage for use cases such as theft detection, employee efficiency monitoring, and store traffic analysis. You will work on designing and implementing scalable, cloud-based microservices to deliver real-time and post-event analytics to improve retail operations. Key Responsibilities: • Develop computer vision models: Build, train, and optimize deep learning models to analyze surveillance footage for detecting theft, monitoring employee productivity, tracking store busy hours, and other relevant use cases. • Microservice architecture: Design and deploy scalable microservice-based solutions that allow seamless integration of computer vision models into cloud or on-premise environments. • Data processing pipelines: Develop data pipelines to process real-time and batch video data streams, ensuring efficient extraction, transformation, and loading (ETL) of video data. • Integrate with existing systems: Collaborate with backend and frontend engineers to integrate computer vision services with existing retail systems such as POS, inventory management, and employee scheduling. • Performance optimization: Fine-tune models for high accuracy and real-time inference on edge devices or cloud infrastructure, optimizing for latency, power consumption, and resource constraints. • Monitor and improve: Continuously monitor model performance in production environments, identify potential issues, and implement improvements to accuracy and efficiency. •Security and privacy: Ensure compliance with industry standards for security and data privacy, particularly regarding the handling of video footage and sensitive information. 
Experience: • 5+ years of proven experience in computer vision, including object detection, action recognition, and multi-object tracking, preferably in retail or surveillance applications. • Hands-on experience with microservices deployment on cloud platforms (e.g., AWS, GCP, Azure) using Docker, Kubernetes, or similar technologies. • Experience with real-time video analytics, including working with large-scale video data and camera feeds. Technical Skills: • Proficiency in programming languages like Python, C++, or Java. • Expertise in deep learning frameworks (e.g., TensorFlow, PyTorch, Keras) for developing computer vision models. • Strong understanding of microservice architecture, REST APIs, and serverless computing. • Knowledge of database systems (SQL, NoSQL), message queues (Kafka, RabbitMQ), and container orchestration (Kubernetes). • Familiarity with edge computing and hardware acceleration (e.g., GPUs, TPUs) for running inference on embedded systems. Preferred Qualifications: • Experience with deploying models to edge devices (NVIDIA Jetson, Coral, etc.). • Understanding of retail operations and common challenges in surveillance. • Knowledge of data privacy regulations such as GDPR or CCPA. Soft Skills: • Strong analytical and problem-solving skills. • Ability to work independently and in cross-functional teams. • Excellent communication skills to convey technical concepts to non-technical stakeholders. Benefits: • Competitive salary and stock options. • Health insurance. If you're passionate about creating cutting-edge computer vision solutions and deploying them at scale to transform retail operations, we’d love to hear from you! Apply Now
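Object detection and multi-object tracking, both named above, rest on intersection-over-union (IoU) between bounding boxes: matching a detection to a track or to ground truth usually means thresholding IoU. A minimal stdlib-only sketch, assuming boxes are given as (x1, y1, x2, y2) corner coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle (empty if boxes don't overlap).
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175, about 0.143
```

The same function underlies non-maximum suppression and tracking-by-detection association; production systems vectorize it (e.g., in PyTorch), but the geometry is identical.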
Posted 3 weeks ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: OCI Cloud Gen AI Architect Location: Noida Experience: 10-15 Years Work mode: Contract Duration: 6 Months Job Description We are looking for an experienced OCI AI Architect to lead the design and deployment of Gen AI, Agentic AI, and traditional AI/ML solutions on Oracle Cloud. This role requires a deep understanding of Oracle Cloud architecture, Gen AI, agentic and AI/ML frameworks, data engineering, and OCI-native services. The ideal candidate combines deep technical expertise in AI/ML and Gen AI on OCI with domain knowledge in Finance and Accounting. Key Responsibilities Design, architect, and deploy AI/ML and Gen AI solutions on OCI using native AI services (e.g., OCI Data Science, AI Services, Gen AI Agents) that are secure, scalable, and optimized for performance and cost. Build agentic AI solutions using frameworks such as LangGraph, CrewAI, AutoGen. Design and lead the development of machine learning (AI/ML) pipelines and fine-tuning pipelines to adapt foundation LLMs to enterprise-specific datasets Work with embedding models and OCI Vector Database to implement RAG solutions. Provide technical guidance and best practices around MLOps, model versioning, deployment automation, and AI governance. Collaborate with functional SMEs, application teams, and business stakeholders to identify AI opportunities and design the right cloud-first and AI-first solutions. Advocate for OCI-native capabilities and continuously evaluate new Oracle AI services. Support customer presentations and solution demos, as needed, with architectural insights and technical credibility. Skills Required 10–15 years in Oracle Cloud and AI, with 5+ years of proven experience in designing, architecting and deploying AI/ML & Gen AI solutions over the OCI AI stack. Strong Python development experience, including Streamlit, XML, JSON, etc. Deep hands-on knowledge of LLMs (e.g., Cohere, GPT, etc.) and prompt engineering techniques (e.g., zero-shot, few-shot, CoT, and ReAct), especially for Cohere and OpenAI models.
Strong knowledge of AI governance, security, guardrails, and responsible AI. Must have knowledge of data ingestion, feature engineering, model training, evaluation, deployment, and monitoring. Proficient in AI/ML/Gen AI frameworks (e.g., TensorFlow, PyTorch, Hugging Face, LangChain) and vector DBs (e.g., Pinecone, Milvus). Proficient in agentic AI frameworks (e.g., CrewAI, AutoGen) and multi-agent orchestration workflows Deep knowledge of OCI services including: OCI Data Science, OCI AI Services (Vision, Speech, Language, Anomaly Detection), OCI Gen AI Service, OCI Gen AI Agents, Oracle ATP, Data Flow (Apache Spark), OCI Functions, API Gateway, and Monitoring Solid understanding of AI architecture principles: data lifecycle, feature stores, model registries, inference serving, feedback loops. Experience with vector databases, RAG architecture, and integrating Large Language Models (LLMs). Strong leadership, communication, and stakeholder management abilities. Must have working experience in implementing semantic matching, fuzzy logic, and similarity scoring algorithms (e.g., cosine similarity, Jaccard, Levenshtein distance) to drive intelligent entry matching in noisy or inconsistent data scenarios. Strong understanding and practical application of: prompt engineering, fine-tuning and parameter-efficient tuning, agentic orchestration workflows Must have experience working with Oracle ATP, 23ai databases, and vector queries Good to have: experience in front-end programming languages such as React, Angular or JavaScript. Good to have: experience with Finance domain solutions, particularly around reconciliations, journal entry matching, or financial close processes. Good to have: understanding of Oracle Cloud deployment, architecture, networking, security and other essential components Good to have: knowledge of Analytics and Data Science.
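The similarity measures named above for entry matching (cosine, Jaccard, Levenshtein) are simple to sketch in plain Python; this is an illustrative stdlib-only version, not tied to any OCI service or embedding model:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity of two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def jaccard(a, b):
    """Jaccard similarity of two token sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def levenshtein(s, t):
    """Edit distance via dynamic programming over a rolling row."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        curr = [i]
        for j, ct in enumerate(t, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (cs != ct)))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```

In a journal-entry-matching setting, cosine would typically score embedding vectors, Jaccard token overlap of descriptions, and Levenshtein near-duplicate reference strings, with the three combined into one match score.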
Qualifications Oracle Cloud certifications such as: OCI Architect Professional, OCI Generative AI Professional, OCI Data Science Professional B.Tech – Computer Science / MCA or equivalent degrees Any degree or diploma in AI would be preferred.
Posted 3 weeks ago
5.0 years
0 Lacs
India
On-site
What You'll Do Design and implement high-performance backend systems capable of handling billions of data points daily Work closely with researchers and ML engineers to productionize large-scale PyTorch models Develop low-latency, GPU-accelerated pipelines using Rust, CUDA, and C++ Build and maintain distributed computing systems for training and inference Profile, optimize, and debug code running on heterogeneous hardware (CPUs, GPUs) Take ownership of critical infrastructure that powers our AI systems Collaborate on system architecture to ensure scalability, robustness, and observability Languages: Rust, C++, Python ML Frameworks: PyTorch, CUDA Infrastructure: Kubernetes, gRPC, Kafka, Redis, AWS/GCP Tools: Prometheus, Grafana, Docker, Bazel, GitHub Actions Have 5+ years of backend engineering experience (systems, infrastructure, distributed computing) Have shipped or maintained high-throughput systems (millions+ RPS or equivalent) Are proficient in Rust or similar systems-level languages (C++, Go, etc.) Have hands-on experience with CUDA, GPU kernels, or ML inference acceleration Are familiar with PyTorch internals or have worked closely with ML researchers Think deeply about performance, security, and system architecture Are motivated by the idea of contributing to Superintelligence development
Posted 3 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
About Us Are you ready to build the future of supply chain? At Gather AI, we're not just creating software; we're pioneering a new era of warehouse intelligence. We've developed a groundbreaking, vision-powered platform that uses autonomous drones and existing equipment to capture real-time data, completely digitizing workflows that have historically been manual and error-prone. This means facilities operate smarter, safer, and more efficiently, ultimately redefining "on-time, in full" delivery. If you're looking for an opportunity to contribute to truly transformative technology and make a significant impact in a vital industry, Gather AI is the place for you. We're leading the charge in the rapidly evolving robotics industry, and we invite you to join us in reshaping the global supply chain, one intelligent warehouse at a time. About You We're looking for a forward-thinking engineer to help build cutting-edge interfaces on mobile (iOS and, eventually, Android) edge-compute devices used in warehouse environments. You'll take on the challenge of enabling seamless coordination between human operators and autonomous/AI-driven systems. The ideal candidate has experience architecting and implementing embedded iOS applications with complex logic or workflows, and is looking to design interactive applications that fuse information from human operators with ML/computer vision informed feedback about the real world. You should be comfortable working with computer networking and have experience—or a strong interest—in complex multi-agent or multi-actor systems. Think of it like designing a multiplayer video game, where your application integrates inputs from various sources (e.g., cameras, forklift operators, ML/computer vision pipelines, and real-time localization systems) to infer the current state of the forklift and guide the operator toward the next best action. 
What You'll Do Contribute major features to the application (including UI/UX) of our Material Handling Equipment (MHE) Vision platform - gather insights from cameras, an ML inference engine, and an iPad application mounted locally on forklifts; guide operators to take the right action based on what we are seeing/observing in the world. Develop major improvements to our iOS-based autonomous drone control application, including debugging and potentially working with the drone's core software. Become a reliable resource for identifying and fixing issues in both MHE Vision and drone products. Assist with build and test automation, such as by introducing tools to streamline remote development and debugging (for yourself and the team). Build great tools that verify the proper function of complex, multi-agent/multi-actor systems. What You'll Need BS in Computer Science/Engineering or equivalent technical experience. At least 5 years of experience in developing embedded applications (e.g., iOS, C++, Android) or related technologies. Proficiency in Objective-C and Swift, and comfort with C++, as our dual-platform application incorporates C++ logic that powers our drones. Ability to leverage the latest AI/LLM technologies to accelerate development. Exposure to embedded/application ecosystems (Linux, iOS, Android). Solid networking and multi-threading experience - good architectural intuition. Bonus points for... Prior experience in a startup or small, fast-paced company environment. Experience building cross-platform (iOS and Android) applications, e.g. by using React Native. Experience or interest in developing CPU-constrained, soft real-time applications (e.g., video games). Familiarity with frameworks and technologies like MQTT, automotive/embedded interfaces, and ROS. Experience with cloud system interactions, external APIs, local application storage, and other features common in iOS or embedded applications. 
Experience with CI/CD testing tools and software release via enterprise app deployment lifecycles (e.g., Fastlane, GitHub Actions, Firebase). A passion for robotics or other real-time multi-agent technologies, and the unique challenges found in this space. Compensation And Benefits Compensation package will include equity Comprehensive health insurance Very flexible schedule Customized PTO Relocation assistance available If this sounds like a good fit, we'd love to meet you. Come help us change the world!
Posted 3 weeks ago
4.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Python Developer within our Information Technology department, your primary responsibility will be to leverage your expertise in Artificial Intelligence (AI), Machine Learning (ML), and Generative AI. We are seeking a candidate who possesses hands-on experience with GPT-4, transformer models, and deep learning frameworks, along with a profound comprehension of model fine-tuning, deployment, and inference. Your key responsibilities will include designing, developing, and maintaining Python applications that are specifically tailored towards AI/ML and generative AI. You will also be involved in building and refining transformer-based models such as GPT, BERT, and T5 for various NLP and generative tasks. Working with extensive datasets for training and evaluation purposes will be a crucial aspect of your role. Moreover, you will be tasked with implementing model inference pipelines and scalable APIs utilizing FastAPI, Flask, or similar technologies. Collaborating closely with data scientists and ML engineers will be essential in creating end-to-end AI solutions. Staying updated with the latest research and advancements in the realms of generative AI and ML is imperative for this position. From a technical standpoint, you should demonstrate a strong proficiency in Python and its relevant libraries like NumPy, Pandas, and Scikit-learn. With at least 7+ years of experience in AI/ML development, hands-on familiarity with transformer-based models, particularly GPT-4, LLMs, or diffusion models, is required. Experience with frameworks like Hugging Face Transformers, OpenAI API, TensorFlow, PyTorch, or JAX is highly desirable. Additionally, expertise in deploying models using Docker, Kubernetes, or cloud platforms like AWS, GCP, or Azure will be advantageous. Having a knack for problem-solving and algorithmic thinking is crucial for this role. 
Familiarity with prompt engineering, fine-tuning, and reinforcement learning with human feedback (RLHF) would be a valuable asset. Any contributions to open-source AI/ML projects, experience with vector databases, building AI chatbots, copilots, or creative content generators, and knowledge of MLOps and model monitoring will be considered as added advantages. In terms of educational qualifications, a Bachelor's degree in Science (B.Sc), Technology (B.Tech), or Computer Applications (BCA) is required. A Master's degree in Science (M.Sc), Technology (M.Tech), or Computer Applications (MCA) would be an added benefit for this role.
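The model inference pipelines this role mentions reduce to composing preprocess, predict, and postprocess stages. A framework-free sketch with hypothetical placeholder stages (a real service would put an actual model behind the middle stage and expose the whole thing via FastAPI or Flask):

```python
from typing import Callable

def make_pipeline(*stages: Callable):
    """Compose stages left-to-right into a single inference callable."""
    def run(x):
        for stage in stages:
            x = stage(x)
        return x
    return run

# Hypothetical stand-ins for tokenisation, a model forward pass,
# and decoding; each is just a function taking the previous output.
preprocess = lambda text: text.lower().split()
predict = lambda tokens: {"label": "positive" if "good" in tokens else "negative"}
postprocess = lambda out: out["label"]

pipeline = make_pipeline(preprocess, predict, postprocess)
print(pipeline("This movie was GOOD"))  # prints "positive"
```

The value of the pattern is that each stage can be swapped or tested in isolation, which is exactly what a scalable API layer on top needs.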
Posted 3 weeks ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Summary Are you passionate about the intersection of data, technology and science, and excited by the potential of Real-World Data (RWD) and AI? Do you thrive in collaborative environments and aspire to contribute to the discovery of groundbreaking medical insights? If so, join the data42 team at Novartis! At Novartis, we reimagine medicine by leveraging state-of-the-art analytics and our extensive internal and external data resources. Our data42 platform grants access to high-quality, multi-modal preclinical and clinical data, along with RWD, creating the optimal environment for developing advanced AI/ML models and generating health insights. Our global team of data scientists and engineers utilizes this platform to uncover novel insights and guide drug development decisions. As an RWD SME / RWE Execution Data Scientist, you will focus on executing innovative methodologies and AI models to mine RWD on the data42 platform. You will be the go-to authority for leveraging diverse RWD modalities to uncover patterns crucial to understanding patient populations, biomarkers, and drug targets, accelerating the development of life-changing medicines. About The Role Duties and Responsibilities: Collaborate with R&D stakeholders to co-create and implement innovative, repeatable, scalable and automated data and technology solutions in line with the data42 strategy. Be a data Subject Matter Expert (SME): understand Real World Data (RWD) of different modalities, vocabularies (LOINC, ICD, HCPCS, etc.), non-traditional RWD (patient-reported outcomes, wearables and mobile health data) and where and how they can be used, including in conjunction with clinical data, omics data, pre-clinical data, and commercial data.
Contribute to data strategy implementation such as Federated Learning, tokenization, data quality frameworks, regulatory requirements (conversion of submission data to HL7 FHIR formats, the Sentinel initiative), conversion to common data models and standards (OMOP, FHIR, SEND, etc.), FAIR principles and integration with the enterprise catalog Define and execute advanced, integrated and scalable analytical approaches and research methodologies (including industry trends) in support of exploratory and regulatory use of AI models for RWD analysis across the Research-Development-Commercial continuum by facilitating research questions. Stay current with emerging applications and trends, driving the development of advanced analytic capabilities for data42 across the real-world evidence generation lifecycle, from ideation to study design and execution. Demonstrate high agility working across various cross-located and cross-functional associates across business domains (Commercial, Development, Biomedical Research) or Therapeutic Area divisions for our priority disease areas to execute complex and critical business problems with quantified business impact/ROI. Ideal Candidate Profile PhD or MSc in a quantitative discipline (e.g., but not restricted to, Computer Science, Physics, Statistics, Epidemiology) with proven expertise in Artificial Intelligence / Machine Learning. 8+ years of relevant experience in Data Science (or 4+ years post-qualification in case of a PhD). Extensive experience in statistical and machine learning techniques: Regression, Classification, Clustering, Design of Experiments, Monte Carlo Simulations, Statistical Inference, Feature Engineering, Time Series Forecasting, Text Mining, Natural Language Processing, LLMs, and multi-modal Generative AI. Good to have: Stochastic models, Bayesian models, Markov chains, Optimization techniques including Dynamic Programming, Deep Learning techniques on structured and unstructured data, and Recommender Systems.
Proficiency in tools and packages: Python, R (optional), SQL; exposure to dashboard or web-app building using Power BI, R Shiny, Flask, or other open-source or proprietary software and packages is an advantage. Knowledge of data standards (e.g. OHDSI OMOP, and FHIR HL7 for regulatory use) and best practices. Good to have: Foundry, big data programming, working knowledge of executing data science on AWS, Databricks or Snowflake Strong in matrix collaboration environments with good communication and collaboration skills with country/regional/global stakeholders in an individual contributor capacity. Novartis is committed to building an outstanding, inclusive work environment and diverse teams representative of the patients and communities we serve. Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients’ lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network Benefits and Rewards: Read our handbook to learn about all the ways we’ll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards
Posted 3 weeks ago
5.0 years
0 Lacs
India
Remote
Weave is looking for engineers hungry for fun challenges who can join our self-empowered teams and contribute in both technical and non-technical ways. You will be joining a team of talented developers who share a common interest in distributed backend systems, data, scalability, and continued development. You will get a chance to apply these and other skills to new and ongoing projects to make machine learning more approachable and data more available and easier to discover and use, by helping design how teams build out AI-powered features at Weave. Our teams are cross-functional agile teams composed of a product owner, backend and frontend devs, and devops. Teams are highly autonomous, with the ownership and ability to act in Weave’s best interest. Above all, your work will impact the way our customers experience Weave while working closely with a highly skilled team to accomplish varying goals and cultivate our phenomenal culture.
Purpose
The Machine Learning Team's mission is to enable product innovation by making it painless for developers to build AI-powered applications that require access to large sets of data. Machine learning is challenging, but we are striving to democratize access to the tools and technology that power it so teams can build cutting-edge features safely and responsibly without a PhD in Data Science. As a Machine Learning Engineer on the team you’ll be building models for new products with emerging technologies, at scale. We handle data for hundreds of millions of people daily.
We currently have two openings available: Staff Machine Learning Engineer and Senior Machine Learning Engineer. We will determine level based on experience and technical competencies. This position is fully remote in India.
You will report to: Director of Engineering
What You Will Own
Design and develop machine learning infrastructure, tooling, and models to help teams deliver world-class experiences.
Help product and development teams understand the data lifecycle and the inherently experimental nature of machine learning.
Build internal products and platforms to enable teams to incorporate AI into their features and customer-facing products.
Consult with teams to help them understand common patterns, anti-patterns, and tradeoffs of machine learning. Guide them through creating excellent customer experiences end to end.
Build scalable, resilient services to support data integration, event processing, and platform extensions.
Contribute to the continued evolution of product functionality that serves large amounts of data and traffic.
Write code that is high-quality, performant, sustainable, and testable while holding yourself accountable for the quality of the code you produce.
Coach and collaborate inside and outside the team. You enjoy working closely with others, helping them grow by sharing expertise and encouraging best practices.
Work in a cloud environment, considering the implementation of functionality through several distributed components and services.
Work with our stakeholders to translate product goals into actionable engineering plans.
What You Will Need To Accomplish The Job
High integrity, a team-focused approach, and the collaboration skills to build tight-knit relationships across Weave with various roles and stakeholders.
A responsive person with a strong bias for action.
5+ years of experience in a structured back-end language, e.g., Go, Java, or Python (Go and Python experience is a plus).
Experience moving and storing TBs of data or 100M’s to 10B’s of records.
Experience building and deploying ML-driven B2B multi-tenant applications in production environments.
Experience with common ML technologies such as Python, Jupyter, workflow engines (Dagster, MLflow, Kubeflow, etc.), DVC, Triton Server, LLMs, Postgres, and others.
Experience with modern ML tools and techniques such as LLMs, RAG, prompt engineering, fine-tuning, multi-modal models, and others.
Experience with data labelling or annotation for audio or text use cases.
Understanding of distributed systems and building scalable, redundant, and observable services.
Expertise in designing and architecting systems for distributed data sets and services.
Experience building solutions to run on one or more of the public clouds (e.g., AWS, GCP).
Experience providing stable, well-designed libraries and SDKs for internal use.
Self-driven, with a thirst for learning in a quickly changing industry.
A demonstrated track record of delivering complex projects on time, and experience working in enterprise-grade production environments.
A strategic thinker with strong technical aptitude and a passion for execution.
What Will Make Us Love You
A background in data analysis, visualization, and presentation.
3+ years of experience in data science, machine learning, or predictive analytics in addition to engineering experience.
Experience with natural language models, embeddings, and inference in production, at scale.
Experience with real-time audio models and voice use cases such as transcription, ASR pipelines with interruption detection, audio alignment, and speech synthesis.
Experience with emerging technologies such as the Model Context Protocol (MCP).
A proficient understanding of containers, orchestrators, and usage patterns at scale, including networking, storage, service meshes, and multi-cluster communication.
Experience with Kubernetes or GKE and the Operator Pattern (GCP), specifically, a plus.
Experience with highly sensitive data such as PHI (HIPAA) and PII.
Experience with automation and container-based workflow engines.
Experience with GitOps, IaC, and configuration-driven systems.
A preference for open-source solutions.
A track record of clean abstractions and simple-to-use APIs.
Deep understanding of distributed data technologies such as streaming, data mesh, data lakes, warehouses, or distributed machine learning.
A desire to advance the state of the art with new and innovative technologies.
Enjoys working in a greenfield environment using rapid prototyping.
Enjoys working with open-ended, evolving problems and domains.
Weave is an equal opportunity employer that is committed to fostering an inclusive workplace where all individuals are valued and supported. We welcome anyone who is hungry to learn, problem-solve and progress regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, or other applicable legally protected characteristics. If you have a disability or special need that requires accommodation, please let us know.
All official correspondence will occur through Weave branded email. We will never ask you to share bank account information, cash a check from us, or purchase software or equipment as part of your interview or hiring process.
Posted 3 weeks ago
2.0 years
0 Lacs
India
On-site
We are looking for a data-driven problem solver with strong statistical expertise to join our growing data science team. In this role, you’ll play a pivotal part in driving product innovation through rigorous experimentation, measurement, and insight generation. You'll collaborate with engineers, designers, and product managers to guide strategic decisions that impact millions of users.
Responsibilities
Design and evaluate controlled experiments (A/B tests) and quasi-experiments to inform product and business decisions.
Build and maintain automated pipelines for experiment tracking, analysis, and reporting.
Identify metrics, set success criteria, and quantify trade-offs in product features and changes.
Partner with cross-functional teams to embed a culture of data-informed decision-making.
Present findings to stakeholders with clarity, making statistical results actionable for the business.
Qualifications
Bachelor’s in Statistics, Data Science, Computer Science, or a related field.
2+ years of experience in product analytics, experimentation, or applied statistics.
Solid knowledge of statistical inference, hypothesis testing, power analysis, and regression models.
Proficiency in SQL and Python (or R), with experience analyzing large-scale data.
Strong data visualization skills; able to tell compelling stories with data.
Experience working closely with product or engineering teams in fast-paced environments.
Bonus Points
Experience with experimentation platforms or custom A/B testing frameworks.
Familiarity with online metrics, feature rollouts, or cohort-based analyses.
Exposure to techniques such as CUPED, variance reduction, or sequential testing.
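The hypothesis-testing skills this posting asks for can be illustrated with a minimal two-proportion z-test for an A/B experiment. This is a stdlib-only sketch; the function name and the example counts are illustrative assumptions, not from the posting:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Pooled two-sample z-test for conversion rates (two-sided)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal tail, via erfc
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Example: 100/1000 conversions in control vs. 140/1000 in treatment
z, p = two_proportion_ztest(100, 1000, 140, 1000)
```

In practice the same test underpins power analysis: the required sample size is chosen so that an effect of interest produces a z-statistic large enough to reject at the chosen significance level.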
Posted 3 weeks ago
14.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Our Company
Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!
The Opportunity
Develop ML models, platforms, and services for Adobe Express, covering the entire ML lifecycle.
About The Team
The AI Foundation Team at Adobe Express aims to develop a groundbreaking AI stack using internal and external technologies to improve feature speed and quality. You will develop ML models and services, collaborate with teams, and improve the user experience at Adobe Express.
What You'll Do
Research, design, and implement advanced ML models and pipelines for training and inference at scale, including techniques in computer vision, NLP, deep learning, and generative AI.
Integrate Large Language Models (LLMs) and agent-based frameworks to support multimodal creative workflows, enabling rich, context-aware content generation and dynamic user experiences.
Collaborate with multi-functional teams to translate product requirements into ML solutions, iterating from proof-of-concept to fully productionized services.
Develop robust platforms for continuous model training, experimentation, A/B testing, and monitoring, ensuring that model quality and relevance remain consistently high.
Leverage distributed computing technologies and cloud infrastructures to handle large-scale data processing, feature engineering, and real-time inference, optimizing for performance and cost-efficiency.
Implement reliable APIs and microservices that serve ML models to end users, ensuring alignment with standard methodologies in security, compliance, scalability, and maintainability.
Stay ahead of emerging ML research, tools, and frameworks, evaluating and integrating new technologies such as sophisticated LLMs, reinforcement learning-based agents, and innovative inference optimization techniques.
In the near future, we will expand model coverage to include more creative tasks, improve the model architectures to achieve better latency and accuracy, and integrate federated learning or domain adaptation strategies to reach diverse audiences.
Basic Qualifications:
PhD, Master’s, or Bachelor’s in Computer Science, ML, Applied Mathematics, Data Science, or a related technical field, or equivalent experience.
14+ years of industry experience.
Proficiency in Python and Java for ML model development and systems integration.
Hands-on experience with deep learning frameworks, including TensorFlow and PyTorch.
Demonstrated experience working with LLMs and agent frameworks to develop advanced AI-based experiences.
Proficiency in computer vision and NLP techniques for multimodal content understanding and generation.
Work experience in creative or imaging domains will be highly useful.
Experience in developing and deploying RESTful web services and microservices architectures for applications involving ML.
Proficiency with UNIX environments, Git for version control, and Jenkins for CI/CD processes.
Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Learn more about our vision here. Adobe aims to make Adobe.com accessible to any and all users.
If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
Posted 3 weeks ago
3.0 years
1 - 5 Lacs
Hyderābād
On-site
Company: Qualcomm India Private Limited
Job Area: Engineering Group, Engineering Group > Software Engineering
General Summary:
As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.
Minimum Qualifications:
Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field and 3+ years of Software Engineering or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or a related field and 2+ years of Software Engineering or related work experience; OR PhD in Engineering, Information Systems, Computer Science, or a related field and 1+ year of Software Engineering or related work experience.
2+ years of academic or work experience with a programming language such as C, C++, Java, Python, etc.
Job Location: Hyderabad
More details below:
About the team:
Join the growing team at Qualcomm focused on advancing the state of the art in Machine Learning. The team uses the extensive heterogeneous computing capabilities of Qualcomm chips to allow inference of trained neural networks on-device without a need for a connection to the cloud. Our inference engine is designed to help developers run neural network models trained in a variety of frameworks on Snapdragon platforms at blazing speeds while still sipping the smallest amount of power. See your work directly impact billions of devices around the world.
Responsibilities:
In this position, you will be responsible for the development and commercialization of ML solutions such as the Snapdragon Neural Processing Engine (SNPE) SDK on Qualcomm SoCs. You will develop various SW features in our ML stack, port AI/ML solutions to various platforms, and optimize performance on multiple hardware accelerators (CPU/GPU/NPU). You will have expert knowledge of the deployment aspects of large C/C++ software dependency stacks using best practices. You will also keep up with the fast-paced development happening in industry and academia to continuously enhance our solution from both a software engineering and a machine learning standpoint.
Work Experience:
7-9 years of relevant work experience in software development.
Live and breathe quality software development, with excellent analytical and debugging skills.
Strong understanding of processor architecture and system design fundamentals.
Experience with embedded systems development or equivalent.
Strong development skills in C and C++.
Excellent communication skills (verbal, presentation, written).
Ability to collaborate across a globally diverse team and multiple interests.
Preferred Qualifications
Experience in embedded system development.
Experience in C, C++, OOPS, and design patterns.
Experience in Linux kernel or driver development is a plus.
Strong OS concepts.
Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities.
(Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.)
Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.
To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications.
If you would like more information about this role, please contact Qualcomm Careers.
Posted 3 weeks ago
5.0 years
4 - 10 Lacs
Hyderābād
Remote
Company Description
It all started in sunny San Diego, California in 2004 when a visionary engineer, Fred Luddy, saw the potential to transform how we work. Fast forward to today — ServiceNow stands as a global market leader, bringing innovative AI-enhanced technology to over 8,100 customers, including 85% of the Fortune 500®. Our intelligent cloud-based platform seamlessly connects people, systems, and processes to empower organizations to find smarter, faster, and better ways to work. But this is just the beginning of our journey. Join us as we pursue our purpose to make the world work better for everyone.
Job Description
Team Overview:
Join our pioneering Core-LLM platform team, dedicated to pushing the boundaries of Generative AI. We focus on developing robust, scalable, and safe machine learning models, particularly LLMs, SLMs, Large Reasoning Models (LRMs), and SRMs that power cutting-edge ServiceNow products and features. As a Senior Manager, you will lead a talented team of machine learning engineers, shaping the future of our AI capabilities and ensuring the ethical and effective deployment of our technology.
What you get to do in this role:
Generate and evaluate synthetic data tailored to improve the robustness, performance, and safety of machine learning models, particularly large language models (LLMs).
Train and fine-tune models using curated datasets, optimizing for performance, reliability, and scalability.
Design and implement evaluation metrics to rigorously measure and monitor model quality, safety, and effectiveness.
Conduct experiments to validate model behavior and improve generalization across diverse use cases.
Collaborate with engineering and research teams to identify risks and recommend AI safety mitigation strategies.
Participate in the development, deployment, and continuous improvement of end-to-end AI solutions.
Contribute to architectural and technology decisions related to AI infrastructure, frameworks, and tooling.
Promote modern engineering practices, including continuous integration, continuous delivery, and containerized workflows.
Qualifications
Key qualifications:
Experience using AI productivity tools such as Cursor or Windsurf is a plus.
Experience with methods of training and fine-tuning large language models, such as distillation, supervised fine-tuning, and policy optimization.
5+ years of experience in machine learning, deep learning, and AI systems.
Proficiency in Python and frameworks like PyTorch, TensorFlow, and NumPy.
Experience in synthetic data generation, model training, and evaluation in real-world environments.
Solid understanding of LLM fine-tuning, prompting, and robustness techniques.
Knowledge of AI safety principles and experience identifying and mitigating model risks.
Hands-on experience deploying and optimizing models using platforms such as Triton Inference Server.
Familiarity with CI/CD, automated testing, and container orchestration tools like Docker and Kubernetes.
Experience with prompt engineering: the ability to craft, test, and optimize prompts for task accuracy and efficiency.
Additional Information
Work Personas
We approach our distributed world of work with flexibility and trust. Work personas (flexible, remote, or required in office) are categories that are assigned to ServiceNow employees depending on the nature of their work and their assigned work location. Learn more here. To determine eligibility for a work persona, ServiceNow may confirm the distance between your primary residence and the closest ServiceNow office using a third-party service.
Equal Opportunity Employer
ServiceNow is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, national origin or nationality, ancestry, age, disability, gender identity or expression, marital status, veteran status, or any other category protected by law.
In addition, all qualified applicants with arrest or conviction records will be considered for employment in accordance with legal requirements.
Accommodations
We strive to create an accessible and inclusive experience for all candidates. If you require a reasonable accommodation to complete any part of the application process, or are unable to use this online application and need an alternative method to apply, please contact globaltalentss@servicenow.com for assistance.
Export Control Regulations
For positions requiring access to controlled technology subject to export control regulations, including the U.S. Export Administration Regulations (EAR), ServiceNow may be required to obtain export control approval from government authorities for certain individuals. All employment is contingent upon ServiceNow obtaining any export license or other approval that may be required by relevant export control authorities.
From Fortune. ©2025 Fortune Media IP Limited. All rights reserved. Used under license.
Posted 3 weeks ago
0 years
2 - 4 Lacs
India
On-site
About the Role
We are an ambitious technology company building innovative products at the intersection of artificial intelligence, accessibility, and digital content. We are seeking a skilled Machine Learning Engineer to join our team and drive the development of advanced machine learning systems across several high-impact applications.
Responsibilities
End-to-End ML Development:
Design, train, and optimize deep learning models for computer vision, natural language processing, and multimodal data.
Develop scalable pipelines for data collection, preprocessing, and augmentation.
Implement real-time inference solutions suitable for web and mobile platforms.
Research & Prototyping:
Evaluate state-of-the-art models and adapt them to our product requirements.
Build proof-of-concept systems to validate new ideas and features.
Infrastructure & Deployment:
Package and deploy models into production environments.
Optimize models for latency and performance on resource-constrained devices.
Collaboration:
Work closely with cross-functional teams, including software engineers, designers, and product managers.
Participate in code reviews, documentation, and knowledge sharing.
Data Engineering & Infrastructure
Build scalable data pipelines to collect, clean, and augment multimodal datasets (video, audio, text).
Develop model deployment pipelines for mobile and web (TensorFlow Lite, ONNX, PyTorch Mobile).
Integrate ML inference with our React Native, Python, and Node.js backends.
Applied Research & Prototyping
Research state-of-the-art models (Transformers, Diffusion, GANs) and adapt them to product use cases.
Develop proof-of-concepts and iterate rapidly with the product and design teams.
Contribute to model evaluation, A/B testing, and continuous improvement.
Must-Have Skills
Experience in applied ML, deep learning, or computer vision.
Proficiency in Python, TensorFlow, and PyTorch.
Experience with:
CNNs and RNNs for video/sequence tasks
NLP models (Transformers, BERT, GPT-style architectures)
Real-time inference optimization (TensorRT, quantization, pruning)
Strong understanding of data engineering workflows.
Familiarity with integrating ML into production systems (REST APIs, microservices).
Nice-to-Have:
Experience with LiDAR or 3D spatial data.
Familiarity with React Native and mobile ML deployment.
Contributions to open-source ML projects.
Publications or patents in ML/AI domains.
What We Offer:
Opportunity to build technologies that fundamentally improve millions of lives.
A collaborative, purpose-driven environment.
Competitive salary and performance incentives.
Full ownership of end-to-end ML systems.
Access to cutting-edge infrastructure.
Job Type: Full-time
Pay: ₹272,000.00 - ₹400,000.00 per year
Benefits: Paid sick time, Paid time off, Provident Fund
Schedule: Monday to Friday
Supplemental Pay: Performance bonus
Work Location: In person
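The quantization item listed under real-time inference optimization can be sketched with a stdlib-only symmetric int8 example. The function names and the per-tensor scheme are illustrative simplifications, not the API of TensorRT or any specific framework:

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: q = round(w / scale)."""
    # Scale maps the largest-magnitude weight to +/-127
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

# Round-trip example: storage drops from 32-bit floats to 8-bit ints,
# at the cost of a small reconstruction error bounded by scale/2.
q, scale = quantize_int8([0.5, -1.27, 0.0])
restored = dequantize(q, scale)
```

Production toolchains (TensorRT, ONNX Runtime) add per-channel scales, zero-points for asymmetric ranges, and calibration over activation statistics, but the core idea is this mapping.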
Posted 3 weeks ago
1.0 years
5 - 9 Lacs
India
Remote
Job Title: AI Engineer – Computer Vision (Onsite/Remote, Full-Time)
Location: Remote/Onsite – Chennai
Duration: Full Time
Experience: 1-5 years preferred
Package: Up to 9 LPA (based on experience)
Role Overview:
As an AI Engineer, you’ll lead the design, development, and deployment of advanced computer vision models for human motion understanding and video analytics. You'll work across the full lifecycle of AI development, from data pipelines to inference optimization, helping ship features that power our next-generation fitness intelligence systems. This is a high-impact, engineering-heavy role with a strong focus on real-world deployment, performance, and scale, not academic prototyping.
Key Responsibilities:
1. Own the end-to-end lifecycle of computer vision models: from experimentation to deployment and monitoring.
2. Build and optimize deep learning models (YOLOv8, Vision Transformers, EfficientNet, SlowFast, I3D) for:
   i. Pose estimation and skeleton tracking
   ii. Activity recognition and classification
   iii. Posture correction and rep counting
3. Apply transfer learning, fine-tuning, and knowledge distillation techniques for performance and generalization.
4. Design scalable pipelines for video data ingestion, annotation, preprocessing, and augmentation.
5. Integrate AI modules into cloud-based (Azure/AWS) environments using REST APIs or microservices.
6. Optimize model inference using ONNX, TensorRT, and quantization strategies for real-time or edge deployment.
7. Collaborate cross-functionally with product, design, and frontend/backend teams to align on deliverables and timelines.
8. Stay up to date with the latest in vision research and evaluate new techniques for production integration.
Required Skills:
1. Bachelor’s or Master’s degree in Computer Science, AI, Machine Learning, or a related discipline.
2. 2+ years of experience building and deploying computer vision models in production environments.
3. Proficiency in Python, with deep experience in PyTorch and/or TensorFlow.
4. Hands-on expertise in models such as:
   a. YOLOv5/YOLOv8
   b. Vision Transformers (ViT, TimeSformer)
   c. ResNet, EfficientNet
   d. SlowFast, I3D, UNet
5. Experience working with pose estimation frameworks (Mediapipe, OpenPose, Detectron2).
6. Strong understanding of camera calibration, 3D geometry, and real-time motion tracking.
7. Experience integrating models into cloud environments (Azure, AWS) using APIs or containers.
8. Familiarity with tools such as OpenCV, MMAction2, DeepSort, and ONNX/TensorRT.
Preferred Skills:
Experience in real-time video analytics, edge AI, and system optimization.
Familiarity with CI/CD pipelines, Docker, and monitoring tools for deployed models.
Prior experience in the fitness or healthcare domain with time-series or movement data.
Strong grasp of system-level design: latency tradeoffs, hardware acceleration, and scalability.
What You’ll Gain
A key role in shaping the next generation of intelligent fitness systems.
Flexible work environment with learning and mentorship.
Autonomy, ownership, and the opportunity to deploy models used by real users.
A fast-paced environment with exposure to the full AI pipeline, from data to deployment.
Collaborative team culture with a focus on engineering excellence and innovation.
Job Type: Full-time
Pay: ₹500,000.00 - ₹900,000.00 per year
Benefits: Health insurance
Schedule: Fixed shift
Work Location: In person
Expected Start Date: 15/07/2025
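The pose-estimation and rep-counting work described in this posting can be illustrated with a minimal, framework-free sketch: given 2D keypoints from any pose estimator (e.g., Mediapipe), compute a joint angle and count repetitions with a hysteresis threshold. Function names and the angle thresholds are illustrative assumptions, not from the posting:

```python
import math

def joint_angle(a, b, c):
    """Angle at b (degrees) formed by keypoints a-b-c, e.g. hip-knee-ankle."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def count_reps(angles, down=90.0, up=160.0):
    """Hysteresis counter: one rep = angle dips below `down`, then rises above `up`.

    The gap between the two thresholds suppresses jitter around a single cutoff.
    """
    reps, in_down = 0, False
    for ang in angles:
        if ang < down:
            in_down = True
        elif ang > up and in_down:
            reps += 1
            in_down = False
    return reps
```

In a real pipeline the angle sequence would come from per-frame skeleton tracking, typically smoothed before thresholding.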
Posted 3 weeks ago
0.0 years
4 - 5 Lacs
Kānchipuram
On-site
Job Title: Cloud Operations Engineer
Experience: 0–1 year (entry level)
Location: Kanchipuram, Tamil Nadu
Employment Type: Full-time (rotational shifts – 24x7)
About the Role
We’re hiring a Cloud Operations Engineer to join our growing infrastructure team. In this role, you’ll be responsible for monitoring, maintaining, and responding to incidents in our production cloud environment. You will play a key role in ensuring the uptime, performance, and reliability of cloud-based systems across compute, networking, and storage. This is an ideal opportunity for candidates interested in the operational side of cloud infrastructure, incident response, and systems reliability, especially those with a passion for Linux, monitoring tools, and automation.
Your Responsibilities:
● Monitor the health and performance of cloud infrastructure using tools like Prometheus, Grafana, ELK, and Zabbix.
● Perform L1–L2 troubleshooting of compute, network, and storage issues.
● Respond to infrastructure alerts and incidents with a sense of urgency and ownership.
● Execute standard operating procedures (SOPs) for issue mitigation and escalation.
● Contribute to writing and improving incident response playbooks and runbooks.
● Participate in root cause analysis (RCA) and post-incident reviews.
● Automate routine operations using scripting and Infrastructure-as-Code (IaC) tools.
Technical Skills – Nice to Have (Not All Required)
We don’t expect you to have experience in every area.
If you’re eager to learn and have a solid foundation in Linux or cloud, you're encouraged to apply, even if you're still gaining experience in some of the areas below:
● Operating Systems: Linux (Debian/Ubuntu/CentOS/Rocky Linux)
● Monitoring & Logging: Prometheus, Grafana, ELK, Zabbix, Nagios
● Infrastructure Troubleshooting Tools: top, htop, netstat, iostat, tcpdump
● Networking: DNS, NAT, VPN, Load Balancers
● Cloud Services: VM provisioning, disk management, firewall rules
● Automation & Scripting: Bash, Python, Git
● IaC Tools: Ansible, Terraform (good to have)
● Incident Response & RCA: Familiarity with escalation procedures and documentation best practices
You Should Be Someone Who:
● Pays strong attention to detail and can respond under pressure
● Has solid analytical and troubleshooting skills
● Is comfortable working in shifts and taking ownership of incidents
● Communicates clearly and collaborates well with cross-functional teams
● Is eager to learn cloud automation, reliability, and monitoring practices
What You’ll Gain
● Hands-on experience in live cloud infrastructure operations
● Expertise in monitoring tools, alert handling, and system troubleshooting
● Real-world experience with DevOps practices, SOPs, and RCA processes
● Exposure to automation and Infrastructure-as-Code workflows
About the Company
E2E Networks Ltd. (https://www.e2enetworks.com/)
E2E Networks Limited is among India’s fastest-growing pure-play SSD cloud players and the sixth-largest IaaS platform in India. The E2E Networks high-performance cloud platform can be accessed via the self-service portal at https://myaccount.e2enetworks.com, where you can provision, manage, and monitor Linux/Windows/GPU cloud machines with high-performance CPUs, large memory (RAM), or Smart Dedicated Compute featuring dedicated CPU cores. We began in 2009 as a contractless computing player targeting the value-conscious segment of customers, especially startups.
Before there were hyperscalers in India, we were the premier choice of many of today’s soonicorns, unicorns, and well-established businesses for cloud infrastructure. E2E Networks Cloud was used by many successfully scaled-up startups, including Zomato, Cardekho, Cars24, Healthkart, Junglee Games, 1mg, Team-BHP, Instant Pay, WishFin, Algolia, Intrcity (RailYatri), Clovia, Groupon India (later Crazeal/Nearbuy), Jabong, and Tapzo, during a significant part of their journey from startup stage to multi-million DAUs (Daily Active Users).
In 2018, E2E Networks Ltd issued its IPO through NSE Emerge. Investors rushed in, oversubscribing the IPO 70 times and making it a huge success. Today, E2E Networks is the largest NSE-listed cloud provider, having served more than 10,000 customers, with thousands of active customers.
Our self-service public cloud platform enables rapid deployment of compute workloads. We provide cloud solutions via control panel or API, including CDN, Load Balancers, Firewalls, VPC, DBaaS, Reserved IPv4, Object Storage, DNS/rDNS, Continuous Data Protection, One-Click Installations, and many more features. This lowers project delivery costs by cutting down delivery timelines. Our collaboration with NVIDIA allows us to play a significant role in helping our customers run their AI/ML training/inference, data science, NLP, and computer vision workload pipelines.
Job Type: Full-time
Pay: ₹404,243.54 - ₹567,396.46 per year
Schedule: Rotational shift
Application Question(s): Do you have any basic hands-on experience or knowledge of Cloud Computing and Linux?
Work Location: In person
Speak with the employer: +91 9354505633
Posted 3 weeks ago