5.0 years
0 Lacs
India
On-site
About Bipolar Factory At Bipolar Factory, we are reimagining the boundaries of technology and creativity through cutting-edge AI solutions. Our team is building tools and products powered by real-time intelligence, and we’re looking for a skilled Computer Vision Engineer to join us in shaping the future of visual AI. If you’re excited by deep learning, edge AI, real-time detection, and large-scale vision pipelines, this is your place. Key Responsibilities Model Development : Design, train, and deploy computer vision models for tasks such as object detection, image segmentation, classification, and tracking. Pipeline Building : Build scalable, efficient, and production-ready vision pipelines using deep learning frameworks. Experimentation : Run experiments with state-of-the-art architectures (e.g., YOLO, Faster R-CNN, SAM, Vision Transformers), fine-tune pre-trained models, and benchmark performance. Data Handling : Work with large datasets—curate, augment, clean, and label image/video data for training and validation. Collaboration : Work closely with the AI team, product managers, and backend developers to integrate vision models into production systems. Research-Driven Engineering : Stay current with research trends and bring academic advancements into practical use cases. Optimization : Optimize models for real-time inference, edge devices, or low-resource environments. What We’re Looking For 3–5 years of experience working in computer vision or AI engineering roles. Proficiency in Python and deep learning frameworks like PyTorch or TensorFlow. Solid understanding of image processing, CNNs, and modern computer vision techniques. Experience with OpenCV and vision-based data preprocessing. Familiarity with model deployment frameworks (e.g., ONNX, TensorRT, FastAPI). Ability to write clean, modular, and well-documented code. Strong analytical skills and a mindset for experimentation. Nice to Have Experience with video analytics, real-time processing, or edge deployment (e.g., Jetson, Raspberry Pi). Familiarity with generative models (GANs, Diffusion models). Contributions to open-source CV/ML projects or research publications. Experience with cloud-based training or deployment (AWS, GCP). Why Join Bipolar Factory? Work on high-impact, experimental AI products from the ground up A fast-moving, low-hierarchy environment where your work is seen and valued Flexible schedules, creative freedom, and deep tech problems to solve A passionate team that thrives on innovation and iteration
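As a rough illustration of the detection work described in this posting, the sketch below loads a pretrained torchvision Faster R-CNN and prints confident detections for a single image. The model choice, file name, and 0.5 score threshold are illustrative assumptions, not requirements from the role.

```python
# Minimal object-detection sketch with a pretrained torchvision model.
# Assumptions: torchvision >= 0.13 (weights enum API) and a local image "frame.jpg".
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("frame.jpg")          # uint8 tensor, CHW
batch = [preprocess(img)]              # list of float tensors, as the detection API expects

with torch.no_grad():
    predictions = model(batch)[0]      # dict with boxes, labels, scores

# Keep only confident detections (0.5 is an arbitrary example threshold).
keep = predictions["scores"] > 0.5
for box, label, score in zip(predictions["boxes"][keep],
                             predictions["labels"][keep],
                             predictions["scores"][keep]):
    name = weights.meta["categories"][int(label)]
    print(f"{name}: {score:.2f} at {box.tolist()}")
```

In a production pipeline of the kind the posting describes, this inference step would sit behind data loading, tracking, and deployment/optimization layers rather than run as a standalone script.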
Posted 1 week ago
3.0 - 5.0 years
9 - 13 Lacs
Jaipur
Work from Office
Job Summary We're seeking a hands-on GenAI & Computer Vision Engineer with 3-5 years of experience delivering production-grade AI solutions. You must be fluent in the core libraries, tools, and cloud services listed below, and able to own end-to-end model development, from research and fine-tuning through deployment, monitoring, and iteration. In this role, you'll tackle domain-specific challenges like LLM hallucinations, vector search scalability, real-time inference constraints, and concept drift in vision models. Key Responsibilities Generative AI & LLM Engineering Fine-tune and evaluate LLMs (Hugging Face Transformers, Ollama, LLaMA) for specialized tasks Deploy high-throughput inference pipelines using vLLM or Triton Inference Server Design agent-based workflows with LangChain or LangGraph, integrating vector databases (Pinecone, Weaviate) for retrieval-augmented generation Build scalable inference APIs with FastAPI or Flask, managing batching, concurrency, and rate-limiting Computer Vision Development Develop and optimize CV models (YOLOv8, Mask R-CNN, ResNet, EfficientNet, ByteTrack) for detection, segmentation, classification, and tracking Implement real-time pipelines using NVIDIA DeepStream or OpenCV (cv2); optimize with TensorRT or ONNX Runtime for edge and cloud deployments Handle data challenges (augmentation, domain adaptation, semi-supervised learning) and mitigate model drift in production MLOps & Deployment Containerize models and services with Docker; orchestrate with Kubernetes (KServe) or AWS SageMaker Pipelines Implement CI/CD for model/version management (MLflow, DVC), automated testing, and performance monitoring (Prometheus + Grafana) Manage scalability and cost by leveraging cloud autoscaling on AWS (EC2/EKS), GCP (Vertex AI), or Azure ML (AKS) Cross-Functional Collaboration Define SLAs for latency, accuracy, and throughput alongside product and DevOps teams Evangelize best practices in prompt engineering, model governance, data privacy, and interpretability Mentor junior engineers on reproducible research, code reviews, and end-to-end AI delivery Required Qualifications You must be proficient in at least one tool from each category below: LLM Frameworks & Tooling: Hugging Face Transformers, Ollama, vLLM, or LLaMA Agent & Retrieval Tools: LangChain or LangGraph; RAG with Pinecone, Weaviate, or Milvus Inference Serving: Triton Inference Server; FastAPI or Flask Computer Vision Frameworks & Libraries: PyTorch or TensorFlow; OpenCV (cv2) or NVIDIA DeepStream Model Optimization: TensorRT; ONNX Runtime; Torch-TensorRT MLOps & Versioning: Docker and Kubernetes (KServe, SageMaker); MLflow or DVC Monitoring & Observability: Prometheus; Grafana Cloud Platforms: AWS (SageMaker, EC2/EKS) or GCP (Vertex AI, AI Platform) or Azure ML (AKS, ML Studio) Programming Languages: Python (required); C++ or Go (preferred) Additionally: Bachelor's or Master's in Computer Science, Electrical Engineering, AI/ML, or a related field 3-5 years of professional experience shipping both generative and vision-based AI models in production Strong problem-solving mindset; ability to debug issues like LLM drift, vector index staleness, and model degradation Excellent verbal and written communication skills Typical Domain Challenges You'll Solve LLM Hallucination & Safety: Implement grounding, filtering, and classifier layers to reduce false or unsafe outputs Vector DB Scaling: Maintain low-latency, high-throughput similarity search as embeddings grow to millions Inference Latency: Balance batch sizing and concurrency to meet real-time SLAs on cloud and edge hardware Concept & Data Drift: Automate drift detection and retraining triggers in vision and language pipelines Multi-Modal Coordination: Seamlessly orchestrate data flow between vision models and LLM agents in complex workflows
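To make the "scalable inference APIs with FastAPI" responsibility concrete, here is a minimal, hedged sketch of a generation endpoint. The checkpoint name, request schema, and route are placeholders; a real service would add the batching, rate limiting, auth, and monitoring the posting mentions.

```python
# Minimal FastAPI inference-endpoint sketch (illustrative only).
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
# Hypothetical model choice; any text-generation checkpoint could be substituted.
generator = pipeline("text-generation", model="distilgpt2")

class Prompt(BaseModel):
    text: str
    max_new_tokens: int = 64

@app.post("/generate")
def generate(req: Prompt):
    out = generator(req.text, max_new_tokens=req.max_new_tokens)
    return {"completion": out[0]["generated_text"]}

# Run locally (example): uvicorn app:app --host 0.0.0.0 --port 8000
```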
Posted 1 week ago
2.0 - 5.0 years
4 - 8 Lacs
Bengaluru
Work from Office
A hands-on engineering position responsible for designing, automating, and maintaining robust build systems and deployment pipelines for AI/ML components, with direct development responsibilities in C++ and Python. The role supports both model training infrastructure and high-performance inference systems. Design and implement robust build automation systems that support large, distributed AI/C++/Python codebases. Develop tools and scripts that enable developers and researchers to rapidly iterate, test, and deploy across diverse environments. Integrate C++ components with Python-based AI workflows, ensuring compatibility, performance, and maintainability. Lead the creation of portable, reproducible development environments, ensuring parity between development and production. Maintain and extend CI/CD pipelines for Linux and z/OS, implementing best practices in automated testing, artifact management, and release validation. Collaborate with cross-functional teams — including AI researchers, system architects, and mainframe engineers — to align infrastructure with strategic goals. Proactively monitor and improve build performance, automation coverage, and system reliability. Contribute to internal documentation, process improvements, and knowledge sharing to scale your impact across teams. Required education Bachelor's Degree Preferred education Bachelor's Degree Required technical and professional expertise Strong programming skills in C++ and Python, with a deep understanding of both compiled and interpreted language paradigms. Hands-on experience building and maintaining complex automation pipelines (CI/CD) using tools like Jenkins or GitLab CI. In-depth experience with build tools and systems such as CMake, Make, Meson, or Ninja, including custom script development and cross-compilation. Experience working on multi-platform development, specifically on Linux and IBM z/OS environments, including understanding of their respective toolchains and constraints. Experience integrating native C++ code with Python, leveraging pybind11, Cython, or similar tools for high-performance interoperability. Proven ability to troubleshoot and resolve build-time, runtime, and integration issues in large-scale, multi-component systems. Comfortable with shell scripting (Bash, Zsh, etc.) and system-level operations. Familiarity with containerization technologies like Docker for development and deployment environments. Preferred technical and professional experience Working knowledge of AI/ML frameworks such as PyTorch, TensorFlow, or ONNX, including understanding of how they integrate into production environments. Experience developing or maintaining software on IBM z/OS mainframe systems. Familiarity with z/OS build and packaging workflows. Understanding of system performance tuning, especially in high-throughput compute or I/O environments (e.g., large model training or inference). Knowledge of GPU computing and low-level profiling/debugging tools. Experience managing long-lifecycle enterprise systems and ensuring compatibility across releases and deployments. Background contributing to or maintaining open-source projects in the infrastructure, DevOps, or AI tooling space. Proficiency in distributed systems, microservice architecture, and REST APIs. Experience in collaborating with cross-functional teams to integrate MLOps pipelines with CI/CD tools for continuous integration and deployment, ensuring seamless integration of AI/ML models into production workflows.
Strong communication skills with the ability to communicate technical concepts effectively to non-technical stakeholders. Demonstrated excellence in interpersonal skills, fostering collaboration across diverse teams. Proven track record of ensuring compliance with industry best practices and standards in AI engineering. A record of maintaining high standards of code quality, performance, and security in AI projects.
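The "integrate native C++ code with Python using pybind11" requirement can be illustrated from the build-automation side with a small, hedged setup script. The module and source file names here are hypothetical; real projects often drive the same build through CMake instead.

```python
# setup.py sketch for building a C++ extension that Python AI code can import.
# Assumptions: a local "fast_ops.cpp" that defines a pybind11 module named "fast_ops".
from pybind11.setup_helpers import Pybind11Extension, build_ext
from setuptools import setup

ext_modules = [
    Pybind11Extension(
        "fast_ops",           # import name used from Python
        ["fast_ops.cpp"],     # C++ sources
        cxx_std=17,           # build with C++17
    ),
]

setup(
    name="fast_ops",
    version="0.1.0",
    ext_modules=ext_modules,
    cmdclass={"build_ext": build_ext},  # pybind11's helper picks sensible compiler flags
)

# After `pip install .`, Python-side AI workflows can simply `import fast_ops`.
```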
Posted 1 week ago
2.0 - 5.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Provide technical leadership in the design, development, and maintenance of scalable build systems and deployment pipelines for AI/ML components, setting standards for quality, reliability, and performance. Mentor and guide a team of engineers, promoting best practices in C++, Python, CI/CD, and infrastructure automation. Design and implement robust build automation systems that support large, distributed AI/C++/Python codebases. Develop tools and scripts to enable developers and researchers to rapidly iterate, test, and deploy across diverse environments. Integrate C++ components with Python-based AI workflows, ensuring compatibility, performance, and maintainability. Lead the creation of portable, reproducible development environments, ensuring parity between development and production systems. Maintain and extend CI/CD pipelines for Linux and z/OS, applying best practices in automated testing, artifact management, and release validation. Collaborate with cross-functional teams — including AI researchers, system architects, and mainframe engineers — to align infrastructure with strategic and technical goals. Proactively monitor and improve build performance, automation coverage, and system reliability, identifying opportunities for innovation and optimization. Contribute to internal documentation, process improvements, and knowledge sharing to scale impact across teams and foster a culture of continuous improvement. Required education Bachelor's Degree Preferred education Bachelor's Degree Required technical and professional expertise Expert-level programming skills in C++ and Python, with a strong grasp of both compiled and interpreted language paradigms; able to provide architectural guidance and code-level mentorship. Demonstrated leadership in building and maintaining complex automation pipelines (CI/CD) using tools like Jenkins or GitLab CI, including the ability to define strategy, review team contributions, and drive implementation. In-depth experience with build tools and systems such as CMake, Make, Meson, or Ninja, including development of custom scripts and support for cross-compilation in heterogeneous environments. Proven experience leading multi-platform development efforts, particularly on Linux and IBM z/OS, with a deep understanding of platform-specific toolchains, constraints, and performance considerations. Expertise in integrating native C++ code with Python using tools like pybind11 or Cython, ensuring high-performance and maintainable interoperability across language boundaries. Strong diagnostic and debugging skills, with the ability to lead teams in resolving build-time, runtime, and integration issues in large-scale, multi-component systems. Proficiency in shell scripting (e.g., Bash, Zsh) and system-level operations, with the ability to coach others in scripting best practices. Familiarity with containerization technologies like Docker, and a track record of leading the adoption or optimization of container-based development and deployment workflows. Excellent communication and collaboration skills, with the ability to coordinate across disciplines, align technical efforts with strategic goals, and foster a high-performing engineering culture. Preferred technical and professional experience Working knowledge of AI/ML frameworks such as PyTorch, TensorFlow, or ONNX, with an understanding of how to integrate them into scalable, production-grade environments, and the ability to guide teams in best practices for deployment and optimization.
Experience developing or maintaining software on IBM z/OS mainframe systems, with the ability to mentor others in navigating legacy-modern hybrid ecosystems. Familiarity with z/OS build and packaging workflows, including leading efforts to streamline and modernize tooling where appropriate. Solid understanding of system performance tuning in high-throughput compute and I/O environments (e.g., large-scale model training or inference pipelines), and the ability to direct optimization strategies. Knowledge of GPU computing and low-level profiling/debugging tools, with experience driving performance-critical initiatives. Experience managing long-lifecycle enterprise systems, ensuring forward- and backward-compatibility across releases and deployments through proactive planning and versioning strategies. Background contributing to or maintaining open-source projects in infrastructure, DevOps, or AI tooling domains, with a focus on community engagement and sustainability. Proficiency in distributed systems, microservice architectures, and REST APIs, including guiding architectural decisions that balance performance, maintainability, and scalability. Proven experience leading integration of MLOps pipelines with CI/CD frameworks, ensuring seamless, secure, and automated deployment of AI/ML models into production workflows. Exceptional communication and stakeholder management skills, capable of clearly articulating technical strategies and trade-offs to non-technical audiences. Demonstrated ability to foster collaboration and alignment across diverse, cross-functional teams, including AI researchers, DevOps engineers, and enterprise architects. Track record of ensuring compliance with industry standards, security policies, and best practices in enterprise-scale AI engineering. Commitment to maintaining high standards of code quality, performance, and security, with the ability to review and enforce standards across a team or organization.
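For the MLOps pipeline integration mentioned above, a minimal, hedged MLflow tracking sketch shows the kind of experiment and artifact versioning that CI/CD jobs typically invoke. The experiment name, parameters, and metric values are made up for illustration.

```python
# Minimal MLflow tracking sketch for model/version management in a build pipeline.
import mlflow

mlflow.set_experiment("build-pipeline-demo")

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 1e-3)
    mlflow.log_param("batch_size", 32)

    for epoch in range(3):
        # In a real pipeline these numbers come from training/evaluation code.
        mlflow.log_metric("val_accuracy", 0.80 + 0.01 * epoch, step=epoch)

    # Artifacts (model files, plots, reports) are versioned alongside the run.
    with open("report.txt", "w") as f:
        f.write("example artifact\n")
    mlflow.log_artifact("report.txt")
```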
Posted 1 week ago
11.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking an experienced AI/ML Engineer to design and implement cutting-edge solutions in the field of Generative AI and Large Language Models (LLMs). This role involves leading the development of intelligent agents, prompt optimization, and scalable AI pipelines, while mentoring team members and driving innovation in applied AI. You will work at the intersection of machine learning, prompt engineering, and production-grade AI orchestration, contributing to the deployment of next-gen cognitive systems across telecom and enterprise environments. Key Responsibilities: AI/ML Development: • Design and develop production-ready AI/ML solutions using LLMs and Generative AI models. • Fine-tune and optimize foundation models for domain-specific use cases. • Architect and deploy Retrieval-Augmented Generation (RAG) pipelines and multi-agent systems. Agentic AI & Prompt Engineering: • Build intelligent AI agents using frameworks such as LangChain, CrewAI, AutoGen. • Engineer effective prompts for complex language tasks and continuous learning environments. • Orchestrate autonomous decision-making pipelines using agent-based design. Model Lifecycle & MLOps: • Implement best practices for responsible AI, model explainability, and fairness. • Own model deployment, monitoring, optimization, and cost control. • Work with MLOps tools for automation and CI/CD in model delivery pipelines. Data Engineering & Preparation: • Handle advanced data preprocessing, feature engineering, and quality checks. • Leverage tools like Pandas, Numpy, Polars for exploration and transformation. • Integrate with vector databases such as Pinecone, ChromaDB, Weaviate for semantic search applications. What You’ll Bring: • 7–11 years of experience as an AI/ML Engineer or Data Scientist with a proven track record in delivering ML solutions to production. • Expertise in: - ML/DL frameworks: TensorFlow, PyTorch, Keras, Sklearn - LLM ecosystems: LangChain, LlamaIndex, CrewAI - Foundation models, RAG pipelines, and agentic AI systems • Strong programming in Python (R is a plus) • Deep knowledge of: - ML/DL algorithms and probabilistic modeling - Statistics, feature engineering, and data wrangling - AI governance and ethical development practices • Hands-on experience in: - Prompt engineering and LLM fine-tuning - Vector databases and memory integration - Model performance tuning, compression, and deployment (ONNX, TorchScript, quantization) Why Join Us? • Impactful Work: Play a key role in building scalable AI solutions that power real-time telecom, messaging, and enterprise platforms. • Tremendous Growth Opportunities: Be part of a dynamic, fast-scaling AI team solving real-world problems. • Innovative Environment: Collaborate with passionate engineers and researchers in a culture that celebrates experimentation, learning, and innovation. Tanla is an equal opportunity employer. We champion diversity and are committed to creating an inclusive workplace for all.
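As a hedged illustration of the retrieval step behind the RAG pipelines and vector databases named above, the sketch below embeds a few documents and ranks them against a query. A plain NumPy dot product stands in for Pinecone, ChromaDB, or Weaviate, and the embedding model name is an assumption.

```python
# Tiny retrieval sketch: embed documents, then find nearest neighbours for a query.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed embedding model

docs = [
    "Reset a SIM by powering the device off and on.",
    "Roaming charges apply outside the home network.",
    "APN settings control mobile data connectivity.",
]
doc_vecs = model.encode(docs, normalize_embeddings=True)

query = "How do I fix mobile data issues?"
q_vec = model.encode([query], normalize_embeddings=True)[0]

scores = doc_vecs @ q_vec                 # cosine similarity (vectors are normalized)
top = np.argsort(-scores)[:2]
context = "\n".join(docs[i] for i in top)
print(context)  # in a RAG pipeline, this retrieved context is inserted into the LLM prompt
```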
Posted 1 week ago
2.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Applied Scientist – Computational Vision (Vision LLMs & Video Intelligence) Company Overview: Accrete AI is a dynamic and innovative company focused on transforming the future of artificial intelligence. We specialize in creating advanced AI solutions that turn complex data into actionable insights, driving real-world impact for businesses and government organizations. Our team thrives on creativity and collaboration, working together to push the boundaries of AI technology. At the core of our offerings are AI agents—autonomous systems that analyze multimodal data, generate insights, and make intelligent recommendations. These agents help businesses streamline operations, improve decision-making, and empower government entities to enhance security, intelligence, and operational efficiency. Job Description: We are seeking a highly motivated and innovative Applied Scientist to join our research team in Mumbai, India to drive cutting-edge research and development. In this role, you will spearhead the development of intelligent computer vision solutions leveraging real-time video data, vision-language models (VLMs), and advanced architectures. Your work will focus on solving complex real-world problems using multimodal learning, video understanding, self-supervised techniques, and LLM-enhanced vision models. You will have the opportunity to work at the intersection of vision, language, and reasoning, driving research and innovation from ideation through deployment. Key Responsibilities: Research and build state-of-the-art computer vision systems with a focus on real-time video analytics, video summarization, object tracking, and activity recognition. Develop and apply Vision-Language Models (VLMs) and multimodal transformer architectures for deep semantic understanding of visual content. Apply self-supervised, zero-shot, and few-shot learning techniques to enhance model generalization across varied video domains. Explore and optimize LLM prompting strategies and cross-modal alignment methods for improved reasoning over vision data. Contribute to research publications, patents, and internal IP assets in the area of vision and multimodal AI. Required Qualifications: Master's in Computer Science, Computer Vision, Machine Learning, or a related discipline with 2+ years of experience leading applied research or product-focused CV/ML projects. Expertise in modern computer vision architectures (e.g., ViT, SAM, CLIP, BLIP, DETR, or similar). Experience with Vision-Language Models (VLMs) and multimodal AI systems. Strong background in real-time video analysis, including event detection, motion analysis, and temporal reasoning. Experience with transformer-based architectures, multimodal embeddings, and LLM-vision integrations. Proficiency in Python and deep learning libraries such as PyTorch or TensorFlow, and OpenCV. Experience with cloud platforms (AWS, Azure) and deployment frameworks (ONNX, TensorRT) is a plus. Strong problem-solving skills, with a track record of end-to-end ownership of applied ML/CV projects. Excellent communication and collaboration skills, with the ability to work in cross-functional teams. Why Join Us: Innovative Environment: Be part of a team that's at the forefront of technological innovation, utilizing GenAI and ML to create groundbreaking solutions. Collaborative Culture: Work in a collaborative environment where your ideas are valued, and you have the opportunity to make a real impact. We provide a flexible work environment.
Professional Growth: We're committed to your professional growth and development, offering opportunities for learning and advancement. Competitive Compensation: Enjoy a competitive compensation and benefits package, including medical insurance, with opportunities for growth and advancement within the company. If you are an innovative and results-driven leader, we encourage you to apply for this exciting opportunity. Accrete is an equal opportunity/affirmative action employer. All qualified applicants will receive consideration for employment without regard to sex, gender identity, sexual orientation, race, color, religion, national origin, disability, protected Veteran status, age, or any other characteristic protected by law.
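The vision-language modelling this role centres on can be illustrated with a small, hedged zero-shot classification sketch using CLIP from Hugging Face transformers. The checkpoint, image file, and label prompts are assumptions chosen only for the example.

```python
# Zero-shot image classification sketch with CLIP (a representative vision-language model).
# Assumptions: transformers + Pillow installed, and a local "frame.jpg".
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("frame.jpg")
labels = ["a person walking", "a parked car", "an empty street"]  # example text prompts

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity scores, normalized into probabilities over the label prompts.
probs = outputs.logits_per_image.softmax(dim=-1)[0]
for label, p in zip(labels, probs):
    print(f"{label}: {p:.2%}")
```

The same alignment between image and text embeddings is what video-level VLM systems build on, typically with temporal pooling and prompting strategies layered on top.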
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
As a high-impact AI/ML Engineer, you will lead the design, development, and deployment of machine learning and AI solutions across vision, audio, and language modalities. You will be an integral part of a fast-paced, outcome-oriented AI & Analytics team, collaborating with data scientists, engineers, and product leaders to translate business use cases into real-time, scalable AI systems. Your responsibilities in this role will include architecting, developing, and deploying ML models for multimodal problems encompassing vision, audio, and NLP tasks. You will be responsible for the complete ML lifecycle, from data ingestion to model development, experimentation, evaluation, deployment, and monitoring. Leveraging transfer learning and self-supervised approaches where appropriate, you will design and implement scalable training pipelines and inference APIs using frameworks like PyTorch or TensorFlow. Collaborating with MLOps, data engineering, and DevOps teams, you will operationalize models using technologies such as Docker, Kubernetes, or serverless infrastructure. Continuously monitoring model performance and implementing retraining workflows to ensure sustained accuracy over time will be a key aspect of your role. You will stay informed about cutting-edge AI research and incorporate innovations such as generative AI, video understanding, and audio embeddings into production systems. Writing clean, well-documented, and reusable code to support agile experimentation and long-term platform development is an essential part of this position. To qualify for this role, you should hold a Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field, with a minimum of 5-8 years of experience in AI/ML Engineering, including at least 3 years in applied deep learning. In terms of technical skills, you should be proficient in Python, with knowledge of R or Java being a plus. Additionally, you should have expertise in ML/DL frameworks like PyTorch, TensorFlow, and Scikit-learn, as well as experience in computer vision tasks such as image classification, object detection, OCR, segmentation, and tracking. Familiarity with audio AI tasks like speech recognition, sound classification, and audio embedding models is also desirable. Strong capabilities in data engineering using tools like Pandas, NumPy, SQL, and preprocessing pipelines for structured and unstructured data are required. Knowledge of NLP/LLMs, cloud and MLOps services, deployment and infrastructure technologies, and CI/CD and version control tools is also beneficial. Soft skills and competencies that will be valuable in this role include strong analytical and systems thinking, effective communication skills to convey models and results to non-technical stakeholders, the ability to work cross-functionally with various teams, and a demonstrated bias for action, rapid experimentation, and iterative delivery of impact.
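The transfer-learning approach mentioned in this role can be sketched briefly: reuse a pretrained backbone and retrain only a new task head. The 5-class head, learning rate, and dummy batch below are placeholders for whatever the actual task requires.

```python
# Transfer-learning sketch: freeze a pretrained backbone, train a new classification head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

for param in model.parameters():               # freeze the pretrained backbone
    param.requires_grad = False

model.fc = nn.Linear(model.fc.in_features, 5)  # new task-specific head (5 classes assumed)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch:
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 5, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```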
Posted 1 week ago
2.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Position AI ML Engineer Job Description 2 to 5+ years of experience in understanding the problem statement, data handling/triage, and selecting and improving neural network models using deep learning frameworks to solve business problems. Excellent hands-on coding in Python (mandatory), C/C++ or Java. Experience in developing AI/ML/DL models using transfer learning or from scratch. Reasonably good knowledge of leading deep learning frameworks like TensorFlow, PyTorch, ONNX, Keras and others. Working experience with computer vision models like YOLO, MobileNet, ResNet, etc. Good understanding and working knowledge of AI/ML/DL on the edge (e.g., NVIDIA Jetson family, Qualcomm, Intel, Raspberry Pi). Thorough understanding and experience of the DL/AI/ML lifecycle - the full neural network pipeline, from data collection to model building to experimental framework to data analytics. Developed/optimized various models in the computing domain (video, statistics, audio and others). Demonstrated experience in completing data science projects with minimal supervision. Must possess a conceptual understanding of various modelling techniques, and the pros and cons of each technique. Added Advantage Proven record of migrating AI/ML/DL models/algorithms to low-level platforms. Familiar with optimizing code for minimal usage of CPU and memory. Location: IN-GJ-Ahmedabad, India-Ognaj (eInfochips) Time Type Full time Job Category Engineering Services
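The edge deployment path this posting describes typically goes through ONNX. Below is a hedged sketch of exporting a pretrained MobileNet and checking it with ONNX Runtime on CPU; the opset version and file name are illustrative choices, and the same file could then be handed to TensorRT or OpenVINO on the target device.

```python
# Export a pretrained MobileNet to ONNX and verify it with ONNX Runtime.
# Assumes a recent PyTorch/torchvision; opset 17 is an example choice.
import numpy as np
import torch
import onnxruntime as ort
from torchvision import models

model = models.mobilenet_v3_small(weights="DEFAULT").eval()
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model, dummy, "mobilenet_v3_small.onnx",
    input_names=["input"], output_names=["logits"],
    opset_version=17,
)

# CPU inference here; on an edge target the exported graph is optimized further.
session = ort.InferenceSession("mobilenet_v3_small.onnx", providers=["CPUExecutionProvider"])
logits = session.run(None, {"input": dummy.numpy().astype(np.float32)})[0]
print("top-1 class index:", int(np.argmax(logits)))
```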
Posted 1 week ago
2.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Position AI ML Engineer Job Description 2 to 5+ years of experience in understanding the problem statement, data handling/triage, and selecting and improving neural network models using deep learning frameworks to solve business problems. Excellent hands-on coding in Python (mandatory), C/C++ or Java. Experience in developing AI/ML/DL models using transfer learning or from scratch. Reasonably good knowledge of leading deep learning frameworks like TensorFlow, PyTorch, ONNX, Keras and others. Working experience with computer vision models like YOLO, MobileNet, ResNet, etc. Good understanding and working knowledge of AI/ML/DL on the edge (e.g., NVIDIA Jetson family, Qualcomm, Intel, Raspberry Pi). Thorough understanding and experience of the DL/AI/ML lifecycle - the full neural network pipeline, from data collection to model building to experimental framework to data analytics. Developed/optimized various models in the computing domain (video, statistics, audio and others). Demonstrated experience in completing data science projects with minimal supervision. Must possess a conceptual understanding of various modelling techniques, and the pros and cons of each technique. Added Advantage Proven record of migrating AI/ML/DL models/algorithms to low-level platforms. Familiar with optimizing code for minimal usage of CPU and memory. Location: IN-GJ-Ahmedabad, India-Ognaj (eInfochips) Time Type Full time Job Category Engineering Services
Posted 1 week ago
2.0 - 7.0 years
10 - 15 Lacs
Noida
Work from Office
We are seeking a skilled and passionate .NET AI & ML Developer to join our technology team. The ideal candidate will have strong experience in .NET/.NET Core development with a focus on integrating Artificial Intelligence (AI) and Machine Learning (ML) solutions into real-world applications. You will work closely with cross-functional teams to develop, train, and deploy AI/ML models using technologies like OpenCvSharp, ONNX Runtime, and C#. Key Responsibilities: Design, develop, and maintain .NET/.NET Core applications with AI & ML capabilities. Build, train, and deploy machine learning models using ONNX Runtime and other modern frameworks. Work with OpenCvSharp to integrate image and video processing into .NET-based applications. Apply ML principles in software development, including model serialization, inference, and optimization. Collaborate with data scientists, software developers, and product managers to deliver intelligent features. Ensure high-quality code that is maintainable, scalable, and aligned with best practices. Implement dependency injection, modular architecture, and clean code principles. Participate in code reviews, architecture discussions, and continuous improvement efforts. Required Skills & Qualifications: Proven experience in .NET Framework and .NET Core development. Proficiency in the C# programming language. Solid understanding of AI/ML fundamentals, model training, deployment, and serialization. Experience with OpenCvSharp or similar .NET image processing libraries. Strong knowledge of ONNX Runtime and deploying machine learning models in production. Experience integrating ML models into real-time or offline applications. Understanding of dependency injection and modern architectural patterns. Ability to write clean, efficient, and maintainable code. Excellent problem-solving, analytical, and communication skills. Preferred Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, AI/ML, or a related field. Experience working in Agile/Scrum development environments. Exposure to cloud platforms like Azure or AWS for ML model deployment. Familiarity with RESTful APIs and microservices architecture.
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description 💰 Compensation Note: The budget for this role is fixed at INR 50–55 lakhs per annum (non-negotiable). Please ensure this aligns with your expectations before applying. 📍 Work Setup: This is a hybrid role , requiring 3 days per week onsite at the office in Hyderabad, Bangalore or Pune, India . Company Description: Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients. Job Description : We are looking for an AI Engineer with experience in Speech-to-text and Text Generation to solve a Conversational AI challenge for our client based in EMEA. The focus of this project is to transcribe conversations and leverage generative AI-powered text analytics to drive better engagement strategies and decision-making. The ideal candidate will have deep expertise in Speech-to-Text (STT), Natural Language Processing (NLP), Large Language Models (LLMs), and Conversational AI systems. This role involves working on real-time transcription, intent analysis, sentiment analysis, summarization, and decision-support tools. Key Responsibilities: Conversational AI & Call Transcription Development Develop and fine-tune automatic speech recognition (ASR) models Implement language model fine-tuning for industry-specific language. Develop speaker diarization techniques to distinguish speakers in multi-speaker conversations. NLP & Generative AI Applications Build summarization models to extract key insights from conversations. Implement Named Entity Recognition (NER) to identify key topics. Apply LLMs for conversation analytics and context-aware recommendations. Design custom RAG (Retrieval-Augmented Generation) pipelines to enrich call summaries with external knowledge. Sentiment Analysis & Decision Support Develop sentiment and intent classification models. Create predictive models that suggest next-best actions based on call content, engagement levels, and historical data. AI Deployment & Scalability Deploy AI models using tools like AWS, GCP, Azure AI, ensuring scalability and real-time processing. Optimize inference pipelines using ONNX, TensorRT, or Triton for cost-effective model serving. Implement MLOps workflows to continuously improve model performance with new call data. Qualifications: Technical Skills Strong experience in Speech-to-Text (ASR), NLP, and Conversational AI. Hands-on expertise with tools like Whisper, DeepSpeech, Kaldi, AWS Transcribe, Google Speech-to-Text. Proficiency in Python, PyTorch, TensorFlow, Hugging Face Transformers. Experience with LLM fine-tuning, RAG-based architectures, and LangChain. Hands-on experience with Vector Databases (FAISS, Pinecone, Weaviate, ChromaDB) for knowledge retrieval. Experience deploying AI models using Docker, Kubernetes, FastAPI, Flask. Soft Skills Ability to translate AI insights into business impact. Strong problem-solving skills and ability to work in a fast-paced AI-first environment. 
Excellent communication skills to collaborate with cross-functional teams, including data scientists, engineers, and client stakeholders. Preferred Qualifications Experience in healthcare, pharma, or life sciences NLP use cases. Background in knowledge graphs, prompt engineering, and multimodal AI. Experience with Reinforcement Learning (RLHF) for improving conversation models.
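As a hedged illustration of the transcription step this role revolves around, the sketch below uses the open-source Whisper package to transcribe a call recording. The model size and file name are assumptions; diarization, summarization, and sentiment models would consume these segments downstream.

```python
# Call-transcription sketch using the open-source Whisper package.
import whisper

model = whisper.load_model("base")              # "base" chosen only for the example
result = model.transcribe("support_call.wav")   # placeholder audio file

print(result["text"])             # full transcript
for seg in result["segments"]:    # timestamped segments, useful before diarization/summarization
    print(f'[{seg["start"]:.1f}s - {seg["end"]:.1f}s] {seg["text"]}')
```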
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description 💰 Compensation Note: The budget for this role is fixed at INR 50–55 lakhs per annum (non-negotiable). Please ensure this aligns with your expectations before applying. 📍 Work Setup: This is a hybrid role , requiring 3 days per week onsite at the office in Hyderabad, Bangalore or Pune, India . Company Description: Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients. Job Description : We are looking for an AI Engineer with experience in Speech-to-text and Text Generation to solve a Conversational AI challenge for our client based in EMEA. The focus of this project is to transcribe conversations and leverage generative AI-powered text analytics to drive better engagement strategies and decision-making. The ideal candidate will have deep expertise in Speech-to-Text (STT), Natural Language Processing (NLP), Large Language Models (LLMs), and Conversational AI systems. This role involves working on real-time transcription, intent analysis, sentiment analysis, summarization, and decision-support tools. Key Responsibilities: Conversational AI & Call Transcription Development Develop and fine-tune automatic speech recognition (ASR) models Implement language model fine-tuning for industry-specific language. Develop speaker diarization techniques to distinguish speakers in multi-speaker conversations. NLP & Generative AI Applications Build summarization models to extract key insights from conversations. Implement Named Entity Recognition (NER) to identify key topics. Apply LLMs for conversation analytics and context-aware recommendations. Design custom RAG (Retrieval-Augmented Generation) pipelines to enrich call summaries with external knowledge. Sentiment Analysis & Decision Support Develop sentiment and intent classification models. Create predictive models that suggest next-best actions based on call content, engagement levels, and historical data. AI Deployment & Scalability Deploy AI models using tools like AWS, GCP, Azure AI, ensuring scalability and real-time processing. Optimize inference pipelines using ONNX, TensorRT, or Triton for cost-effective model serving. Implement MLOps workflows to continuously improve model performance with new call data. Qualifications: Technical Skills Strong experience in Speech-to-Text (ASR), NLP, and Conversational AI. Hands-on expertise with tools like Whisper, DeepSpeech, Kaldi, AWS Transcribe, Google Speech-to-Text. Proficiency in Python, PyTorch, TensorFlow, Hugging Face Transformers. Experience with LLM fine-tuning, RAG-based architectures, and LangChain. Hands-on experience with Vector Databases (FAISS, Pinecone, Weaviate, ChromaDB) for knowledge retrieval. Experience deploying AI models using Docker, Kubernetes, FastAPI, Flask. Soft Skills Ability to translate AI insights into business impact. Strong problem-solving skills and ability to work in a fast-paced AI-first environment. 
Excellent communication skills to collaborate with cross-functional teams, including data scientists, engineers, and client stakeholders. Preferred Qualifications Experience in healthcare, pharma, or life sciences NLP use cases. Background in knowledge graphs, prompt engineering, and multimodal AI. Experience with Reinforcement Learning (RLHF) for improving conversation models.
Posted 1 week ago
5.0 - 10.0 years
20 - 35 Lacs
Bengaluru
Work from Office
ONNX implementation and optimization on AIX: Strong application developer with deep knowledge of compiler behaviour when implementing numerically intensive AI algorithms. Understanding of how to vectorize and optimize code, and how to communicate the benefits and behaviour of the optimized code. Requires knowledge of algorithms used in mathematical modelling, simulation, machine learning, and particularly ONNX. Requires demonstrated experience implementing these algorithms in applications that require robustness and performance. The job will require an understanding of analysing performance and data-handling issues, such as efficient handling of endianness formats, to achieve the best possible performance. This is to be accomplished using new algorithms, advanced processor features, and leveraging those features through advanced compiler optimization features and libraries. The candidate will have broad awareness of how to implement algorithms to deliver performance gains consistent with the application's requirements. Required skills Development experience with the numeric algorithms used in mathematical modelling, simulation, machine learning, and particularly ONNX Experience with C and C++ application programming using one or more of these compilers: GCC, XL C, ICC, CLANG/LLVM, AOCC Experience applying numeric algorithms in complex multi-threaded, multiprocessing applications on UNIX or Linux OS Experience debugging runtime issues in large-scale projects Familiarity with Python-based coding Familiarity with the Java Development Kit (JDK) and Java Virtual Machine (JVM) Preferred skills Open-source contributions, system programming, networking (distributed/parallel applications) Application performance optimization investigation & analysis using tools like valgrind, perf, Nectar, PMU, pipestat, nmon
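The endianness concern called out above matters whenever model weights or tensors serialized on a little-endian machine are consumed on a big-endian platform such as AIX. The posting targets C/C++, but as a language-agnostic illustration of the concept, here is a small NumPy sketch showing explicit byte-order handling.

```python
# Endianness illustration: the same float32 values serialized in little- vs big-endian order.
# (NumPy is used here only to show the concept; the role itself works in C/C++.)
import numpy as np

values = np.array([1.0, 2.5, -3.75], dtype="<f4")   # little-endian float32
big_endian = values.astype(">f4")                    # explicit conversion to big-endian

print(values.tobytes().hex())       # bytes as written by a little-endian producer
print(big_endian.tobytes().hex())   # same numbers, byte order swapped

# A consumer must interpret the buffer with the producer's byte order:
decoded = np.frombuffer(big_endian.tobytes(), dtype=">f4")
print(decoded)                      # [ 1.    2.5  -3.75]
```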
Posted 1 week ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: Senior CV Engineer Location: Gurugram Experience: 6–10 Years CTC: Up to ₹ 60LPA Industry: AI Product Overview: We are hiring for our esteemed client, a Series-A funded deep-tech company building a first-of-its-kind app-based operating system for Computer Vision. The team specializes in real-time video/image inference, distributed processing, and high-throughput data handling using advanced technologies and frameworks. Key Responsibilities: Lead design and implementation of complex CV pipelines (object detection, instance segmentation, industrial anomaly detection). Own major modules from concept to deployment ensuring low latency and high reliability. Transition algorithms from Python/PyTorch to optimized C++ edge GPU implementations using TensorRT, ONNX, and GStreamer. Collaborate with cross-functional teams to refine technical strategies and roadmaps. Drive long-term data and model strategies (synthetic data generation, validation frameworks). Mentor engineers and maintain high engineering standards. Required Skills & Qualifications: 6–10 years of experience in architecting and deploying CV systems. Expertise in multi-object tracking, object detection, semi/unsupervised learning. Proficiency in Python, PyTorch/TensorFlow, Modern C++, CUDA. Experience with real-time, low-latency model deployment on edge devices. Strong systems-level design thinking across ML lifecycles. Familiarity with MLOps (CI/CD for models, versioning, experiment tracking). Bachelor’s/Master’s degree in CS, EE, or related fields with strong ML and algorithmic foundations. (Preferred) Experience with NVIDIA DeepStream, GStreamer, LLMs/VLMs, open-source contributions.
Posted 1 week ago
3.0 - 6.0 years
4 Lacs
India
On-site
About MostEdge MostEdge empowers retailers with smart, trusted, and sustainable solutions to run their stores more efficiently. Through our Inventory Management Service, powered by the StockUPC app, we provide accurate, real-time insights that help stores track inventory, prevent shrink, and make smarter buying decisions. Our mission is to deliver trusted, profitable experiences—empowering retailers, partners and employees to accelerate commerce in a sustainable manner. Job Summary: We are seeking a highly skilled and motivated AI/ML Engineer with a specialization in Computer Vision & Unsupervised Learning to join our growing team. You will be responsible for building, optimizing, and deploying advanced video analytics solutions for smart surveillance applications, including real-time detection, facial recognition, and activity analysis. This role combines the core competencies of AI/ML modelling with the practical skills required to deploy and scale models in real-world production environments, both in the cloud and on edge devices. Key Responsibilities: AI/ML Development & Computer Vision Design, train, and evaluate models for: Face detection and recognition Object/person detection and tracking Intrusion and anomaly detection Human activity or pose recognition/estimation Work with models such as YOLOv8, DeepSORT, RetinaNet, Faster R-CNN, and InsightFace. Perform data preprocessing, augmentation, and annotation using tools like LabelImg, CVAT, or custom pipelines. Surveillance System Integration Integrate computer vision models with live CCTV/RTSP streams for real-time analytics. Develop components for motion detection, zone-based event alerts, person re-identification, and multi-camera coordination. Optimize solutions for low-latency inference on edge devices (Jetson Nano, Xavier, Intel Movidius, Coral TPU). Model Optimization & Deployment Convert and optimize trained models using ONNX, TensorRT, or OpenVINO for real-time inference. Build and deploy APIs using FastAPI, Flask, or TorchServe. Package applications using Docker and orchestrate deployments with Kubernetes. Automate model deployment workflows using CI/CD pipelines (GitHub Actions, Jenkins). Monitor model performance in production using Prometheus, Grafana, and log management tools. Manage model versioning, rollback strategies, and experiment tracking using MLflow or DVC. As an AI/ML Engineer, you should be well-versed in AI agent development and have fine-tuning experience. Collaboration & Documentation Work closely with backend developers, hardware engineers, and DevOps teams. Maintain clear documentation of ML pipelines, training results, and deployment practices. Stay current with emerging research and innovations in AI vision and MLOps. Required Qualifications: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field. 3–6 years of experience in AI/ML, with a strong portfolio in computer vision and machine learning. Hands-on experience with: Deep learning frameworks: PyTorch, TensorFlow Image/video processing: OpenCV, NumPy Detection and tracking frameworks: YOLOv8, DeepSORT, RetinaNet Solid understanding of deep learning architectures (CNNs, Transformers, Siamese Networks). Proven experience with real-time model deployment on cloud or edge environments. Strong Python programming skills and familiarity with Git, REST APIs, and DevOps tools. Preferred Qualifications: Experience with multi-camera synchronization and NVR/DVR systems.
Familiarity with ONVIF protocols and camera SDKs. Experience deploying AI models on Jetson Nano/Xavier, Intel NCS2, or Coral Edge TPU. Background in face recognition systems (e.g., InsightFace, FaceNet, Dlib). Understanding of security protocols and compliance in surveillance systems. Tools & Technologies: Languages & AI - Python, PyTorch, TensorFlow, OpenCV, NumPy, Scikit-learn Model Serving - FastAPI, Flask, TorchServe, TensorFlow Serving, REST/gRPC APIs Model Optimization - ONNX, TensorRT, OpenVINO, Pruning, Quantization Deployment - Docker, Kubernetes, Gunicorn, MLflow, DVC CI/CD & DevOps - GitHub Actions, Jenkins, GitLab CI Cloud & Edge - AWS SageMaker, Azure ML, GCP AI Platform, Jetson, Movidius, Coral TPU Monitoring - Prometheus, Grafana, ELK Stack, Sentry Annotation Tools - LabelImg, CVAT, Supervisely Benefits: Competitive compensation and performance-linked incentives. Work on cutting-edge surveillance and AI projects. Friendly and innovative work culture. Job Types: Full-time, Permanent Pay: From ₹400,000.00 per year Benefits: Health insurance Life insurance Paid sick time Paid time off Provident Fund Schedule: Evening shift Monday to Friday Morning shift Night shift Rotational shift US shift Weekend availability Supplemental Pay: Performance bonus Quarterly bonus Work Location: In person Application Deadline: 25/07/2025 Expected Start Date: 01/08/2025
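As a hedged sketch of the real-time surveillance pipeline described in this posting, the snippet below reads an RTSP stream with OpenCV and runs a YOLOv8 detector on each frame. The stream URL, weights file, and "person above 0.5 confidence" trigger are placeholders; tracking, zone logic, and alerting would hook in where the print statement is.

```python
# Real-time detection sketch: read an RTSP camera stream and run YOLOv8 on each frame.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                              # small pretrained detector
cap = cv2.VideoCapture("rtsp://camera.local/stream1")   # placeholder camera URL

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)[0]            # detections for this frame
    for box in results.boxes:
        cls_name = model.names[int(box.cls)]
        conf = float(box.conf)
        if cls_name == "person" and conf > 0.5:          # example intrusion/zone trigger
            x1, y1, x2, y2 = map(int, box.xyxy[0])
            print(f"person {conf:.2f} at ({x1},{y1})-({x2},{y2})")

cap.release()
```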
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
AI/ML Engineer – Core Algorithm and Model Expert 1. Role Objective: The engineer will be responsible for designing, developing, and optimizing advanced AI/ML models for computer vision, generative AI, Audio processing, predictive analysis and NLP applications. Must possess deep expertise in algorithm development and model deployment as production-ready products for naval applications. Also responsible for ensuring models are modular, reusable, and deployable in resource constrained environments. 2. Key Responsibilities: 2.1. Design and train models using Naval-specific data and deliver them in the form of end products 2.2. Fine-tune open-source LLMs (e.g. LLaMA, Qwen, Mistral, Whisper, Wav2Vec, Conformer models) for Navy-specific tasks. 2.3. Preprocess, label, and augment datasets. 2.4. Implement quantization, pruning, and compression for deployment-ready AI applications. 2.5. The engineer will be responsible for the development, training, fine-tuning, and optimization of Large Language Models (LLMs) and translation models for mission-critical AI applications of the Indian Navy. The candidate must possess a strong foundation in transformer-based architectures (e.g., BERT, GPT, LLaMA, mT5, NLLB) and hands-on experience with pretraining and fine-tuning methodologies such as Supervised Fine-Tuning (SFT), Instruction Tuning, Reinforcement Learning from Human Feedback (RLHF), and Parameter-Efficient Fine-Tuning (LoRA, QLoRA, Adapters). 2.6. Proficiency in building multilingual and domain-specific translation systems using techniques like backtranslation, domain adaptation, and knowledge distillation is essential. 2.7. The engineer should demonstrate practical expertise with libraries such as Hugging Face Transformers, PEFT, Fairseq, and OpenNMT. Knowledge of model compression, quantization, and deployment on GPU-enabled servers is highly desirable. Familiarity with MLOps, version control using Git, and cross-team integration practices is expected to ensure seamless interoperability with other AI modules. 2.8. Collaborate with Backend Engineer for integration via standard formats (ONNX, TorchScript). 2.9. Generate reusable inference modules that can be plugged into microservices or edge devices. 2.10. Maintain reproducible pipelines (e.g., with MLFlow, DVC, Weights & Biases). 3. Educational Qualifications Essential Requirements: 3.1. B Tech / M.Tech in Computer Science, AI/ML, Data Science, Statistics or related field with exceptional academic record. 3.2. Minimum 75% marks or 8.0 CGPA in relevant engineering disciplines. Desired Specialized Certifications: 3.3. Professional ML certifications from Google, AWS, Microsoft, or NVIDIA 3.4. Deep Learning Specialization. 3.5. Computer Vision or NLP specialization certificates. 3.6. TensorFlow/ PyTorch Professional Certification. 4. Core Skills & Tools: 4.1. Languages: Python (must), C++/Rust. 4.2. Frameworks: PyTorch, TensorFlow, Hugging Face Transformers. 4.3. ML Concepts: Transfer learning, RAG, XAI (SHAP/LIME), reinforcement learning LLM finetuning, SFT, RLHF, LoRA, QLorA and PEFT. 4.4. Optimized Inference: ONNX Runtime, TensorRT, TorchScript. 4.5. Data Tooling: Pandas, NumPy, Scikit-learn, OpenCV. 4.6. Security Awareness: Data sanitization, adversarial robustness, model watermarking. 5. Core AI/ML Competencies: 5.1. Deep Learning Architectures: CNNs, RNNs, LSTMs, GRUs, Transformers, GANs, VAEs, Diffusion Models 5.2. 
Computer Vision: Object detection (YOLO, R-CNN), semantic segmentation, image classification, optical character recognition, facial recognition, anomaly detection. 5.3. Natural Language Processing: BERT, GPT models, sentiment analysis, named entity recognition, machine translation, text summarization, chatbot development. 5.4. Generative AI: Large Language Models (LLMs), prompt engineering, fine-tuning, Quantization, RAG systems, multimodal AI, stable diffusion models. 5.5. Advanced Algorithms: Reinforcement learning, federated learning, transfer learning, few-shot learning, meta-learning 6. Programming & Frameworks: 6.1. Languages: Python (expert level), R, Julia, C++ for performance optimization. 6.2. ML Frameworks: TensorFlow, PyTorch, JAX, Hugging Face Transformers, OpenCV, NLTK, spaCy. 6.3. Scientific Computing: NumPy, SciPy, Pandas, Matplotlib, Seaborn, Plotly 6.4. Distributed Training: Horovod, DeepSpeed, FairScale, PyTorch Lightning 7. Model Development & Optimization: 7.1. Hyperparameter tuning using Optuna, Ray Tune, or Weights & Biases etc. 7.2. Model compression techniques (quantization, pruning, distillation). 7.3. ONNX model conversion and optimization. 8. Generative AI & NLP Applications: 8.1. Intelligence report analysis and summarization. 8.2. Multilingual radio communication translation. 8.3. Voice command systems for naval equipment. 8.4. Automated documentation and report generation. 8.5. Synthetic data generation for training simulations. 8.6. Scenario generation for naval training exercises. 8.7. Maritime intelligence synthesis and briefing generation. 9. Experience Requirements 9.1. Hands-on experience with at least 2 major AI domains. 9.2. Experience deploying models in production environments. 9.3. Contribution to open-source AI projects. 9.4. Led development of multiple end-to-end AI products. 9.5. Experience scaling AI solutions for large user bases. 9.6. Track record of optimizing models for real-time applications. 9.7. Experience mentoring technical teams 10. Product Development Skills 10.1. End-to-end ML pipeline development (data ingestion to model serving). 10.2. User feedback integration for model improvement. 10.3. Cross-platform model deployment (cloud, edge, mobile) 10.4. API design for ML model integration 11. Cross-Compatibility Requirements: 11.1. Define model interfaces (input/output schema) for frontend/backend use. 11.2. Build CLI and REST-compatible inference tools. 11.3. Maintain shared code libraries (Git) that backend/frontend teams can directly call. 11.4. Joint debugging and model-in-the-loop testing with UI and backend teams
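To make the LoRA/PEFT fine-tuning listed in the core skills concrete, here is a hedged sketch of attaching a LoRA adapter to a small causal LM with the PEFT library. The base checkpoint, rank, and target module names are illustrative assumptions and differ per architecture.

```python
# LoRA sketch: wrap a small causal LM with a parameter-efficient adapter using PEFT.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

lora_cfg = LoraConfig(
    r=8,                          # adapter rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],    # attention projection in GPT-2; varies by model family
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()   # only the small adapter weights are trainable

# From here, a standard Trainer or custom loop fine-tunes on domain-specific text,
# and the adapter can be merged or shipped separately for deployment.
```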
Posted 1 week ago
15.0 years
0 Lacs
Thane, Maharashtra
On-site
We are looking for a Director of Engineering (AI Systems & Secure Platforms) to join our client's Core Engineering team at Thane (Maharashtra – India). The ideal candidate should have 12–15+ years of experience in architecting and deploying AI systems at scale, with deep expertise in agentic AI workflows, LLMs, RAG, Computer Vision, and secure mobile/wearable platforms. Top 3 Daily Tasks: Architect, optimize, and deploy LLMs, RAG pipelines, and Computer Vision models for smart glasses and other edge devices. Design and orchestrate agentic AI workflows—enabling autonomous agents with planning, tool usage, error handling, and closed feedback loops. Collaborate across AI, Firmware, Security, Mobile, Product, and Design teams to embed “invisible intelligence” within secure wearable systems. Must have 12–15+ years of experience in Applied AI, Deep Learning, Edge AI deployment, Secure Mobile Systems, and Agentic AI Architecture. Must have: Programming languages: Python, C/C++, Java (Android), Kotlin, JavaScript/Node.js, Swift, Objective-C, CUDA, Shell scripting Expert in TensorFlow, PyTorch, ONNX, HuggingFace; model optimization with TensorRT, TFLite Deep experience with LLMs, RAG pipelines, vector DBs (FAISS, Milvus) Proficient in agentic AI workflows—multi-agent orchestration, planning, feedback loops Strong in privacy-preserving AI (federated learning, differential privacy) Secure real-time comms (WebRTC, SIP, RTP) Nice to have: Experience with MCP or similar protocol frameworks Background in wearables/XR or smart glass AI platforms Expertise in platform security architectures (sandboxing, auditability)
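For the vector-database requirement named above (FAISS, Milvus), a minimal hedged FAISS sketch shows the index-and-search pattern that RAG pipelines rely on. The dimensions and vectors here are synthetic; at production scale an approximate index (e.g., IVF or HNSW) would replace the exact one.

```python
# Minimal FAISS sketch: build an exact L2 index over synthetic embeddings and query it.
import faiss
import numpy as np

dim = 128
rng = np.random.default_rng(0)
corpus = rng.random((10_000, dim), dtype=np.float32)   # stand-in for document embeddings

index = faiss.IndexFlatL2(dim)
index.add(corpus)

query = rng.random((1, dim), dtype=np.float32)
distances, ids = index.search(query, 5)                # 5 nearest stored vectors
print(ids[0], distances[0])
```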
Posted 1 week ago
8.0 - 13.0 years
14 - 19 Lacs
Bengaluru
Work from Office
Senior Staff Engineer/Principal Engineer/Manager - AIML/Hardware Accelerators Job Area: Engineering Group, Engineering Group > Systems Engineering General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Systems Engineer, you will research, design, develop, simulate, and/or validate systems-level software, hardware, architecture, algorithms, and solutions that enables the development of cutting-edge technology. Qualcomm Systems Engineers collaborate across functional teams to meet and exceed system-level requirements and standards. Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 8+ years of Systems Engineering or related work experience. OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 7+ years of Systems Engineering or related work experience. OR PhD in Engineering, Information Systems, Computer Science, or related field and 6+ years of Systems Engineering or related work experience. Senior Staff/Principal Engineer/Manager Machine Learning We are looking for a Senior Staff/Principal AI/ML Engineer/Manager with expertise in model inference , optimization , debugging , and hardware acceleration . This role will focus on building efficient AI inference systems, debugging deep learning models, optimizing AI workloads for low latency, and accelerating deployment across diverse hardware platforms. In addition to hands-on engineering, this role involves cutting-edge research in efficient deep learning, model compression, quantization, and AI hardware-aware optimization techniques . You will explore and implement state-of-the-art AI acceleration methods while collaborating with researchers, industry experts, and open-source communities to push the boundaries of AI performance. This is an exciting opportunity for someone passionate about both applied AI development and AI research , with a strong focus on real-world deployment, model interpretability, and high-performance inference . Education & Experience: 17+ years of experience in AI/ML development, with at least 5 years in model inference, optimization, debugging, and Python-based AI deployment. Masters or Ph.D. in Computer Science, Machine Learning, AI Leadership & Collaboration Lead a team of AI engineers in Python-based AI inference development . Collaborate with ML researchers, software engineers, and DevOps teams to deploy optimized AI solutions. Define and enforce best practices for debugging and optimizing AI models Key Responsibilities Model Optimization & Quantization Optimize deep learning models using quantization (INT8, INT4, mixed precision etc), pruning, and knowledge distillation . Implement Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT) for deployment. Familiarity with TensorRT, ONNX Runtime, OpenVINO, TVM AI Hardware Acceleration & Deployment Optimize AI workloads for Qualcomm Hexagon DSP, GPUs (CUDA, Tensor Cores), TPUs, NPUs, FPGAs, Habana Gaudi, Apple Neural Engine . Leverage Python APIs for hardware-specific acceleration , including cuDNN, XLA, MLIR . Benchmark models on AI hardware architectures and debug performance issues AI Research & Innovation Conduct state-of-the-art research on AI inference efficiency, model compression, low-bit precision, sparse computing, and algorithmic acceleration . 
Explore new deep learning architectures (Sparse Transformers, Mixture of Experts, Flash Attention) for better inference performance.
Contribute to open-source AI projects and publish findings in top-tier ML conferences (NeurIPS, ICML, CVPR).
Collaborate with hardware vendors and AI research teams to optimize deep learning models for next-gen AI accelerators.
Details of Expertise:
Experience optimizing LLMs, LVMs, and LMMs for inference.
Experience with deep learning frameworks: TensorFlow, PyTorch, JAX, ONNX.
Advanced skills in model quantization, pruning, and compression.
Proficiency in CUDA programming and Python GPU acceleration using cuPy, Numba, and TensorRT.
Hands-on experience with ML inference runtimes (TensorRT, TVM, ONNX Runtime, OpenVINO).
Experience working with runtime delegates (TFLite, ONNX, Qualcomm).
Strong expertise in Python programming, writing optimized and scalable AI code.
Experience with debugging AI models, including examining computation graphs using Netron Viewer, TensorBoard, and ONNX Runtime Debugger.
Strong debugging skills using profiling tools (PyTorch Profiler, TensorFlow Profiler, cProfile, Nsight Systems, perf, Py-Spy).
Expertise in cloud-based AI inference (AWS Inferentia, Azure ML, GCP AI Platform, Habana Gaudi).
Knowledge of hardware-aware optimizations (oneDNN, XLA, cuDNN, ROCm, MLIR, SparseML).
Contributions to the open-source community.
Publications in international forums, conferences, and journals.
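Illustration (not part of the posting): a minimal sketch of post-training dynamic quantization in PyTorch, the kind of PTQ step the role describes; the toy model, layer sizes, and input shape are assumptions chosen only for demonstration. Static PTQ with calibration data or QAT would follow the same pattern with additional observer and calibration steps.
```python
import torch
import torch.nn as nn

# Toy stand-in network (hypothetical); a real workload would be an LLM or CNN block.
model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Post-training dynamic quantization: Linear weights are stored in INT8,
# activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
with torch.no_grad():
    print(quantized(x).shape)  # expected: torch.Size([1, 10])
```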
Posted 1 week ago
15.0 years
6 - 8 Lacs
Bengaluru
On-site
Company Description
Bosch Global Software Technologies Private Limited is a 100% owned subsidiary of Robert Bosch GmbH, one of the world's leading global suppliers of technology and services, offering end-to-end Engineering, IT and Business Solutions. With over 28,200+ associates, it is the largest software development center of Bosch outside Germany, making it the Technology Powerhouse of Bosch in India with a global footprint and presence in the US, Europe and the Asia Pacific region.
Job Description
Job Summary - Bosch Research is seeking a highly accomplished and technically authoritative Software Expert in AI/ML Architecture to define, evolve, and lead the technical foundations of enterprise-grade, AI-driven systems. This is a technical leadership role without people management responsibilities, intended for professionals with deep expertise in software architecture, AI/ML systems, and large-scale engineering applications and their end-to-end deliveries. You will own the architecture and technical delivery of complex software solutions—ensuring they are robust, scalable, and capable of serving diverse business domains and datasets. The ideal candidate demonstrates mastery in cloud-native engineering, MLOps, Azure ML, and the integration of AI algorithms (Computer Vision, Text, Timeseries, ML, etc.), LLMs, Agentic AI, and other advanced AI capabilities into secure and high-performing software environments.
Roles & Responsibilities:
Technical Architecture and Solution Ownership
Define, evolve, and drive software architecture for AI-centric platforms across industrial and enterprise use cases.
Architect for scalability, security, availability, and multi-domain adaptability, accommodating diverse data modalities and system constraints.
Embed non-functional requirements (NFRs)—latency, throughput, fault tolerance, observability, security, and maintainability—into all architectural designs.
Incorporate LLM, Agentic AI, and foundation model design patterns where appropriate, ensuring performance and operational compliance in real-world deployments.
Enterprise Delivery and Vision
Lead the translation of research and experimentation into production-grade solutions with measurable impact on business KPIs (both top-line growth and bottom-line efficiency).
Perform deep-dive gap analysis in existing software and data pipelines and develop long-term architectural solutions and migration strategies.
Build architectures that thrive under enterprise constraints, such as regulatory compliance, resource limits, multi-tenancy, and lifecycle governance.
AI/ML Engineering and MLOps
Design and implement scalable MLOps workflows, integrating CI/CD pipelines, experiment tracking, automated validation, and model retraining loops.
Operationalize AI pipelines using Azure Machine Learning (Azure ML) services and ensure seamless collaboration with data science and platform teams.
Ensure architectures accommodate responsible AI, model explainability, and observability layers.
Software Quality and Engineering Discipline
Champion software engineering best practices with rigorous attention to:
Code quality through static/dynamic analysis and automated quality metrics
Code reviews, pair programming, and technical design documentation
Unit, integration, and system testing, backed by frameworks like pytest, unittest, or Robot Framework
Code quality tools such as SonarQube, CodeQL, or similar
Drive the culture of traceability, testability, and reliability, embedding quality gates into the development lifecycle.
Own the technical validation lifecycle, ensuring reproducibility and continuous monitoring post-deployment.
Cloud-Native AI Infrastructure
Architect AI services with cloud-native principles, including microservices, containers, and service mesh.
Leverage Azure ML, Kubernetes, Terraform, and cloud-specific SDKs for full lifecycle management.
Ensure compatibility with hybrid-cloud/on-premise environments and support constraints typical of engineering and industrial domains.
Qualifications
Educational qualification: Master's or Ph.D. in Computer Science, AI/ML, Software Engineering, or a related technical discipline
Experience: 15+ years in software development, including:
Deep experience in AI/ML-based software systems
Strong architectural leadership in enterprise software design
Delivery experience in engineering-heavy and data-rich environments
Mandatory/Required Skills:
Programming: Python (required), Java, JS, frontend/backend technologies, databases; C++ (bonus)
AI/ML: TensorFlow, PyTorch, ONNX, scikit-learn, MLflow (or equivalents)
LLM/GenAI: Knowledge of transformers, attention mechanisms, fine-tuning, prompt engineering
Agentic AI: Familiarity with planning frameworks, autonomous agents, and orchestration layers
Cloud Platforms: Azure (preferred), AWS or GCP; experience with Azure ML Studio and SDKs
Data & Pipelines: Airflow, Kafka, Spark, Delta Lake, Parquet, SQL/NoSQL
Architecture: Microservices, event-driven design, API gateways, gRPC/REST, secure multi-tenancy
DevOps/MLOps: GitOps, Jenkins, Azure DevOps, Terraform, containerization (Docker, Helm, K8s)
What You Bring
Proven ability to bridge research and engineering in the AI/ML space with strong architectural clarity.
Ability to translate ambiguous requirements into scalable design patterns.
Deep understanding of the enterprise SDLC, including review cycles, compliance, testing, and cross-functional alignment.
A mindset focused on continuous improvement, metrics-driven development, and transparent technical decision-making.
Additional Information
Why Bosch Research?
At Bosch Research, you will be empowered to lead the architectural blueprint of AI/ML software products that make a tangible difference in industrial innovation. You will have the autonomy to architect with vision, scale with quality, and deliver with rigor—while collaborating with a global community of experts in AI, engineering, and embedded systems.
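Illustration (not part of the posting): a minimal sketch of the experiment-tracking side of an MLOps workflow using MLflow; the dataset, model, parameters, and run name are placeholders, and a real pipeline would add validation gates, registry promotion, and retraining triggers.
```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

with mlflow.start_run(run_name="baseline-rf"):
    params = {"n_estimators": 100, "max_depth": 5}
    clf = RandomForestClassifier(**params).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))

    # Log parameters, the evaluation metric, and the trained model artifact against this run.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(clf, "model")
```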
Posted 1 week ago
4.0 years
4 - 7 Lacs
Bengaluru
On-site
Change the world. Love your job. Your career starts here! This is an exciting opportunity to design and develop innovative software solutions that drive TI's revolutionary product lines. We change lives by working on the technologies that people use every day. Are you ready for the challenge?
As a Software Engineer, you'll become a key contributor, where your skills and input make a big difference. In this role, you'll design embedded software and development tools that will be used to test products. You'll write code that tells chips how to operate in revolutionary new ways. And, you'll work closely with business partners and customers, as well as TI's marketing, systems and applications engineering teams, to collaborate and solve business problems.
QUALIFICATIONS
Minimum Requirements:
4-8 years of relevant experience
Degree in Electrical Engineering, Computer Engineering, Computer Science, Electrical and Computer Engineering, or related field
Strong embedded firmware skills and experience
Strong Assembly, C and C++ programming skills
Preferred Qualifications:
Knowledge of software engineering processes and the full software development lifecycle
Demonstrated strong analytical and problem-solving skills
Strong verbal and written communication skills
Ability to work in teams and collaborate effectively with people in different functions
Strong time management skills that enable on-time project delivery
Demonstrated ability to build strong, influential relationships
Ability to work effectively in a fast-paced and rapidly changing environment
Ability to take the initiative and drive for results
Great programmer: programming skills in C/C++ and Python, modular and object-oriented programming skills, familiarity with build systems (make, cmake), familiarity with Linux
In-depth knowledge of embedded systems: VLIW and SIMD processor architecture, DMA, cache, memory architecture, inter-process communication
Working experience in machine learning technologies such as CNNs, transformers, and quantization algorithms and approaches for camera-based applications on embedded systems
Working experience with DSPs (preferably TI DSPs) and hardware development boards/EVMs for image/vision-based processing algorithms
Good knowledge of machine learning frameworks (PyTorch), inference solutions and exchange formats (ONNX, ONNX Runtime, protobufs)
Basic knowledge of RTOS and Linux with exposure to debugging of embedded systems; familiarity with heterogeneous core architecture is an added advantage
Well versed with the software development life cycle and efficient use of associated tools: Git, JIRA, Bitbucket, Jenkins, containers (Docker), CI/CD
Strong communication, documentation and writing skills
ABOUT US
Why TI?
Engineer your future. We empower our employees to truly own their career and development. Come collaborate with some of the smartest people in the world to shape the future of electronics.
We're different by design. Diverse backgrounds and perspectives are what push innovation forward and what make TI stronger. We value each and every voice, and look forward to hearing yours. Meet the people of TI
Benefits that benefit you. We offer competitive pay and benefits designed to help you and your family live your best life. Your well-being is important to us.
About Texas Instruments
Texas Instruments Incorporated (Nasdaq: TXN) is a global semiconductor company that designs, manufactures and sells analog and embedded processing chips for markets such as industrial, automotive, personal electronics, communications equipment and enterprise systems. At our core, we have a passion to create a better world by making electronics more affordable through semiconductors. This passion is alive today as each generation of innovation builds upon the last to make our technology more reliable, more affordable and lower power, making it possible for semiconductors to go into electronics everywhere. Learn more at TI.com.
Texas Instruments is an equal opportunity employer and supports a diverse, inclusive work environment. If you are interested in this position, please apply to this requisition.
TI does not make recruiting or hiring decisions based on citizenship, immigration status or national origin. However, if TI determines that information access or export control restrictions based upon applicable laws and regulations would prohibit you from working in this position without first obtaining an export license, TI expressly reserves the right not to seek such a license for you and either offer you a different position that does not require an export license or decline to move forward with your employment.
JOB INFO
Job Identification: 25001868
Job Category: Engineering - Product Dev
Posting Date: 07/20/2025, 04:49 AM
Degree Level: Bachelor's Degree
Locations: BAN4 2, 3rd and 4th Floors, Bangalore, 560093, IN
ECL/GTC Required: Yes
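Illustration (not part of the posting): a hedged sketch of exporting a stand-in PyTorch CNN to ONNX and cross-checking it with ONNX Runtime on the host before moving to an embedded target; the model, file name, and input shape are invented for demonstration.
```python
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

# Small stand-in CNN (hypothetical) for a camera-based classifier.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 4),
).eval()

dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["input"], output_names=["logits"], opset_version=17,
)

# Sanity-check that ONNX Runtime reproduces the PyTorch output on the host
# before deploying the exported graph to the embedded runtime.
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
ort_out = sess.run(None, {"input": dummy.numpy()})[0]
torch_out = model(dummy).detach().numpy()
print("max abs diff:", float(np.max(np.abs(ort_out - torch_out))))
```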
Posted 1 week ago
12.0 years
0 Lacs
Thane, Maharashtra, India
On-site
We are looking for a Director of Engineering (AI Systems & Secure Platforms) to join our Core Engineering team at Thane (Maharashtra - India). The ideal candidate should have 12-15+ years of experience in architecting and deploying AI systems at scale, with deep expertise in agentic AI workflows, LLMs, RAG, Computer Vision, and secure mobile/wearable platforms. Join us to craft the next generation of smart eyewear—by leading intelligent, autonomous, real-time workflows that operate seamlessly at the edge. Read more here: The smartphone era is peaking. The next computing revolution is here.
Top 3 Daily Tasks:
Architect, optimize, and deploy LLMs, RAG pipelines, and Computer Vision models for smart glasses and other edge devices
Design and orchestrate agentic AI workflows—enabling autonomous agents with planning, tool usage, error handling, and closed feedback loops
Collaborate across AI, Firmware, Security, Mobile, Product, and Design teams to embed "invisible intelligence" within secure wearable systems
Minimum Work Experience Required:
12-15+ years of experience in Applied AI, Deep Learning, Edge AI deployment, Secure Mobile Systems, and Agentic AI Architecture.
Top 5 Skills You Should Possess:
Expertise in TensorFlow, PyTorch, HuggingFace, ONNX, and optimization tools like TensorRT, TFLite
Strong hands-on experience with LLMs, Retrieval-Augmented Generation (RAG), and Vector Databases (FAISS, Milvus)
Deep understanding of Android/iOS integration, AOSP customization, and secure communication (WebRTC, SIP, RTP)
Experience in Privacy-Preserving AI (Federated Learning, Differential Privacy) and secure AI APIs
Proven track record in architecting and deploying agentic AI systems—multi-agent workflows, adaptive planning, tool chaining, and MCP (Model Context Protocol)
Cross-Functional Collaboration Excellence:
Partner with Platform & Security teams to define secure MCP server blueprints exposing device tools, sensors, and services with strong governance and traceability
Coordinate with Mobile and AI teams to integrate agentic workflows across Android, iOS, and AOSP environments
Work with Firmware and Product teams to define real-time sensor-agent interactions, secure data flows, and adaptive behavior in smart wearables
What You'll Be Creating:
Agentic, MCP-enabled pipelines for smart glasses—featuring intelligent agents for vision, context, planning, and secure execution
Privacy-first AI systems combining edge compute, federated learning, and cloud integration
A scalable, secure wearable AI platform that reflects our commitment to building purposeful and conscious technology
Preferred Skills:
Familiarity with secure real-time protocols: WebRTC, SIP, RTP
Programming proficiency in C, C++, Java, Python, Swift, Kotlin, Objective-C, Node.js, Shell Scripting, CUDA (preferred)
Experience designing AI platforms for wearables/XR with real-time and low-latency constraints
Deep knowledge of MCP deployment patterns—secure token handling, audit trails, permission governance
Proven leadership in managing cross-functional tech teams across AI, Firmware, Product, Mobile, and Security
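Illustration (not part of the posting): a minimal retrieval sketch for a RAG pipeline using sentence embeddings and a FAISS index; the documents, model name, and query are hypothetical stand-ins, and a production system would add chunking, metadata filtering, and an LLM generation step.
```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical domain documents; in practice these come from parsed manuals or tickets.
docs = [
    "The device supports INT8 quantized models exported to ONNX.",
    "Firmware updates are delivered over a secure channel.",
    "The camera pipeline runs on-device object detection at 30 FPS.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(docs, normalize_embeddings=True).astype("float32")

# Inner product on normalized vectors is equivalent to cosine similarity.
index = faiss.IndexFlatIP(doc_vecs.shape[1])
index.add(doc_vecs)

query = "How fast does object detection run on the device?"
q_vec = encoder.encode([query], normalize_embeddings=True).astype("float32")
scores, ids = index.search(q_vec, 2)

# The retrieved passages would be placed into the LLM prompt as grounding context.
print("\n".join(docs[i] for i in ids[0]))
```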
Posted 1 week ago
3.0 years
0 Lacs
Greater Chennai Area
On-site
Job ID: 39582
Position Summary
A rewarding career at HID Global beckons you! We are looking for an AI/ML Engineer who is responsible for designing, developing, and deploying advanced AI/ML solutions to solve complex business challenges. This role requires expertise in machine learning, deep learning, MLOps, and AI model optimization, with a focus on building scalable, high-performance AI systems. As an AI/ML Engineer, you will work closely with data engineers, software developers, and business stakeholders to integrate AI-driven insights into real-world applications. You will be responsible for model development, system architecture, cloud deployment, and ensuring responsible AI adoption. We are a leading company and the trusted source for innovative HID Global products, solutions and services that help millions of customers around the globe create, manage and use secure identities.
Roles & Responsibilities:
Design, develop, and deploy robust and scalable AI/ML models in production environments.
Collaborate with business stakeholders to identify AI/ML opportunities and define measurable success metrics.
Design and build Retrieval-Augmented Generation (RAG) pipelines integrating vector stores, semantic search, and document parsing for domain-specific knowledge retrieval.
Integrate Multimodal Conversational AI platforms (MCP) including voice, vision, and text to deliver rich user interactions.
Drive innovation through PoCs, benchmarking, and experiments with emerging models and architectures.
Optimize models for performance, latency and scalability.
Build data pipelines and workflows to support model training and evaluation.
Conduct research and experimentation on state-of-the-art techniques (DL, NLP, time series, CV).
Partner with MLOps and DevOps teams to implement best practices in model monitoring, versioning and re-training.
Lead code reviews, architecture discussions and mentor junior and peer engineers.
Architect and implement end-to-end AI/ML pipelines, ensuring scalability and efficiency.
Deploy models in cloud-based (AWS, Azure, GCP) or on-premises environments using tools like Docker, Kubernetes, TensorFlow Serving, or ONNX.
Ensure data integrity, quality, and preprocessing best practices for AI/ML model development.
Ensure compliance with AI ethics guidelines, data privacy laws (GDPR, CCPA), and corporate AI governance.
Work closely with data engineers, software developers, and domain experts to integrate AI into existing systems.
Conduct AI/ML training sessions for internal teams to improve AI literacy within the organization.
Strong analytical and problem-solving mindset.
Technical Requirements:
Strong expertise in AI/ML engineering and software development.
Strong experience with RAG architecture and vector databases.
Proficiency in Python and hands-on experience using ML frameworks (TensorFlow, PyTorch, scikit-learn, XGBoost, etc.).
Familiarity with MCPs like Google Dialogflow, Rasa, Amazon Lex, or custom-built agents using LLM orchestration.
Cloud-based AI/ML experience (AWS SageMaker, Azure ML, GCP Vertex AI, etc.).
Solid understanding of the AI/ML life cycle: data preprocessing, feature engineering, model selection, training, validation and deployment.
Experience with production-grade ML systems (model serving, APIs, pipelines).
Familiarity with data engineering tools (Spark, Kafka, Airflow, etc.).
Strong knowledge of statistical modeling, NLP, CV, recommendation systems, anomaly detection and time series forecasting.
Hands-on software engineering experience with knowledge of version control, testing and CI/CD.
Hands-on experience in deploying ML models in production using Docker, Kubernetes, TensorFlow Serving, ONNX, and MLflow.
Experience in MLOps and CI/CD for ML pipelines, including monitoring, retraining, and model drift detection.
Proficiency in scaling AI solutions in cloud environments (AWS, Azure and GCP).
Experience in data preprocessing, feature engineering, and dimensionality reduction.
Exposure to data privacy, compliance and secure ML practices.
Education and/or Experience:
Graduation or a master's degree in computer science, information technology, or AI/ML/data science
3+ years of hands-on experience in AI/ML development, deployment and optimization
Experience in leading AI/ML teams and mentoring junior engineers.
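Illustration (not part of the posting): a small, self-contained sketch of the model drift detection mentioned above, using a population stability index (PSI) check between training-time and live feature distributions; the synthetic data and the rule-of-thumb thresholds in the docstring are assumptions for demonstration only.
```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a training-time feature distribution and live traffic.
    Common rule of thumb (an assumption, tune per use case):
    < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 likely drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip empty buckets to avoid division by zero and log(0).
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)   # feature as seen during training
live_scores = rng.normal(0.4, 1.2, 10_000)    # same feature in production traffic
print(f"PSI: {population_stability_index(train_scores, live_scores):.3f}")
```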
Posted 1 week ago
15.0 years
0 Lacs
Thane, Maharashtra, India
On-site
We are looking for a Director of Engineering (AI Systems & Secure Platforms) to join our client's Core Engineering team at Thane (Maharashtra – India). The ideal candidate should have 12–15+ years of experience in architecting and deploying AI systems at scale, with deep expertise in agentic AI workflows, LLMs, RAG, Computer Vision, and secure mobile/wearable platforms.
Top 3 Daily Tasks:
Architect, optimize, and deploy LLMs, RAG pipelines, and Computer Vision models for smart glasses and other edge devices.
Design and orchestrate agentic AI workflows—enabling autonomous agents with planning, tool usage, error handling, and closed feedback loops.
Collaborate across AI, Firmware, Security, Mobile, Product, and Design teams to embed "invisible intelligence" within secure wearable systems.
Must have 12–15+ years of experience in Applied AI, Deep Learning, Edge AI deployment, Secure Mobile Systems, and Agentic AI Architecture.
Must have:
Programming languages: Python, C/C++, Java (Android), Kotlin, JavaScript/Node.js, Swift, Objective-C, CUDA, Shell scripting
Expert in TensorFlow, PyTorch, ONNX, HuggingFace; model optimization with TensorRT, TFLite
Deep experience with LLMs, RAG pipelines, vector DBs (FAISS, Milvus)
Proficient in agentic AI workflows—multi-agent orchestration, planning, feedback loops
Strong in privacy-preserving AI (federated learning, differential privacy)
Secure real-time comms (WebRTC, SIP, RTP)
Nice to have:
Experience with MCP or similar protocol frameworks
Background in wearables/XR or smart glass AI platforms
Expertise in platform security architectures (sandboxing, auditability)
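Illustration (not part of the posting): a deliberately tiny plan-act-observe loop showing the shape of an agentic workflow in plain Python; the tool names, planner rules, and goal string are invented, and in practice the tools would be exposed through an MCP-style server with governance and audit trails.
```python
from typing import Callable, Dict, List, Optional

# Hypothetical on-device tools; real systems would expose these via an MCP-style server.
TOOLS: Dict[str, Callable[[str], str]] = {
    "battery_status": lambda _goal: "battery at 78%",
    "ambient_light": lambda _goal: "ambient light is low",
}

def plan(goal: str, observations: List[str]) -> Optional[str]:
    """Toy planner: pick the next tool the goal still needs, or stop."""
    if "battery" in goal and not any("battery" in o for o in observations):
        return "battery_status"
    if "display" in goal and not any("light" in o for o in observations):
        return "ambient_light"
    return None

def run_agent(goal: str, max_steps: int = 5) -> List[str]:
    observations: List[str] = []
    for _ in range(max_steps):          # bounded loop as a simple error-handling guard
        tool = plan(goal, observations)
        if tool is None:
            break
        observations.append(TOOLS[tool](goal))  # act, then feed the result back (closed loop)
    return observations

print(run_agent("adjust display brightness and report battery"))
```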
Posted 1 week ago
15.0 years
0 Lacs
Thane, Maharashtra
On-site
We are looking for a Director of Engineering (AI Systems & Secure Platforms) to join our Core Engineering team at Thane (Maharashtra – India). The ideal candidate should have 12–15+ years of experience in architecting and deploying AI systems at scale, with deep expertise in agentic AI workflows, LLMs, RAG, Computer Vision, and secure mobile/wearable platforms. Join us to craft the next generation of smart eyewear—by leading intelligent, autonomous, real-time workflows that operate seamlessly at the edge. Read more here: The smartphone era is peaking. The next computing revolution is here.
Top 3 Daily Tasks:
Architect, optimize, and deploy LLMs, RAG pipelines, and Computer Vision models for smart glasses and other edge devices.
Design and orchestrate agentic AI workflows—enabling autonomous agents with planning, tool usage, error handling, and closed feedback loops.
Collaborate across AI, Firmware, Security, Mobile, Product, and Design teams to embed "invisible intelligence" within secure wearable systems.
Minimum Work Experience Required:
12–15+ years of experience in Applied AI, Deep Learning, Edge AI deployment, Secure Mobile Systems, and Agentic AI Architecture.
Top 5 Skills You Should Possess:
Expertise in TensorFlow, PyTorch, HuggingFace, ONNX, and optimization tools like TensorRT, TFLite.
Strong hands-on experience with LLMs, Retrieval-Augmented Generation (RAG), and Vector Databases (FAISS, Milvus).
Deep understanding of Android/iOS integration, AOSP customization, and secure communication (WebRTC, SIP, RTP).
Experience in Privacy-Preserving AI (Federated Learning, Differential Privacy) and secure AI APIs.
Proven track record in architecting and deploying agentic AI systems—multi-agent workflows, adaptive planning, tool chaining, and MCP (Model Context Protocol).
Cross-Functional Collaboration Excellence:
Partner with Platform & Security teams to define secure MCP server blueprints exposing device tools, sensors, and services with strong governance and traceability.
Coordinate with Mobile and AI teams to integrate agentic workflows across Android, iOS, and AOSP environments.
Work with Firmware and Product teams to define real-time sensor-agent interactions, secure data flows, and adaptive behavior in smart wearables.
What You'll Be Creating:
Agentic, MCP-enabled pipelines for smart glasses—featuring intelligent agents for vision, context, planning, and secure execution.
Privacy-first AI systems combining edge compute, federated learning, and cloud integration.
A scalable, secure wearable AI platform that reflects our commitment to building purposeful and conscious technology.
Preferred Skills:
Familiarity with secure real-time protocols: WebRTC, SIP, RTP.
Programming proficiency in C, C++, Java, Python, Swift, Kotlin, Objective-C, Node.js, Shell Scripting, CUDA (preferred).
Experience designing AI platforms for wearables/XR with real-time and low-latency constraints.
Deep knowledge of MCP deployment patterns—secure token handling, audit trails, permission governance.
Proven leadership in managing cross-functional tech teams across AI, Firmware, Product, Mobile, and Security
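Illustration (not part of the posting): a hedged sketch of converting a stand-in Keras model into an INT8-quantized TFLite flatbuffer for edge deployment, the TFLite optimization path named above; the model architecture, calibration generator, and output file name are assumptions for demonstration.
```python
import numpy as np
import tensorflow as tf

# Small stand-in Keras model (hypothetical) for an on-device classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4),
])

def representative_data():
    # Calibration samples for post-training quantization; real images in practice.
    for _ in range(32):
        yield [np.random.rand(1, 64, 64, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
tflite_model = converter.convert()

with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```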
Posted 1 week ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
Change the world. Love your job. Your career starts here! This is an exciting opportunity to design and develop innovative software solutions that drive TI's revolutionary product lines. We change lives by working on the technologies that people use every day. Are you ready for the challenge?
As a Software Engineer, you'll become a key contributor, where your skills and input make a big difference. In this role, you'll design embedded software and development tools that will be used to test products. You'll write code that tells chips how to operate in revolutionary new ways. And, you'll work closely with business partners and customers, as well as TI's marketing, systems and applications engineering teams, to collaborate and solve business problems.
Qualifications
Minimum Requirements:
4-8 years of relevant experience
Degree in Electrical Engineering, Computer Engineering, Computer Science, Electrical and Computer Engineering, or related field
Strong embedded firmware skills and experience
Strong Assembly, C and C++ programming skills
Preferred Qualifications
Knowledge of software engineering processes and the full software development lifecycle
Demonstrated strong analytical and problem-solving skills
Strong verbal and written communication skills
Ability to work in teams and collaborate effectively with people in different functions
Strong time management skills that enable on-time project delivery
Demonstrated ability to build strong, influential relationships
Ability to work effectively in a fast-paced and rapidly changing environment
Ability to take the initiative and drive for results
Great programmer: programming skills in C/C++ and Python, modular and object-oriented programming skills, familiarity with build systems (make, cmake), familiarity with Linux
In-depth knowledge of embedded systems: VLIW and SIMD processor architecture, DMA, cache, memory architecture, inter-process communication
Working experience in machine learning technologies such as CNNs, transformers, and quantization algorithms and approaches for camera-based applications on embedded systems
Working experience with DSPs (preferably TI DSPs) and hardware development boards/EVMs for image/vision-based processing algorithms
Good knowledge of machine learning frameworks (PyTorch), inference solutions and exchange formats (ONNX, ONNX Runtime, protobufs)
Basic knowledge of RTOS and Linux with exposure to debugging of embedded systems; familiarity with heterogeneous core architecture is an added advantage
Well versed with the software development life cycle and efficient use of associated tools: Git, JIRA, Bitbucket, Jenkins, containers (Docker), CI/CD
Strong communication, documentation and writing skills
About Us
Why TI?
Engineer your future. We empower our employees to truly own their career and development. Come collaborate with some of the smartest people in the world to shape the future of electronics.
We're different by design. Diverse backgrounds and perspectives are what push innovation forward and what make TI stronger. We value each and every voice, and look forward to hearing yours. Meet the people of TI
Benefits that benefit you. We offer competitive pay and benefits designed to help you and your family live your best life. Your well-being is important to us.
About Texas Instruments
Texas Instruments Incorporated (Nasdaq: TXN) is a global semiconductor company that designs, manufactures and sells analog and embedded processing chips for markets such as industrial, automotive, personal electronics, communications equipment and enterprise systems. At our core, we have a passion to create a better world by making electronics more affordable through semiconductors. This passion is alive today as each generation of innovation builds upon the last to make our technology more reliable, more affordable and lower power, making it possible for semiconductors to go into electronics everywhere. Learn more at TI.com.
Texas Instruments is an equal opportunity employer and supports a diverse, inclusive work environment. If you are interested in this position, please apply to this requisition.
About The Team
TI does not make recruiting or hiring decisions based on citizenship, immigration status or national origin. However, if TI determines that information access or export control restrictions based upon applicable laws and regulations would prohibit you from working in this position without first obtaining an export license, TI expressly reserves the right not to seek such a license for you and either offer you a different position that does not require an export license or decline to move forward with your employment.
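Illustration (not part of the posting): since ONNX models are protobuf graphs, the exchange formats named above can be built and inspected programmatically; this small sketch constructs a one-node graph with the onnx helper API and prints its nodes, the kind of check that complements opening a model in a graph viewer. The graph contents are invented for demonstration.
```python
import onnx
from onnx import TensorProto, helper

# Build a one-node (Relu) graph by hand; contents are invented for illustration.
node = helper.make_node("Relu", inputs=["x"], outputs=["y"])
graph = helper.make_graph(
    [node], "toy_graph",
    [helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 4])],
    [helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 4])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 17)])

onnx.checker.check_model(model)          # structural validation of the protobuf
for n in model.graph.node:               # programmatic walk of the graph
    print(n.op_type, list(n.input), list(n.output))
```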
Posted 1 week ago