
2024 Inference Jobs - Page 9

Set up a job alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

8.0 - 12.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description
Designation and SRF Name: Technical CX Consultant
Role: Permanent/full-time
Panel and Hiring Manager: Sanjay Pathak
Experience: 8-12 years of relevant experience
Location: Noida/Gurgaon/Pune/Bangalore
Shift: 12 PM to 10 PM (10-hour shift; also depends on project/work dependencies)
Working Days: 5 days
Work Mode: Hybrid

Job Description: Highly skilled CX Consultant with deep expertise in CCaaS, integrations, IVR, Natural Language Processing (NLP), language models, and scalable cloud-based solution deployment.

Skills
Technical Expertise: Deep understanding of Conversational AI, Smart Agent Assist, CCaaS, and their technical capabilities. Stay current with industry trends, emerging technologies, and competitor offerings.
Customer Engagement: Engage with prospective clients to understand their technical requirements and business challenges. Conduct needs assessments and provide tailored technical solutions.
Solution Demonstrations: Deliver compelling product demonstrations that showcase the features and benefits of our solutions. Customize demonstrations to align with the specific needs and use cases of potential customers.
- Strong NLP and language model fundamentals (e.g., transformer architectures, embeddings, tokenization, fine-tuning).
- Expert in Python, with clean, modular, and scalable coding practices.
- Experience developing and deploying solutions on Azure, AWS, or Google Cloud Platform.
- Familiarity with Vertex AI, including Model Registry, Pipelines, and RAG integrations (preferred).
- Experience with PyTorch, including model training, evaluation, and serving.
- Knowledge of GPU-based inferencing (e.g., ONNX, TorchScript, Triton Inference Server).
- Understanding of ML lifecycle management, including MLOps best practices.
- Experience with containerization (Docker) and orchestration tools (e.g., Kubernetes).
- Exposure to REST APIs, gRPC, and real-time data pipelines is a plus.
Degree in Computer Science, Mathematics, Computational Linguistics, AI, ML, or a similar field. PhD is a plus.

Responsibilities
- Consult on and design end-to-end AI solutions for CX.
- Lead consulting engagements for scalable AI services on cloud infrastructure (Azure/AWS/GCP).
- Collaborate with engineering, product, and data teams to define AI-driven features and solutions.
- Optimize model performance, scalability, and cost across CPU and GPU environments.
- Ensure reliable model serving with a focus on low-latency, high-throughput inferencing.
- Keep abreast of the latest advancements in NLP, LLMs, and AI infrastructure.
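The tokenization and embedding fundamentals this posting asks for can be sketched in a few lines of plain Python; the vocabulary and vector values below are hypothetical, purely for illustration, and stand in for what a real tokenizer and trained embedding table would provide:

```python
import random

# Toy vocabulary and a small embedding table (hypothetical values).
vocab = {"<unk>": 0, "customer": 1, "experience": 2, "matters": 3}
random.seed(0)
embeddings = [[random.uniform(-1, 1) for _ in range(4)] for _ in vocab]

def tokenize(text):
    """Whitespace tokenization: map each word to a vocabulary id,
    falling back to the <unk> id for out-of-vocabulary words."""
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

def embed(token_ids):
    """Embedding lookup: each token id selects one dense vector."""
    return [embeddings[i] for i in token_ids]

ids = tokenize("Customer experience matters")
print(ids)                 # [1, 2, 3]
print(len(embed(ids)[0]))  # 4 (one 4-dimensional vector per token)
```

Production systems use subword tokenizers and learned embeddings, but the id-to-vector lookup shown here is the same basic mechanism.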

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

On-site

About Bipolar Factory
At Bipolar Factory, we are reimagining the boundaries of technology and creativity through cutting-edge AI solutions. Our team is building tools and products powered by real-time intelligence, and we’re looking for a skilled Computer Vision Engineer to join us in shaping the future of visual AI. If you’re excited by deep learning, edge AI, real-time detection, and large-scale vision pipelines, this is your place.

Key Responsibilities
- Model Development: Design, train, and deploy computer vision models for tasks such as object detection, image segmentation, classification, and tracking.
- Pipeline Building: Build scalable, efficient, production-ready vision pipelines using deep learning frameworks.
- Experimentation: Run experiments with state-of-the-art architectures (e.g., YOLO, Faster R-CNN, SAM, Vision Transformers), fine-tune pre-trained models, and benchmark performance.
- Data Handling: Work with large datasets: curate, augment, clean, and label image/video data for training and validation.
- Collaboration: Work closely with the AI team, product managers, and backend developers to integrate vision models into production systems.
- Research-Driven Engineering: Stay current with research trends and bring academic advancements into practical use cases.
- Optimization: Optimize models for real-time inference, edge devices, or low-resource environments.

What We’re Looking For
- 3–5 years of experience in computer vision or AI engineering roles.
- Proficiency in Python and deep learning frameworks such as PyTorch or TensorFlow.
- Solid understanding of image processing, CNNs, and modern computer vision techniques.
- Experience with OpenCV and vision-based data preprocessing.
- Familiarity with model deployment frameworks (e.g., ONNX, TensorRT, FastAPI).
- Ability to write clean, modular, well-documented code.
- Strong analytical skills and a mindset for experimentation.
Nice to Have
- Experience with video analytics, real-time processing, or edge deployment (e.g., Jetson, Raspberry Pi).
- Familiarity with generative models (GANs, diffusion models).
- Contributions to open-source CV/ML projects or research publications.
- Experience with cloud-based training or deployment (AWS, GCP).

Why Join Bipolar Factory?
- Work on high-impact, experimental AI products from the ground up
- A fast-moving, low-hierarchy environment where your work is seen and valued
- Flexible schedules, creative freedom, and deep tech problems to solve
- A passionate team that thrives on innovation and iteration
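Benchmarking detectors such as the YOLO and Faster R-CNN architectures mentioned above relies on Intersection-over-Union (IoU), the standard overlap metric between predicted and ground-truth boxes. A minimal pure-Python version (frameworks ship optimized implementations; this is only an illustrative sketch):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero area when the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))    # 1.0 (identical boxes)
print(iou((0, 0, 10, 10), (20, 20, 30, 30)))  # 0.0 (disjoint boxes)
```

A detection is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.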

Posted 1 week ago

Apply

12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We are seeking an experienced DevOps/AIOps Architect to design, architect, and implement an AI-driven operations solution that integrates various cloud-native services across AWS, Azure, and cloud-agnostic environments. The AIOps platform will be used for end-to-end machine learning lifecycle management, automated incident detection, and root cause analysis (RCA). The architect will lead efforts in developing a scalable solution utilizing data lakes, event streaming pipelines, ChatOps integration, and model deployment services. This platform will enable real-time intelligent operations in hybrid cloud and multi-cloud setups.

Responsibilities
- Assist in the implementation and maintenance of cloud infrastructure and services
- Contribute to the development and deployment of automation tools for cloud operations
- Participate in monitoring and optimizing cloud resources using AIOps and MLOps techniques
- Collaborate with cross-functional teams to troubleshoot and resolve cloud infrastructure issues
- Support the design and implementation of scalable and reliable cloud architectures
- Conduct research and evaluation of new cloud technologies and tools
- Work on continuous improvement initiatives to enhance cloud operations efficiency and performance
- Document cloud infrastructure configurations, processes, and procedures
- Adhere to security best practices and compliance requirements in cloud operations

Requirements
- Bachelor’s degree in Computer Science, Engineering, or a related field
- 12+ years of experience in DevOps, AIOps, or cloud architecture roles
- Hands-on experience with AWS services such as SageMaker, S3, Glue, Kinesis, ECS, and EKS
- Strong experience with Azure services such as Azure Machine Learning, Blob Storage, Azure Event Hubs, and Azure AKS
- Strong experience with Infrastructure as Code (IaC): Terraform, CloudFormation
- Proficiency in container orchestration (e.g., Kubernetes) and experience with multi-cloud environments
- Experience with machine learning model training, deployment, and data management across cloud-native and cloud-agnostic environments
- Expertise in implementing ChatOps solutions using platforms like Microsoft Teams and Slack, and integrating them with AIOps automation
- Familiarity with data lake architectures, data pipelines, and inference pipelines using event-driven architectures
- Strong programming skills in Python for rule management, automation, and integration with cloud services

Nice to Have
- Certifications in the AI/ML/GenAI space
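The automated incident detection this posting describes can be illustrated with a minimal rolling z-score rule in plain Python; the metric name, window, and threshold below are hypothetical choices for illustration, not taken from any actual AIOps product:

```python
import statistics

def detect_anomalies(values, window=5, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the mean of the preceding `window` samples: a minimal
    rolling z-score rule of the kind an AIOps pipeline might apply to a
    latency or error-rate metric before triggering RCA."""
    anomalies = []
    for i in range(window, len(values)):
        baseline = values[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline)
        if stdev and abs(values[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# Steady latency with one spike at index 7.
latency_ms = [100, 102, 98, 101, 99, 100, 101, 450, 100, 99]
print(detect_anomalies(latency_ms))  # [7]
```

In a real platform such a rule would consume an event stream (e.g., Kinesis or Event Hubs) rather than a list, and the alert would feed a ChatOps channel.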

Posted 1 week ago

Apply

6.0 years

0 Lacs

India

Remote

Full Stack Developer | Node.js | TypeScript | Azure AKS | AI Integrations

We’re seeking a backend-heavy Full Stack Developer with deep experience in Node.js, TypeScript, Azure AKS, and AI integrations to join a high-impact platform team. This is a remote contractual opportunity, focused on building intelligent backend systems, managing cloud-native infrastructure, and enabling real-time AI workflows at scale.

Experience: 6+ years
Salary: Competitive, based on experience and skill
Expected Notice Period: 2 weeks
Shift: Minimum 5 hours overlap with UK Time Zone (GMT)
Opportunity Type: Remote
Placement Type: Contractual

What do you need for this opportunity?
Primary Skills:
- Backend: Node.js, TypeScript, MongoDB, NATS (event-driven architecture), scalable APIs, microservices
- Cloud & Infrastructure: Azure AKS (Kubernetes), Azure VMs, ingress controllers, persistent volumes, secret/config management
- AI Integrations: Integrating LLM APIs, model orchestration, inference pipelines, system-level AI workflows
- DevOps: GitHub Actions, Docker Hub / Azure Container Registry (ACR), CI/CD pipelines, secure container deployment
- Version Control: Git
- Frontend: Angular (SPA development)

Nice to Have
- Hands-on experience with automated testing frameworks (e.g., Jest, Mocha, Cypress)
- Familiarity with MLOps tools, inference pipelines, or vector databases
- Knowledge of Azure Identity, access control, and platform security best practices
- Exposure to stream processing and real-time data flows

About the Role
You’ll work with Magic Factory’s UK-based AI platform client, taking the lead on backend services and Azure infrastructure. Your role involves:
- Designing and building scalable, AI-integrated microservices
- Developing intelligent backends that consume and orchestrate AI models
- Managing the full service lifecycle within Azure AKS, including workloads, ingress, volumes, and secrets
- Creating resilient, event-driven systems via NATS or similar messaging queues
- Maintaining robust CI/CD pipelines with GitHub Actions
- Implementing testing and monitoring frameworks for reliability and scale

Role & Responsibilities
While this is a full-stack role, your day-to-day responsibilities will be backend- and infrastructure-centric, such as:
- Backend development with Node.js/TypeScript for scalable microservices
- AI integration into backend workflows and real-time services
- Managing AKS workloads, deployments, volumes, and observability
- Architecting cloud-native, event-driven infrastructure using Azure best practices

The Ideal Candidate Will
- Take end-to-end ownership of backend releases from development to production
- Understand and implement AI service orchestration and distributed model execution
- Be comfortable managing cloud-native deployments and scalable APIs
- Champion clean code, automation, and test-driven development principles

Benefits
- 100% Remote: Work from anywhere with global team collaboration
- Flexible Hours: Align your schedule with a 5-hour overlap in GMT
- Growth-Oriented: Thrive in a fast-paced, learning-focused startup culture
- Impactful Work: Build AI-powered infrastructure that drives real-world value
- Modern Stack: Deep Azure + AI + event-driven backend tech
- Ownership: Influence architecture and drive core backend development

Qualification
- Bachelor’s degree in Computer Science, Software Engineering, or a related field
- 6+ years of experience as a backend/full-stack developer, with strong backend orientation
- Proven expertise in Node.js, TypeScript, MongoDB, and building scalable APIs
- Significant hands-on experience with Azure AKS, Azure infrastructure, and container orchestration
- Strong understanding of event-driven architecture, CI/CD with GitHub Actions, and container security
- Experience with AI or ML model integration, workflow management, and inferencing
- Bonus: Familiarity with LLM pipelines, vector DBs, testing frameworks, or Azure ML

About Magic Factory
Magic Factory is a start-up for start-ups, enabling world-class funded start-ups to accelerate their product development by 2X. We partner with cutting-edge start-ups across the globe and help them augment their product development teams with world-class remote developers. We are a start-up in the true sense of the word, built by passionate entrepreneurs and entrepreneurial engineers. Come join us and work on solving real-world problems with a talented, passionate, global team. Get exposure to best-in-class technologies and accelerate your learning curve.

Posted 1 week ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking an experienced DevOps/AIOps Architect to design, architect, and implement an AI-driven operations solution that integrates various cloud-native services across AWS, Azure, and cloud-agnostic environments. The AIOps platform will be used for end-to-end machine learning lifecycle management, automated incident detection, and root cause analysis (RCA). The architect will lead efforts in developing a scalable solution utilizing data lakes, event streaming pipelines, ChatOps integration, and model deployment services. This platform will enable real-time intelligent operations in hybrid cloud and multi-cloud setups.

Responsibilities
- Assist in the implementation and maintenance of cloud infrastructure and services
- Contribute to the development and deployment of automation tools for cloud operations
- Participate in monitoring and optimizing cloud resources using AIOps and MLOps techniques
- Collaborate with cross-functional teams to troubleshoot and resolve cloud infrastructure issues
- Support the design and implementation of scalable and reliable cloud architectures
- Conduct research and evaluation of new cloud technologies and tools
- Work on continuous improvement initiatives to enhance cloud operations efficiency and performance
- Document cloud infrastructure configurations, processes, and procedures
- Adhere to security best practices and compliance requirements in cloud operations

Requirements
- Bachelor’s degree in Computer Science, Engineering, or a related field
- 12+ years of experience in DevOps, AIOps, or cloud architecture roles
- Hands-on experience with AWS services such as SageMaker, S3, Glue, Kinesis, ECS, and EKS
- Strong experience with Azure services such as Azure Machine Learning, Blob Storage, Azure Event Hubs, and Azure AKS
- Strong experience with Infrastructure as Code (IaC): Terraform, CloudFormation
- Proficiency in container orchestration (e.g., Kubernetes) and experience with multi-cloud environments
- Experience with machine learning model training, deployment, and data management across cloud-native and cloud-agnostic environments
- Expertise in implementing ChatOps solutions using platforms like Microsoft Teams and Slack, and integrating them with AIOps automation
- Familiarity with data lake architectures, data pipelines, and inference pipelines using event-driven architectures
- Strong programming skills in Python for rule management, automation, and integration with cloud services

Nice to Have
- Certifications in the AI/ML/GenAI space

Posted 1 week ago

Apply

12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We are seeking an experienced DevOps/AIOps Architect to design, architect, and implement an AI-driven operations solution that integrates various cloud-native services across AWS, Azure, and cloud-agnostic environments. The AIOps platform will be used for end-to-end machine learning lifecycle management, automated incident detection, and root cause analysis (RCA). The architect will lead efforts in developing a scalable solution utilizing data lakes, event streaming pipelines, ChatOps integration, and model deployment services. This platform will enable real-time intelligent operations in hybrid cloud and multi-cloud setups.

Responsibilities
- Assist in the implementation and maintenance of cloud infrastructure and services
- Contribute to the development and deployment of automation tools for cloud operations
- Participate in monitoring and optimizing cloud resources using AIOps and MLOps techniques
- Collaborate with cross-functional teams to troubleshoot and resolve cloud infrastructure issues
- Support the design and implementation of scalable and reliable cloud architectures
- Conduct research and evaluation of new cloud technologies and tools
- Work on continuous improvement initiatives to enhance cloud operations efficiency and performance
- Document cloud infrastructure configurations, processes, and procedures
- Adhere to security best practices and compliance requirements in cloud operations

Requirements
- Bachelor’s degree in Computer Science, Engineering, or a related field
- 12+ years of experience in DevOps, AIOps, or cloud architecture roles
- Hands-on experience with AWS services such as SageMaker, S3, Glue, Kinesis, ECS, and EKS
- Strong experience with Azure services such as Azure Machine Learning, Blob Storage, Azure Event Hubs, and Azure AKS
- Strong experience with Infrastructure as Code (IaC): Terraform, CloudFormation
- Proficiency in container orchestration (e.g., Kubernetes) and experience with multi-cloud environments
- Experience with machine learning model training, deployment, and data management across cloud-native and cloud-agnostic environments
- Expertise in implementing ChatOps solutions using platforms like Microsoft Teams and Slack, and integrating them with AIOps automation
- Familiarity with data lake architectures, data pipelines, and inference pipelines using event-driven architectures
- Strong programming skills in Python for rule management, automation, and integration with cloud services

Nice to Have
- Certifications in the AI/ML/GenAI space

Posted 1 week ago

Apply

4.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Company Description
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content, wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

Job Description
Nielsen is seeking an organized, detail-oriented team player to join the Engineering team in the role of Machine Learning Engineer. Nielsen's Audience Measurement Engineering platforms support the measurement of television viewing in more than 30 countries around the world. The engineer will be responsible for defining, developing, testing, analyzing, and delivering technology solutions within Nielsen's Collections platforms.

Qualifications
- Experience leading multiple projects leveraging LLMs, GenAI, and prompt engineering.
- Exposure to real-world MLOps: deploying models into production and adding features to products.
- Knowledge of working in a cloud environment.
- Strong understanding of LLMs, GenAI, prompt engineering, and Copilot.
- Bachelor's degree in Computer Science or an equivalent degree.
- 4+ years of software experience.
- Experience with machine learning frameworks and models.
- The ML Engineer is expected to fully own the services built with the ML Scientists. This cuts across scalability, availability, having metrics and alarms/alerts in place, and being responsible for the latency of the services.
- Data quality checks and onboarding data onto the cloud for modeling purposes.
- Prompt engineering, fine-tuning work, evaluation, and data.
- End-to-end AI solution architecture, latency tradeoffs, LLM inference optimization, control plane, data plane, and platform engineering.
- Comfort in Python and Java is highly desirable.

Additional Information
Please be aware that job seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.

Posted 1 week ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote

Company Description
FormantAI pioneers cutting-edge, research-based AI solutions to drive business transformation. We translate complex AI research into powerful, practical tools that deliver tangible results for organizations. At FormantAI, we responsibly deploy advanced AI, aligning our innovative capabilities with your specific business needs to ensure impactful outcomes. Partner with us to leverage the forefront of AI innovation and redefine success in your industry.

About the Job
Position: GenAI Engineering Intern
Location: Remote
Duration: 2–6 months
Stipend: 50,000 - 60,000
Start Date: Immediate

Role Description
This is a paid remote internship role for a GenAI Engineering Intern. The intern will assist in developing and enhancing AI models, performing data analysis, contributing to research projects, and writing code. Daily tasks include collaborating with the engineering team to implement AI solutions, running experiments, and documenting findings. This role provides an opportunity to work in a dynamic research environment, gaining hands-on experience with AI technologies.

Qualifications
- Strong programming skills in Python, C++, or Java
- Experience with machine learning frameworks like TensorFlow or PyTorch
- Basic understanding of LLMs (GPT, Claude, Llama) and Natural Language Processing (NLP)
- Interest in optimizing AI inference, query processing, and API integrations
- Exposure to frameworks like Hugging Face or LangChain is a plus
- Strong communication skills and ability to work collaboratively
- Enthusiasm for learning and applying new technologies

Posted 1 week ago

Apply

4.0 years

4 - 9 Lacs

Gurgaon

On-site

Company Description
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content, wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

Job Description
Nielsen is seeking an organized, detail-oriented team player to join the Engineering team in the role of Machine Learning Engineer. Nielsen's Audience Measurement Engineering platforms support the measurement of television viewing in more than 30 countries around the world. The engineer will be responsible for defining, developing, testing, analyzing, and delivering technology solutions within Nielsen's Collections platforms.

Qualifications
- Experience leading multiple projects leveraging LLMs, GenAI, and prompt engineering.
- Exposure to real-world MLOps: deploying models into production and adding features to products.
- Knowledge of working in a cloud environment.
- Strong understanding of LLMs, GenAI, prompt engineering, and Copilot.
- Bachelor's degree in Computer Science or an equivalent degree.
- 4+ years of software experience.
- Experience with machine learning frameworks and models.
- The ML Engineer is expected to fully own the services built with the ML Scientists. This cuts across scalability, availability, having metrics and alarms/alerts in place, and being responsible for the latency of the services.
- Data quality checks and onboarding data onto the cloud for modeling purposes.
- Prompt engineering, fine-tuning work, evaluation, and data.
- End-to-end AI solution architecture, latency tradeoffs, LLM inference optimization, control plane, data plane, and platform engineering.
- Comfort in Python and Java is highly desirable.

Additional Information
Please be aware that job seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.

Posted 1 week ago

Apply

11.0 years

0 Lacs

Hyderabad

On-site

Job Description: About Us At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us! Global Business Services Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services. Process Overview* The Data Analytics Strategy platform and decision tool team is responsible for Data strategy for entire CSWT and development of platforms which supports the Data Strategy. 
Data Science platform, Graph Data Platform, Enterprise Events Hub are key platforms of Data Platform initiative. Job Description* We're seeking a highly skilled AI/ML Platform Engineer to architect and build a modern, scalable, and secure Data Science and Analytical Platform. This pivotal role will drive end-to-end (E2E) model lifecycle management, establish robust platform governance, and create the foundational infrastructure for developing, deploying, and managing Machine Learning models across both on-premise and hybrid cloud environments. Responsibilities* Lead the architecture and design for building scalable, resilient, and secure distributed applications ensuring compliance with organizational technology guidelines, security standards, and industry best practices like 12-factor principles and well-architected framework guidelines. Actively contribute to hands-on coding, building core components, APIs and microservices while ensuring high code quality, maintainability, and performance. Ensure adherence to engineering excellence standards and compliance with key organizational metrics such as code quality, test coverage and defect rates. Integrate secure development practices, including data encryption, secure authentication, and vulnerability management into the application lifecycle. Work on adopting and aligning development practices with CI/CD best practices to enable efficient build and deployment of the application on the target platforms like VMs and/or Container orchestration platforms like Kubernetes, OpenShift etc. Collaborate with stakeholders to align technical solutions business requirements, driving informed decision-making and effective communication across teams. Mentor team members, advocate best practices, and promote a culture if continuous improvement and innovation in engineering processes. Develop efficient utilities, automation frameworks, data science platforms that can be utilized across multiple Data Science teams. 
Propose/Build a variety of efficient Data pipelines to support ML Model building & deployment. Propose/Build automated deployment pipelines to enable a self-help continuous deployment process for the Data Science teams. Analyze, understand, execute and resolve the issues in user scripts / model / code. Perform release and upgrade activities as required. Well versed in open-source technology and aware of emerging 3rd party technology & tools in the AI-ML space. Ability to firefight, propose fixes, and guide the team through day-to-day issues in production. Ability to train partner Data Science teams on frameworks and platform. Flexible with time and shift to support the project requirements. It doesn’t include any night shift. This position doesn’t include any L1 or L2 (first line of support) responsibility. Requirements* Education* Graduation / Post Graduation: BE/B.Tech/MCA/MTech Certifications If Any: FullStack Bigdata Experience Range* 11+ Years Foundational Skills* Microservices & API Development: Strong proficiency in Python, building performant microservices and REST APIs using frameworks like FastAPI and Flask. API Gateway & Security: Hands-on experience with API gateway technologies like Apache APISIX (or similar, e.g., Kong, Envoy) for managing and securing API traffic, including JWT/OAuth2-based authentication. Observability & Monitoring: Proven ability to monitor, log, and troubleshoot model APIs and platform services using tools such as Prometheus, Grafana, or the ELK/EFK stack. Policy & Governance: Proficiency with Open Policy Agent (OPA) or similar policy-as-code frameworks for implementing and enforcing governance policies. MLOps Expertise: Solid understanding of MLOps capabilities, including ML model versioning, registry, and lifecycle automation using tools like MLflow, Kubeflow, or custom metadata solutions. Multi-Tenancy: Experience designing and implementing multi-tenant architectures for shared model and data infrastructure. 
Containerization & Orchestration: Strong knowledge of Docker and Kubernetes for containerization and orchestration. CI/CD & GitOps: Familiarity with CI/CD tools and GitOps practices for automated deployments and infrastructure management. Hybrid Cloud Deployments: Understanding of hybrid deployment strategies across on-premise virtual machines and public cloud platforms (AWS, Azure, GCP). Data Science Workbench Understanding: Basic understanding of the requirements for data science workloads (distributed training frameworks like Apache Spark and Dask, and IDEs like Jupyter notebooks and VS Code). Desired Skills* Security Architecture: Understanding of zero-trust security architecture and secure API design patterns. Model Serving Frameworks: Knowledge of specialized model serving frameworks like Triton Inference Server. Vector Databases: Familiarity with vector databases (e.g., Redis, Qdrant) and embedding stores. Data Lineage & Metadata: Exposure to data lineage and metadata management using tools like DataHub or OpenMetadata. Codes solutions and unit tests to deliver a requirement/story per the defined acceptance criteria and compliance requirements. Utilizes multiple architectural components (across data, application, business) in design and development of client requirements. Performs Continuous Integration and Continuous Development (CI-CD) activities. Contributes to story refinement and definition of requirements. Participates in estimating work necessary to realize a story/requirement through the delivery lifecycle. Extensive hands-on experience supporting platforms that allow modelers and analysts to go through the complete model lifecycle (data munging, model develop/train, governance, deployment). Experience with model deployment, scoring and monitoring for batch and real-time on various technologies and platforms. Experience with Hadoop clusters and integration, including ETL, streaming and API styles of integration. 
Experience in automation for deployment using Ansible Playbooks and scripting. Experience with developing and building RESTful API services in an efficient and scalable manner. Design, build, and deploy streaming and batch data pipelines capable of processing and storing large datasets quickly and reliably using Kafka, Spark and YARN for large volumes of data (TBs). Experience designing and building full stack solutions utilizing distributed computing or multi-node architecture for large datasets (terabytes to petabyte scale). Experience with processing and deployment technologies such as YARN, Kubernetes/Containers and Serverless Compute for model development and training. Hands-on experience working in a Cloud Platform (AWS/Azure/GCP) to support Data Science. Effective communication, strong stakeholder engagement skills, and a proven ability in leading and mentoring a team of software engineers in a dynamic environment. Work Timings* 11:30 AM to 8:30 PM IST Job Location* Hyderabad

Posted 1 week ago

Apply

0 years

5 - 8 Lacs

Hyderābād

On-site

Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions – we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Vice President – Generative AI – Systems Architect Role Overview: We are looking for an experienced Systems Architect with extensive experience in designing and scaling Generative AI systems to production. This role requires an individual with deep expertise in system architecture, software engineering, data platforms, and AI infrastructure, who can bridge the gap between data science, engineering, and business. You will be responsible for the end-to-end architecture of GenAI systems, including model lifecycle management, inference, orchestration, and pipelines. Key Responsibilities: Architect and design end-to-end systems for production-grade Generative AI applications (e.g., LLM-based chatbots, copilots, content generation tools). 
Define and oversee system architecture covering data ingestion, model training/fine-tuning, inferencing, and deployment pipelines. Establish architectural tenets like modularity, scalability, reliability, observability, and maintainability. Collaborate with data scientists, ML engineers, platform engineers, and product managers to align architecture with business and AI goals. Choose and integrate foundation models (open source or proprietary) using APIs, model hubs, or fine-tuned versions. Evaluate and design solutions based on architecture patterns such as Retrieval-Augmented Generation (RAG), Agentic AI, Multi-modal AI, and Federated Learning. Design secure and compliant architecture for enterprise settings, including data governance, auditability, and access control. Lead system design reviews and define non-functional requirements (NFRs), including latency, availability, throughput, and cost. Work closely with MLOps teams to define the CI/CD processes for model and system updates. Contribute to the creation of reference architectures, design templates, and reusable components. Stay abreast of the latest advancements in GenAI, system design patterns, and AI platform tooling. Qualifications we seek in you! Minimum Qualifications Proven experience designing and implementing distributed systems, cloud-native architectures, and microservices. Deep understanding of Generative AI architectures, including LLMs, diffusion models, prompt engineering, and model fine-tuning. Strong experience with at least one cloud platform (AWS, GCP, or Azure) and services like SageMaker, Vertex AI, or Azure ML. Experience with Agentic AI systems or orchestrating multiple LLM agents. Experience with multimodal systems (e.g., combining image, text, video, and speech models). Knowledge of semantic search, vector databases, and retrieval techniques in RAG. Familiarity with Zero Trust architecture and advanced enterprise security practices. 
Experience in building developer platforms/toolkits for AI consumption. Contributions to open-source AI system frameworks or thought leadership in GenAI architecture. Hands-on experience with tools and frameworks like LangChain, Hugging Face, Ray, Kubeflow, MLflow, or Weaviate/FAISS. Knowledge of data pipelines, ETL/ELT, and data lakes/warehouses (e.g., Snowflake, BigQuery, Delta Lake). Solid grasp of DevOps and MLOps principles, including containerization (Docker), orchestration (Kubernetes), CI/CD pipelines, and model monitoring. Familiarity with system design tradeoffs in latency vs cost vs scale for GenAI workloads. Preferred Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. Experience in software/system architecture, with experience in GenAI/AI/ML. Proven experience designing and implementing distributed systems, cloud-native architectures, and microservices. Strong interpersonal and communication skills; ability to collaborate and present to technical and executive stakeholders. Certifications in cloud platforms (e.g., AWS Certified Solutions Architect, Microsoft Certified: Azure Solutions Architect Expert, Google Cloud Professional Data Engineer). Familiarity with data governance and security best practices. Why join Genpact? Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation Make an impact – Drive change for global enterprises and solve business challenges that matter Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. 
Let’s build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Vice President Primary Location India-Hyderabad Schedule Full-time Education Level Master's / Equivalent Job Posting Jul 24, 2025, 4:29:40 AM Unposting Date Ongoing Master Skills List Digital Job Category Full Time

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Purpose: Define the AI Research Team structure, vision, hiring plan, and guidelines to build a world-class AI foundation for HDFC Mobile & Net Banking. Job Responsibilities: Responsible for deploying ML models to production, setting up Smart SOC automation, Firebase/Play monitoring, crash-free sessions, and containerized AI infra. Develop NLP systems to parse and understand statements in Hindi, Tamil, Marathi, and Bengali and convert them into insights using custom LLMs. Ensure every deployed model meets audit, compliance, and cybersecurity standards. Collaborate with Security, Risk, Audit, and RBI compliance teams. Key skills required: Minimum 7 years of experience required. Python, TensorFlow, PyTorch, FastAPI - DevOps (Kubernetes, Docker), CI/CD for ML - Cloud ML pipelines (GCP, Azure, AWS) - Familiarity with real-time logging and incident systems NLP - HuggingFace, IndicNLP, BERT family models - Tokenization, NER, sentence parsing for regional text - Fast training & inference for multilingual environments ML explainability, fairness, adversarial defense - Secure model deployment (threat models, RBAC) - Integration with audit logs, version tracking

Posted 1 week ago

Apply

12.0 years

0 Lacs

Hyderābād

On-site

Country/Region: IN Requisition ID: 27741 Work Model: Position Type: Salary Range: Location: INDIA - HYDERABAD - BIRLASOFT OFFICE Title: Architect Description: Area(s) of responsibility Job Title: Generative AI Technical Architect Role Overview: Generative AI Architect will design, develop, and implement advanced generative AI solutions that drive business impact. This role offers the opportunity to work at the forefront of AI innovation. The Architect will lead the end-to-end architecture, design, and deployment of scalable generative AI systems. Responsibilities include conceptualizing solutions, selecting models/frameworks, overseeing development, and integrating AI capabilities into platforms. Collaboration with stakeholders to translate complex requirements into high-performance AI solutions is key. Key Responsibilities: Design GenAI Solutions: Lead architecture of generative AI systems, including LLM selection, RAG, and fine-tuning. Azure AI Expertise: Build scalable AI solutions using Azure AI services. Python Development: Write efficient, maintainable Python code for data processing, automation, and APIs. Model Optimization: Enhance model performance, scalability, and cost-efficiency. Data Strategy: Design data pipelines for training/inference using Azure data services. Integration & Deployment: Integrate models into enterprise systems. Implement MLOps, CI/CD (Azure DevOps, GitHub, Jenkins), and containerization (Docker, Kubernetes). Technical Leadership: Guide teams on AI development and deployment best practices. Innovation: Stay updated on GenAI trends. Drive PoCs and pilot implementations. Collaboration: Work with cross-functional teams to align AI solutions with business goals. Communicate technical concepts to non-technical stakeholders. Required Skills & Qualifications: 12–16 years in IT, with 3+ years in GenAI architecture. 
Technical Proficiency: Azure AI services Python, TensorFlow, PyTorch, Hugging Face, LangChain, LlamaIndex LLMs, transformers, diffusion models Prompt engineering, RAG, vector DBs (Pinecone, Weaviate, Chroma) MLOps, CI/CD, Kubernetes RESTful API development Architecture: Cloud, microservices, design patterns Problem-Solving: Strong analytical and creative thinking Communication: Clear articulation of complex concepts Teamwork: Agile collaboration and project leadership Desirable: Azure AI certifications Experience with AWS LLM fine-tuning Open-source contributions or AI/ML publications
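To make the RAG and vector-DB bullets above concrete, here is a minimal, self-contained sketch of the retrieval step only; the 3-d "embeddings" and document ids are toy values invented for illustration, and a production system would instead use a real embedding model plus a vector database such as Pinecone, Weaviate, or Chroma:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, store, k=2):
    """Return the ids of the k documents whose embeddings are most similar
    to the query embedding (the core of RAG retrieval)."""
    ranked = sorted(store.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy embedding store: document id -> 3-d "embedding" (hypothetical values)
store = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.0, 1.0, 0.0],
    "doc_c": [0.7, 0.2, 0.1],
}

print(retrieve([1.0, 0.0, 0.0], store))  # → ['doc_a', 'doc_c']
```

The retrieved documents would then be stuffed into the LLM prompt as context; that generation step is omitted here.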

Posted 1 week ago

Apply

5.0 years

4 - 6 Lacs

Chennai

On-site

Flex is the diversified manufacturing partner of choice that helps market-leading brands design, build and deliver innovative products that improve the world. A career at Flex offers the opportunity to make a difference and invest in your growth in a respectful, inclusive, and collaborative environment. If you are excited about a role but don't meet every bullet point, we encourage you to apply and join us to create the extraordinary. Job Summary To support our extraordinary teams who build great products and contribute to our growth, we’re looking to add a Senior Specialist - Indirect Procurement Sourcing in Chennai, India. The Senior Specialist will be based in Chennai and will be responsible for Indirect Procurement operations, specializing in handling MRO / EDM / Facilities / Construction / New Building Projects procurement and RFQs to support factory & GBS operations, execution of the strategic sourcing process and global policy compliance, setting goals and leading the team to achieve them, and coaching and developing talent. What a typical day looks like: Typically requires an Engineering degree in a related field. A minimum of 5 years of material and manufacturing experience, preferably from the Manufacturing Industry (Automobile, Electronics Manufacturing). Demonstrates expert functional, technical and people and/or process management skills as well as customer (external and internal) relationship skills. Demonstrates detailed expertise in a very complex functional/technical area or broad breadth of knowledge in multiple areas; understands the strategic impact of the function across sites. Ability to read, analyze, and interpret the most complex documents. Ability to respond effectively to the most sensitive inquiries or complaints. Ability to write speeches and articles using original or innovative techniques or style. 
Ability to make effective and persuasive speeches and presentations on controversial or complex topics to top management, public groups, and/or boards of directors. Ability to apply mathematical concepts such as probability and statistical inference to practical situations. Ability to define problems, collect data, establish facts and draw valid conclusions. Ability to interpret an extensive variety of technical instructions in mathematical or diagram form and deal with several abstract and concrete variables. The experience we’re looking to add to our team: 5+ years' working experience in the manufacturing sector (not a service/consultant/trading company) Familiar with international company culture Dedicated role in supply chain management, including 3+ years' working experience in indirect procurement of MRO / EDM / Facilities / Construction Projects Financial knowledge and cost management sense Working knowledge of ERP systems What you’ll receive for the great work you provide Health Insurance PTO #RA01 Job Category Global Procurement & Supply Chain Required Skills: Optional Skills: Flex pays for all costs associated with the application, interview or offer process; a candidate will not be asked for any payment related to these costs. Flex is an Equal Opportunity Employer and employment selection decisions are based on merit, qualifications, and abilities. We do not discriminate based on: age, race, religion, color, sex, national origin, marital status, sexual orientation, gender identity, veteran status, disability, pregnancy status, or any other status protected by law. We're happy to provide reasonable accommodations to those with a disability for assistance in the application process. Please email accessibility@flex.com and we'll discuss your specific situation and next steps (NOTE: this email does not accept or consider resumes or applications. This is only for disability assistance. 
To be considered for a position at Flex, you must complete the application process first).

Posted 1 week ago

Apply

0.0 - 1.0 years

5 - 6 Lacs

Ahmedabad

On-site

Position - 02 Job Location - Ahmedabad Qualification - Bachelor’s or Master’s degree in Computer Science, Data Science, Artificial Intelligence, or a related field; relevant certifications or course completions (Coursera, edX, etc.) will be an advantage Years of Exp - 0 to 1 year About us Bytes Technolab is a full-range web application development company, established in 2011, with an international presence in the USA, Australia, and India. Bytes has exhibited excellent craftsmanship in innovative web development, eCommerce solutions, and mobile application development services ever since its inception. Roles & responsibilities Support development and fine-tuning of Large Language Models (LLMs) using open-source or proprietary models (e.g., OpenAI, HuggingFace, LLaMA). Build and optimize Computer Vision pipelines for tasks such as object detection, image classification, and OCR. Design and implement data preprocessing pipelines, including handling structured and unstructured data. Assist in training, evaluation, and deployment of ML/DL models in staging or production environments. Write clean, scalable, and well-documented code for research and experimentation purposes. Collaborate with senior data scientists, ML engineers, and product teams on AI projects. Skills required Strong foundation in Neural Networks, Deep Learning, and ML algorithms. Hands-on experience with Python and libraries such as TensorFlow, PyTorch, OpenCV, or HuggingFace Transformers. Familiarity with LLM architectures (e.g., GPT, BERT, LLaMA) and their fine-tuning or inference techniques. Basic understanding of Computer Vision concepts and real-world use cases. Experience working with data pipelines and handling large datasets (e.g., Pandas, NumPy, data loaders). Knowledge of model evaluation techniques and metrics. Good to Have Experience using Hugging Face, LangChain, OpenCV, or YOLO for CV tasks. Familiarity with Prompt Engineering and Retrieval-Augmented Generation (RAG). 
Understanding of NLP concepts such as tokenization, embeddings, and vector search. Exposure to ML model deployment (Flask, FastAPI, Streamlit, or AWS/GCP/Azure). Participation in ML competitions (e.g., Kaggle) or personal projects on GitHub. Soft Skills Strong analytical and problem-solving skills. Willingness to learn and work in a collaborative team environment. Good communication and documentation habits.
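As a deliberately simplified illustration of the tokenization and embedding-lookup concepts listed above (a real pipeline would use a subword tokenizer from a library such as HuggingFace Transformers; the whitespace tokenizer and vocabulary here are hypothetical teaching devices):

```python
def build_vocab(corpus):
    """Assign an integer id to each unique whitespace token; id 0 is
    reserved for out-of-vocabulary tokens."""
    vocab = {"<unk>": 0}
    for text in corpus:
        for tok in text.lower().split():
            vocab.setdefault(tok, len(vocab))
    return vocab

def encode(text, vocab):
    """Map a sentence to a list of token ids, falling back to <unk>
    for words not seen during vocabulary building."""
    return [vocab.get(tok, 0) for tok in text.lower().split()]

corpus = ["deep learning models", "learning from data"]
vocab = build_vocab(corpus)
print(encode("deep learning rocks", vocab))  # → [1, 2, 0]
```

In a neural model, each of these ids would then index a row of an embedding matrix; subword tokenizers (BPE, WordPiece) avoid the `<unk>` fallback by splitting unknown words into known pieces.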

Posted 1 week ago

Apply

3.0 years

1 - 4 Lacs

Indore

On-site

Job Title: AI/ML Engineer (Python + AWS + REST APIs) Department: Web Location: Indore Job Type: Full-time Experience: 3-5 years Notice Period: 0-15 days (immediate joiners preferred) Work Arrangement: On-site (Work from Office) Overview: Advantal Technologies is seeking a passionate AI/ML Engineer to join our team in building the core AI-driven functionality of an intelligent visual data encryption system. The role involves designing, training, and deploying AI models (e.g., CLIP, DCGANs, Decision Trees), integrating them into a secure backend, and operationalizing the solution via AWS cloud services and Python-based APIs. Key Responsibilities: AI/ML Development  Design and train deep learning models for image classification and sensitivity tagging using CLIP, DCGANs, and Decision Trees.  Build synthetic datasets using DCGANs for balancing.  Fine-tune pre-trained models for customized encryption logic.  Implement explainable classification logic for model outputs.  Validate model performance using custom metrics and datasets. API Development  Design and develop Python RESTful APIs using FastAPI or Flask for:  Image upload and classification  Model inference endpoints  Encryption trigger calls  Integrate APIs with AWS Lambda and Amazon API Gateway. AWS Integration  Deploy and manage AI models on Amazon SageMaker for training and real-time inference.  Use AWS Lambda for serverless backend compute.  Store encrypted image data on Amazon S3 and metadata on Amazon RDS (PostgreSQL).  Use AWS Cognito for secure user authentication and KMS for key management.  Monitor job status via CloudWatch and enable secure, scalable API access. Required Skills & Experience: Must-Have  3–5 years of experience in AI/ML (especially vision-based systems).  Strong experience with PyTorch or TensorFlow for model development.  Proficient in Python with experience building RESTful APIs.  Hands-on experience with Amazon SageMaker, Lambda, API Gateway, and S3. 
 Knowledge of OpenSSL/PyCryptodome or basic cryptographic concepts.  Understanding of model deployment, serialization, and performance tuning. Nice-to-Have  Experience with CLIP model fine-tuning.  Familiarity with Docker, GitHub Actions, or CI/CD pipelines.  Experience in data classification under compliance regimes (e.g., GDPR, HIPAA).  Familiarity with multi-tenant SaaS design patterns. Tools & Technologies:  Python, PyTorch, TensorFlow  FastAPI, Flask  AWS: SageMaker, Lambda, S3, RDS, Cognito, API Gateway, KMS  Git, Docker, Postgres, OpenCV, OpenSSL If interested, please share resume to hr@advantal.ne
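At its simplest, the "explainable classification logic" this posting describes could resemble the following sketch; the threshold value, label names, and tag vocabulary are purely illustrative assumptions, not details from the role description:

```python
def tag_sensitivity(scores, threshold=0.5):
    """Map per-label classifier confidences to a sensitivity tag, keeping the
    (label, score) evidence so the decision is explainable downstream, e.g.
    when deciding whether to trigger encryption."""
    sensitive_labels = {"id_document", "medical", "financial"}  # hypothetical
    hits = [(label, s) for label, s in scores.items()
            if label in sensitive_labels and s >= threshold]
    tag = "sensitive" if hits else "public"
    # Evidence is sorted by confidence so a reviewer sees the strongest signal first.
    return {"tag": tag, "evidence": sorted(hits, key=lambda h: -h[1])}

result = tag_sensitivity({"id_document": 0.91, "landscape": 0.80, "medical": 0.30})
print(result)  # → {'tag': 'sensitive', 'evidence': [('id_document', 0.91)]}
```

Returning the evidence alongside the tag is what makes the output auditable; a downstream API could use `tag == "sensitive"` as the encryption trigger.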

Posted 1 week ago

Apply

6.0 years

0 Lacs

Visakhapatnam

On-site

Job Title: Machine Learning Engineer – 3D Graphics Location: Visakhapatnam, UAE Experience: 6+ years Job Type: Full-Time Role: We are seeking a highly skilled and innovative Machine Learning Engineer with 3D Graphics expertise. In this role, you will be responsible for developing and optimizing 3D mannequin models using machine learning algorithms, computer vision techniques, and 3D rendering tools. You will collaborate with backend developers, data scientists, and UI/UX designers to create realistic, scalable, and interactive 3D visualization modules that enhance the user experience. Key Responsibilities: 3D Mannequin Model Development: Design and develop 3D mannequin models using ML-based body shape estimation. Implement pose estimation, texture mapping, and deformation models. Use ML algorithms to adjust measurements for accurate sizing and fit. Machine Learning & Computer Vision: Develop and fine-tune ML models for body shape recognition, segmentation, and fitting. Implement pose detection algorithms using TensorFlow, PyTorch, or OpenCV. Use GANs or CNNs for realistic 3D texture generation. 3D Graphics & Visualization: Create interactive 3D rendering pipelines using Three.js, Babylon.js, or Unity. Optimize mesh processing, lighting, and shading for real-time rendering. Use GPU-accelerated techniques for rendering efficiency. Model Optimization & Performance: Optimize inference pipelines for faster real-time rendering. Implement multi-threading and parallel processing for high performance. Utilize cloud infrastructure (AWS/GCP) for distributed model training and inference. Collaboration & Documentation: Collaborate with UI/UX designers for seamless integration of 3D models into web and mobile apps. Maintain detailed documentation for model architecture, training processes, and rendering techniques. Key Skills & Qualifications: Experience: 5+ years in Machine Learning, Computer Vision, and 3D Graphics Development. 
Technical Skills: Proficiency in Django, Python, TensorFlow, PyTorch, and OpenCV. Strong expertise in 3D rendering frameworks: Three.js, Babylon.js, or Unity. Experience with 3D model formats (GLTF, OBJ, FBX). Familiarity with Mesh Recovery, PyMAF, and SMPL models. ML & Data Skills: Hands-on experience with GANs, CNNs, and RNNs for texture and pattern generation. Experience with 3D pose estimation and body measurement algorithms. Cloud & Infrastructure: Experience with AWS (SageMaker, Lambda) or GCP (Vertex AI, Cloud Run). Knowledge of Docker and Kubernetes for model deployment. Graphics & Visualization: Knowledge of 3D rendering engines with shader programming. Experience in optimization techniques for rendering large 3D models. Soft Skills: Strong problem-solving skills and attention to detail. Excellent collaboration and communication skills. Interested candidates can send their updated resume to: careers@onliestworld.com Job Type: Full-time

Posted 1 week ago

Apply

4.0 - 6.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Job Family Data Science & Analysis (India) Travel Required Up to 10% Clearance Required None What You Will Do Design, train, and fine-tune advanced foundational models (text, audio, vision) using healthcare and other relevant datasets, focusing on accuracy and context relevance. Collaborate with cross-functional teams (Business, engineering, IT) to seamlessly integrate AI/ML technologies into our solution offerings. Deploy, monitor, and manage AI models in a production environment, ensuring high availability, scalability, and performance. Continuously research and evaluate the latest advancements in AI/ML and industry trends to drive innovation. Develop and maintain comprehensive documentation for AI models, including development, training, fine-tuning, and deployment procedures. Provide technical guidance and mentorship to junior AI engineers and team members. Collaborate with stakeholders to understand business needs and translate them into technical requirements for model fine-tuning and development. Select and curate appropriate datasets for fine-tuning foundational models to address specific use cases. Ensure AI solutions can seamlessly integrate with existing systems and applications. What You Will Need Bachelor's or Master's in Computer Science, Artificial Intelligence, Machine Learning, or a related field. 4 to 6 years of hands-on experience in AI/ML, with a demonstrable track record of training and deploying LLMs and other machine learning models. Strong proficiency in Python and familiarity with popular AI/ML frameworks (TensorFlow, PyTorch, Hugging Face Transformers, etc.). Practical experience deploying and managing AI models in production environments, including expertise in serving and inference frameworks (Triton, TensorRT, vLLM, TGI, etc.). Experience in Voice AI applications, a solid understanding of healthcare data standards (FHIR, HL7, EDI) and regulatory compliance (HIPAA, SOC2) is preferred. 
Excellent problem-solving and analytical abilities, capable of tackling complex challenges and evaluating multiple factors. Exceptional communication and collaboration skills, enabling effective teamwork in a dynamic environment. Worked on a minimum of 2 AI/LLM projects from the beginning to the end with proven value for business. What Would Be Nice To Have Experience with cloud computing platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes) is a plus. Familiarity with MLOps practices for continuous integration, continuous deployment (CI/CD), and automated monitoring of AI models. What We Offer Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace. About Guidehouse Guidehouse is an Equal Opportunity Employer–Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance including the Fair Chance Ordinance of Los Angeles and San Francisco. If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or guidehouse@myworkday.com. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. 
Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse’s Ethics Hotline. If you want to check the validity of correspondence you have received, please contact recruiting@guidehouse.com. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant’s dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.

Posted 1 week ago

Apply

2.0 - 4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Havas CSA is seeking a Data Scientist with 2-4 years of experience to contribute to advanced analytics and predictive modelling initiatives. The ideal candidate will combine strong statistical knowledge with practical business understanding to help develop and implement models that drive customer value and business growth.

Responsibilities:
- Implement and maintain customer analytics models, including CLTV prediction, propensity modelling, and churn prediction
- Support the development of customer segmentation models using clustering techniques and behavioural analysis
- Assist in building and maintaining survival models to analyze customer lifecycle events
- Work with large-scale datasets using BigQuery and Snowflake
- Develop and validate machine learning models using Python and cloud-based ML platforms, specifically BigQuery ML, Model Garden, and Amazon Bedrock
- Help transform model insights into actionable business recommendations
- Collaborate with analytics and activation teams to implement model outputs
- Present analyses to stakeholders in clear, actionable formats

Qualifications:
- Bachelor's or master's degree in Statistics, Mathematics, Computer Science, or a related quantitative field
- 1-2 years' experience in applied data science, preferably in marketing/retail
- Experience in developing and implementing machine learning models
- Strong understanding of statistical concepts and experimental design
- Ability to communicate technical concepts to non-technical audiences
- Familiarity with agile development methodologies

Technical Skills:
Advanced proficiency in:
- SQL and data warehouses (BigQuery, Snowflake)
- Python for statistical modeling
- Machine learning frameworks (scikit-learn, TensorFlow)
- Statistical analysis and hypothesis testing
- Data visualization tools (Matplotlib, Seaborn)
- Version control systems (Git)
- Understanding of Google Cloud Functions and Cloud Run

Experience with:
- Customer lifetime value modeling
- RFM analysis and customer segmentation
- Survival analysis and hazard modeling
- A/B testing and causal inference
- Feature engineering and selection
- Model validation and monitoring
- Cloud computing platforms (GCP/AWS/Azure)

Key Projects & Deliverables:
- Support development and maintenance of CLTV models
- Contribute to customer segmentation models incorporating behavioral and transactional data
- Implement survival models to predict customer churn
- Support the development of attribution models for marketing effectiveness
- Help develop recommendation engines for personalized customer experiences
- Assist in creating automated reporting and monitoring systems

Soft Skills:
- Strong analytical and problem-solving abilities
- Good communication and presentation skills
- Business acumen
- Collaborative team player
- Strong organizational skills
- Ability to translate business problems into analytical solutions

Growth Opportunities:
- Work on innovative data science projects for major brands
- Develop expertise in cutting-edge ML technologies
- Learn from experienced data science leaders
- Contribute to impactful analytical solutions
- Opportunity for career advancement

We offer competitive compensation, comprehensive benefits, and the opportunity to work with leading brands while solving complex analytical challenges. Join our team to grow your career while making a significant impact through data-driven decision making.

Contract Type: Permanent

Here at Havas across the group we pride ourselves on being committed to offering equal opportunities to all potential employees and have zero tolerance for discrimination. We are an equal opportunity employer and welcome applicants irrespective of age, sex, race, ethnicity, disability and other factors that have no bearing on an individual's ability to perform their job.
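As an illustration of the RFM (Recency, Frequency, Monetary) analysis this posting asks for, here is a minimal, dependency-free sketch; the transaction data and tuple layout are hypothetical, and a real pipeline would compute this in SQL or pandas over a warehouse table:

```python
from datetime import date

# Toy transaction log: (customer_id, order_date, amount). Hypothetical data.
transactions = [
    ("c1", date(2024, 6, 1), 120.0),
    ("c1", date(2024, 6, 20), 80.0),
    ("c2", date(2024, 1, 5), 40.0),
    ("c2", date(2024, 2, 10), 35.0),
    ("c3", date(2024, 6, 25), 300.0),
]

def rfm_scores(txns, today):
    """Compute raw Recency/Frequency/Monetary values per customer."""
    out = {}
    for cid, d, amt in txns:
        last, freq, mon = out.get(cid, (None, 0, 0.0))
        last = d if last is None else max(last, d)
        out[cid] = (last, freq + 1, mon + amt)
    # Recency = days since last purchase (lower is better).
    return {cid: ((today - last).days, f, m) for cid, (last, f, m) in out.items()}

scores = rfm_scores(transactions, date(2024, 7, 1))
print(scores["c1"])  # (11, 2, 200.0)
```

Segmentation then typically bins each of the three values into quantile scores and groups customers by the resulting (R, F, M) code.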

Posted 1 week ago

Apply

0 years

0 Lacs

Bhopal, Madhya Pradesh, India

Remote

Internship Opportunity – Computer Vision Engineer
Location: Remote
Duration: 3 to 6 months
Company: Logiclens Solutions

***Note: This is an unpaid internship. A full-time opportunity may be offered after the internship if the candidate performs well.***

About Us: Logiclens Solutions is a cutting-edge AI and video analytics company delivering real-time surveillance intelligence to businesses. We specialize in computer vision applications such as object detection, facial recognition, apparatus detection, people tracking, and automated reporting through AI-driven dashboards.

Role: Computer Vision Intern

Key Responsibilities:
- Assist in designing and implementing computer vision pipelines using OpenCV and other libraries.
- Build and optimize object detection and image/video processing models.
- Integrate machine learning models into scalable web services using FastAPI or Flask.
- Develop and maintain frontend dashboards using React.js, Tailwind CSS, and HTML/CSS.
- Work with RESTful APIs and manage seamless backend-frontend communication.
- Handle database operations using MongoDB and MySQL.
- Deploy applications using Docker containers on AWS (EC2, S3, etc.).
- Contribute to CI/CD pipelines using GitHub Actions or GitLab CI/CD.
- Collaborate using Git for version control and code reviews.

Required Skills:
- Computer Vision: OpenCV (image/video processing, object detection)
- Languages: Python, JavaScript
- Web Frameworks: FastAPI, Flask, Node.js, Express.js
- Frontend: React.js, HTML, CSS, Tailwind CSS
- Databases: MongoDB, MySQL
- DevOps & Deployment Cloud: AWS (EC2, S3, etc.)
- Containers: Docker
- CI/CD: Familiarity with GitHub Actions, GitLab CI/CD
- Version Control: Git & GitHub
- API: RESTful API design and integration

Bonus Skills:
- Strong understanding of cloud-based deployments (especially AWS)
- Exposure to computer vision model training and optimization
- Experience with ML model inference pipelines

What We Offer:
- Real-world exposure to AI-driven projects in surveillance and analytics
- Opportunity to work with a skilled tech team in production-level environments
- Certificate, letter of recommendation, and potential full-time opportunity based on performance
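A staple building block in the object-detection work described above is intersection-over-union (IoU), used to match predicted boxes against ground truth when evaluating or filtering detections. A minimal sketch; the `(x1, y1, x2, y2)` corner format is an assumption, as box conventions vary between libraries:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```

The same function underpins non-maximum suppression and mAP computation in most detection pipelines.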

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Flex is the diversified manufacturing partner of choice that helps market-leading brands design, build and deliver innovative products that improve the world. We believe in the power of diversity and inclusion and cultivate a workplace culture of belonging that views uniqueness as a competitive edge and builds a community that enables our people to push the limits of innovation to make great products that create value and improve people's lives. A career at Flex offers the opportunity to make a difference and invest in your growth in a respectful, inclusive, and collaborative environment. If you are excited about a role but don't meet every bullet point, we encourage you to apply and join us to create the extraordinary.

To support our extraordinary teams who build great products and contribute to our growth, we're looking to add a Senior Specialist - Indirect Procurement Sourcing in Chennai, India. The Senior Specialist will be based in Chennai and will be responsible for indirect procurement operations, specializing in MRO / EDM / Facilities / Construction / new building project procurement and RFQs to support factory and GBS operations, execution of the strategic sourcing process and global policy compliance, setting goals and leading the team to achieve them, and coaching and developing talent.

What a typical day looks like:
- Typically requires an Engineering degree in a related field.
- A minimum of 5 years of material and manufacturing experience, preferably from the manufacturing industry (automobile, electronics manufacturing).
- Demonstrates expert functional, technical, and people and/or process management skills, as well as customer (external and internal) relationship skills.
- Demonstrates detailed expertise in a very complex functional/technical area, or broad knowledge across multiple areas; understands the strategic impact of the function across sites.
- Ability to read, analyze, and interpret the most complex documents.
- Ability to respond effectively to the most sensitive inquiries or complaints.
- Ability to write speeches and articles using original or innovative techniques or style.
- Ability to make effective and persuasive speeches and presentations on controversial or complex topics to top management, public groups, and/or boards of directors.
- Ability to apply mathematical concepts such as probability and statistical inference to practical situations.
- Ability to define problems, collect data, establish facts, and draw valid conclusions.
- Ability to interpret an extensive variety of technical instructions in mathematical or diagram form and deal with several abstract and concrete variables.

The experience we're looking to add to our team:
- 5+ years' working experience in the manufacturing sector (not a service/consultant/trading company)
- Familiarity with international company culture
- A dedicated role in supply chain management, including 3+ years' working experience in indirect procurement for MRO / EDM / Facilities / Construction projects
- Financial knowledge and cost-management sense
- Working knowledge of ERP systems

What you'll receive for the great work you provide:
- Health Insurance
- PTO

#RA01 Site

Flex is an Equal Opportunity Employer and employment selection decisions are based on merit, qualifications, and abilities. We celebrate diversity and do not discriminate based on: age, race, religion, color, sex, national origin, marital status, sexual orientation, gender identity, veteran status, disability, pregnancy status, or any other status protected by law. We're happy to provide reasonable accommodations to those with a disability for assistance in the application process. Please email accessibility@flex.com and we'll discuss your specific situation and next steps (NOTE: this email does not accept or consider resumes or applications. This is only for disability assistance. To be considered for a position at Flex, you must complete the application process first).

Posted 1 week ago

Apply

2.0 years

12 - 28 Lacs

Coimbatore, Tamil Nadu, India

On-site

Experience: 3 to 10 years
Location: Coimbatore
Notice Period: Immediate joiners are preferred.
Note: Minimum 2 years of experience in core Gen AI.

Key Responsibilities:
- Design, develop, and fine-tune Large Language Models (LLMs) for various in-house applications.
- Implement and optimize Retrieval-Augmented Generation (RAG) techniques to enhance AI response quality.
- Develop and deploy Agentic AI systems capable of autonomous decision-making and task execution.
- Build and manage data pipelines for processing, transforming, and feeding structured/unstructured data into AI models.
- Ensure scalability, performance, and security of AI-driven solutions in production environments.
- Collaborate with cross-functional teams, including data engineers, software developers, and product managers.
- Conduct experiments and evaluations to improve AI system accuracy and efficiency.
- Stay updated with the latest advancements in AI/ML research, open-source models, and industry best practices.

Required Skills & Qualifications:
- Strong experience in LLM fine-tuning using frameworks like Hugging Face, DeepSpeed, or LoRA/PEFT.
- Hands-on experience with RAG architectures, including vector databases (e.g., Pinecone, ChromaDB, Weaviate, OpenSearch, FAISS).
- Experience in building AI agents using LangChain, LangGraph, CrewAI, AutoGPT, or similar frameworks.
- Proficiency in Python and deep learning frameworks like PyTorch or TensorFlow.
- Experience in Python web frameworks such as FastAPI, Django, or Flask.
- Experience in designing and managing data pipelines using tools like Apache Airflow, Kafka, or Spark.
- Knowledge of cloud platforms (AWS/GCP/Azure) and containerization technologies (Docker, Kubernetes).
- Familiarity with LLM APIs (OpenAI, Anthropic, Mistral, Cohere, Llama, etc.) and their integration in applications.
- Strong understanding of vector search, embedding models, and hybrid retrieval techniques.
- Experience with optimizing inference and serving AI models in real-time production systems.

Nice-to-Have Skills:
- Experience with multi-modal AI (text, image, audio).
- Familiarity with privacy-preserving AI techniques and responsible AI frameworks.
- Understanding of MLOps best practices, including model versioning, monitoring, and deployment automation.

Skills: pytorch, rag architectures, opensearch, weaviate, docker, llm fine-tuning, chromadb, apache airflow, lora, python, hybrid retrieval techniques, django, gcp, crewai, openai, hugging face, gen ai, pinecone, faiss, aws, autogpt, embedding models, flask, fastapi, llm apis, deepspeed, vector search, peft, langchain, azure, spark, kubernetes, tensorflow, real-time production systems, langgraph, kafka
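At its core, the RAG retrieval step this posting describes means embedding the query and candidate documents and keeping the most similar ones to ground the LLM's answer. A minimal, dependency-free sketch; the toy bag-of-words `embed` function is a stand-in assumption for a real embedding model, and production systems would use a vector database such as FAISS or Pinecone instead of a linear scan:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a real system would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Return the top-k documents by similarity to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "reset your password from the account settings page",
    "invoices are emailed on the first of every month",
    "password rules require at least twelve characters",
]
print(retrieve("how do I reset my password", docs, k=1))
```

The retrieved passages are then concatenated into the LLM prompt so the model answers from them rather than from its parametric memory alone.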

Posted 1 week ago

Apply

5.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Full Stack + AI/ML
Position: Full Stack + AI/ML
Experience: 2–5 Years
Location: Jaipur/Gurgaon
Type: Full-Time

About the Role:
We're looking for a Full Stack Developer with a strong foundation in AI/ML to help us build intelligent, scalable, and user-centric products. You'll work at the intersection of development and data, building web platforms and integrating machine learning solutions into real-world applications.

Key Responsibilities:
- Develop and maintain full-stack applications using React.js / Next.js and Python (Django/Flask)
- Integrate machine learning models into production-grade systems
- Collaborate with data scientists to build APIs for ML outputs
- Manage backend logic, RESTful APIs, and database architecture (SQL/MongoDB)
- Deploy scalable services using AWS / GCP / Azure
- Optimize application performance, data pipelines, and ML inference processes

Required Skills:
- Proficiency in JavaScript (React.js) and Python (Django or Flask)
- Hands-on experience with AI/ML frameworks (e.g., TensorFlow, PyTorch, scikit-learn)
- Understanding of REST APIs, microservices, and cloud deployment
- Familiarity with data structures, algorithm design, and model integration
- Exposure to DevOps tools, version control (Git), and CI/CD pipelines

Nice to Have:
- Experience working with OpenAI APIs, LangChain, or similar LLM frameworks
- Background in building analytics dashboards or AI-driven web apps
- Knowledge of containerization (Docker/Kubernetes)

Why Join Us?
- Work on high-impact AI projects from ideation to deployment
- Be part of a fast-growing, innovation-driven tech team
- Flexible work culture, ownership, and growth opportunities
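One common way to "optimize ML inference processes," as the posting puts it, is micro-batching: collecting incoming requests and running the model once per batch to amortize per-call overhead. A framework-free sketch under stated assumptions; `predict_batch` is a hypothetical stand-in for a real model call, and a production batcher would also flush on a timeout:

```python
def predict_batch(inputs):
    """Stand-in for a real model call; one batched call is cheaper than many single ones."""
    return [x * 2 for x in inputs]  # hypothetical 'model'

class MicroBatcher:
    """Collect requests and flush them through the model in one batched call."""

    def __init__(self, max_batch=4):
        self.max_batch = max_batch
        self.pending = []

    def submit(self, x):
        """Queue a request; returns batch results when the batch is full, else None."""
        self.pending.append(x)
        return self.flush() if len(self.pending) >= self.max_batch else None

    def flush(self):
        """Run the model on everything queued so far."""
        batch, self.pending = self.pending, []
        return predict_batch(batch)

b = MicroBatcher(max_batch=3)
assert b.submit(1) is None
assert b.submit(2) is None
print(b.submit(3))  # [2, 4, 6]
```

The same idea applies whether the model sits behind a Flask route or a gRPC endpoint: the batching layer sits between the request handler and the model.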

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: AI/ML Engineer
Location: Hyderabad (on-site)
Experience: 5+ years

Role Overview
We are hiring a mid-level AI/ML Engineer with 5+ years of experience in designing, developing, and deploying AI/ML solutions. The role involves integrating LLMs, Agentic AI, RAG pipelines, and anomaly detection into cloud/on-prem platforms, enabling natural language interfaces and intelligent automation. Candidates must be hands-on with the Python ML stack, own the MLOps lifecycle, and be capable of translating cybersecurity problems into scalable ML solutions.

Key Responsibilities
- LLM & Chatbot Integration: Build conversational AI using LLMs with context awareness, domain adaptation, and natural language interaction.
- Retrieval-Augmented Generation (RAG): Implement vector search with semantic retrieval to ground LLM responses in internal data.
- Agentic AI: Create autonomous agents that execute multi-step actions using APIs, tools, or reasoning chains for automated workflows.
- Anomaly Detection & UEBA: Develop ML models for user behaviour analytics, threat detection, and alert tuning.
- NLP & Insights Generation: Transform user queries into actionable security insights, reports, and policy recommendations.
- MLOps Ownership: Manage the end-to-end model lifecycle: training, validation, deployment, monitoring, and versioning.

Required Skills
- Strong Python experience with ML frameworks: TensorFlow, PyTorch, scikit-learn.
- Hands-on with LLMs (OpenAI, Hugging Face, etc.), prompt engineering, fine-tuning, and inference optimization.
- Experience implementing RAG using FAISS, Pinecone, or similar.
- Familiarity with LangChain, agentic frameworks, and multi-agent orchestration.
- Solid understanding of MLOps: Docker, CI/CD, deployment on cloud/on-prem infrastructure.
- Security-conscious development practices and the ability to work with structured/unstructured security data.

Preferred
- Bachelor's degree in Computer Science, preferably with a focus on Data Science or AI-related fields.
- Experience with cybersecurity use cases: CVE analysis, behaviour analytics, compliance, log processing.
- Knowledge of open-source LLMs (LLaMA, Mistral, etc.) and cost-efficient deployment methods.
- Background in chatbots, Rasa, or custom NLP-driven assistants.
- Exposure to agent tools (LangChain Agents, AutoGPT-style flows) and plugin integration.
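The UEBA-style anomaly detection this role mentions often starts from a simple statistical baseline: flag events that deviate strongly from a user's historical behaviour. A minimal z-score sketch; the login counts and threshold are illustrative assumptions, and production systems would use per-user baselines and more robust models:

```python
import math

def zscore_anomalies(values, threshold=3.0):
    """Return indices whose value lies more than `threshold` std devs from the mean."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n  # population variance
    std = math.sqrt(var)
    if std == 0:
        return []  # all values identical: nothing stands out
    return [i for i, v in enumerate(values) if abs(v - mean) / std > threshold]

# Hypothetical daily login counts for one user; the last day is a spike.
logins = [4, 5, 3, 4, 6, 5, 4, 90]
print(zscore_anomalies(logins, threshold=2.0))  # [7]
```

In an alert-tuning loop, the threshold would be calibrated against analyst feedback to trade false positives against missed detections.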

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies