5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Media Solution Developer – AI/ML & Automation Focus

Role Summary
We are seeking a technically strong Media Solution Developer to build AI-powered automation solutions that transform digital media operations. This role focuses on applying AI/ML, NLP, neural networks, and computer vision to automate processes such as campaign setup, QA, reporting, and billing. You will work closely with solution architects to bring intelligent designs to life, improving accuracy, efficiency, and scalability across media workflows. A media background is not required, but deep technical expertise is.

Key Responsibilities
- Design and implement AI/ML solutions that automate repetitive and manual tasks in media operations (e.g., campaign setup, anomaly detection in QA, taxonomy validation, asset analysis).
- Build and deploy models using machine learning, NLP, and computer vision to improve operational efficiency and decision-making.
- Develop intelligent automation systems and data pipelines in Python, and integrate them with external advertising platforms via APIs (e.g., Meta, DV360, YouTube).
- Collaborate with solution architects to convert business problems into scalable, production-ready ML automation solutions.
- Continuously optimize model and system performance, ensuring reliability and responsiveness in automated workflows.
- Maintain clean, well-documented code with strong adherence to testing, version control, and compliance standards.
- Contribute to the broader AI-driven automation strategy across media operations.

Ideal Profile
- 3–5 years of hands-on experience in machine learning, AI engineering, or data science roles, with a focus on automation.
- Strong skills in Python, with experience using ML frameworks such as TensorFlow, PyTorch, scikit-learn, and NLP libraries like spaCy or Hugging Face.
- Experience developing:
  - Automation pipelines using AI/ML to replace or optimize manual media tasks
  - NLP models for text classification, validation, or content tagging
  - Computer vision models for creative asset categorization or quality checks
- Proven ability to work with APIs and cloud ML platforms (e.g., Google Vertex AI, AWS SageMaker, Azure ML).
- Strong understanding of automation architecture and performance optimization in production environments.
- Ability to work in agile teams and collaborate closely with architects and business stakeholders.

Nice to Have
- Experience with MLOps (e.g., MLflow, Kubeflow) and deployment orchestration tools (e.g., Airflow, Docker, Kubernetes).
- Exposure to advertising or marketing tech (DSPs, Meta, Google Ads) is a plus, not mandatory.
- Familiarity with automation principles in RPA tools (e.g., UiPath) is a bonus, though the primary focus is AI-first automation.
- Exposure to media buying platforms or AdTech/MarTech ecosystems (DSPs, Meta, Google Marketing Platform).
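As a hedged illustration of the taxonomy-validation automation this posting describes, a minimal Python sketch might look like the following. The campaign-name pattern and field names are invented for the example; real media taxonomies vary per team.

```python
import re

# Hypothetical campaign-name taxonomy: BRAND_CHANNEL_REGION_YYYYQn
# e.g. "ACME_META_IN_2024Q3". Real taxonomies differ per media team.
TAXONOMY = re.compile(
    r"^(?P<brand>[A-Z0-9]+)_"
    r"(?P<channel>META|DV360|YOUTUBE)_"
    r"(?P<region>[A-Z]{2})_"
    r"(?P<period>\d{4}Q[1-4])$"
)

def validate_campaign_name(name: str) -> dict:
    """Return parsed taxonomy fields if valid, else an error record."""
    match = TAXONOMY.match(name)
    if match:
        return {"valid": True, **match.groupdict()}
    return {"valid": False,
            "error": f"'{name}' does not match BRAND_CHANNEL_REGION_YYYYQn"}

print(validate_campaign_name("ACME_META_IN_2024Q3"))
print(validate_campaign_name("acme-meta-in"))
```

In practice this kind of rule-based check would be one stage of a QA pipeline, alongside ML-based anomaly detection on the parsed fields.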
Posted 1 month ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
At Airtel, we're not just scaling connectivity, we're redefining how India experiences digital services. With 400M+ customers across telecom, financial services, and entertainment, our impact is massive. But behind every experience is an opportunity to make it smarter. We're looking for a Product Manager – AI to drive next-gen intelligence for our customers and business. AI is a transformational technology, and we are looking for skilled product managers who will work on leveraging AI to power everything from our digital platforms to customer experience. You'll work at the intersection of machine learning, product design, and systems thinking to deliver AI-driven products that create tangible business impact, fast.

What You'll Do
- Lead and contribute to AI-powered product strategy: Define product vision and strategy for AI-led initiatives that enhance productivity, automate decisions, and personalise user interactions across Airtel platforms.
- Translate business problems into AI opportunities: Partner with operations, engineering, and data science to surface high-leverage AI use cases across workforce management, customer experience, and process automation.
- Build and scale ML-driven products: Define data product requirements, work closely with ML engineers to develop models, and integrate intelligent workflows that continuously learn and adapt.
- Own product execution end-to-end: Drive roadmaps, lead cross-functional teams, launch MVPs, iterate based on real-world feedback, and scale solutions with measurable ROI.

What You Need to Be Successful
- Influential communication: Craft clarity from complexity. You can tailor messages for execs, engineers, and field teams alike, translating AI into business value.
- Strategic prioritisation: Balance business urgency with technical feasibility. You can decide what not to build, and defend those decisions with data and a narrative.
- Systems thinking: You can see the big picture, how decisions in one area ripple across the business, tech stack, and user experience.
- High ownership and accountability: Operate with a founder mindset. You don't wait for direction; you rally teams, remove blockers, deal with tough stakeholders, and drive outcomes.
- Adaptability: You thrive in ambiguity and pivot quickly without losing sight of long-term vision, key in fast-moving digital organizations.

Skills You'll Need

AI/ML Fundamentals
- Understanding of ML model types: supervised, unsupervised, reinforcement learning
- Common algorithms: linear/logistic regression, decision trees, clustering, neural networks
- Model lifecycle: training, validation, testing, tuning, deployment, monitoring
- Understanding of LLMs, transformers, diffusion models, vector search, etc.
- Familiarity with GenAI product architecture: retrieval-augmented generation (RAG), prompt tuning, fine-tuning
- Awareness of real-time personalization, recommendation systems, ranking algorithms, etc.

Data Fluency
- Understanding of data pipelines
- Working knowledge of SQL and Python for analysis
- Understanding of data annotation, labeling, and versioning
- Ability to define data requirements and assess data readiness

AI Product Development
- Defining ML problem scope: classification vs. regression vs. ranking vs. generation
- Model evaluation metrics: precision, recall, etc.
- A/B testing and online experimentation for ML-driven experiences

ML Infrastructure Awareness
- Know what it takes to ship and operate models in production
- Model deployment techniques: batch vs. real-time inference, APIs, model serving
- Monitoring and drift detection: how to ensure models continue performing over time
- Familiarity with ML platforms/tools, at a product level: TensorFlow, PyTorch, Hugging Face, Vertex AI, SageMaker, etc.
- Understanding latency, cost, and resource implications of ML choices

AI Ethics & Safety
We care deeply about our customers, their privacy, and compliance with regulation.
- Bias and fairness in models: how to detect and mitigate them
- Explainability and transparency: importance for user trust and regulation
- Privacy and security: understanding implications of sensitive or PII data in AI
- Alignment and guardrails in generative AI systems

Preferred Qualifications
- Experienced Machine Learning/Artificial Intelligence PMs
- Experience building 0-1 products, scaled platforms/ecosystem products, or ecommerce
- Bachelor's degree in Computer Science, Engineering, Information Systems, Analytics, or Mathematics
- Master's degree in Business

Why Airtel Digital?
- Massive scale: your products will impact 400M+ users across sectors
- Real-world relevance: solve meaningful problems for our customers, including spam and fraud prevention, personalised experiences, and connecting homes
- Agility meets ambition: work like a startup with the resources of a telecom giant
- AI that ships: we don't just run experiments; we deploy models and measure real-world outcomes
- Leadership access: collaborate closely with CXOs and gain mentorship from India's top product and tech leaders
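The model-evaluation metrics named above (precision, recall) take only a few lines to compute. This is a generic illustration of the definitions, not anything specific to Airtel's systems:

```python
def precision_recall(y_true, y_pred):
    """Compute precision and recall for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # of flagged, how many right
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # of actual, how many caught
    return precision, recall

# 4 predictions: 2 true positives, 1 false positive, 1 false negative
p, r = precision_recall([1, 1, 0, 1], [1, 1, 1, 0])
print(p, r)  # precision = 2/3, recall = 2/3
```

In a spam- or fraud-detection context, the precision/recall trade-off is exactly the kind of product decision this role would own.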
Posted 1 month ago
0.0 years
0 Lacs
Hyderabad, Telangana
On-site
It's fun to work in a company where people truly BELIEVE in what they're doing! We're committed to bringing passion and customer focus to the business.

Job Description
AI Quality Assurance Lead (Evaluation & Testing)
This role requires working from our local Hyderabad office 2-3x a week.
Location: Hyderabad, Telangana, India

ABOUT THE TEAM
The Generative AI Quality & Safety team owns ABC Fitness's evaluation frameworks, testing pipelines, and compliance tooling for AI-driven features. We partner with product, engineering, and legal teams to ensure every LLM interaction meets rigorous standards for accuracy, safety, and performance. As our AI Quality Assurance Lead, you'll architect hybrid (automated + human) testing systems, define GenAI quality KPIs, and embed Responsible AI principles across our fitness-tech platform.

At ABC Fitness, we love entrepreneurs because we are entrepreneurs. We know how much grit it takes to start your own business and grow it into something that lasts. We roll our sleeves up, we act fast, and we learn together.

WHAT YOU'LL DO
- Design and deploy evaluation pipelines for generative AI systems using tools like OpenAI Evals, Promptfoo, and custom test harnesses.
- Develop hallucination detection workflows and bias-analysis frameworks for LLM outputs across 10+ languages.
- Partner with AI researchers to translate model capabilities into testable requirements for product teams.
- Implement CI/CD-integrated regression testing for AI microservices on AWS/Azure, monitoring model drift and performance degradation.
- Lead bug triage sessions, prioritizing issues impacting user trust, legal compliance, or revenue.
- Document QA protocols, failure modes, and root-cause analyses in our internal knowledge base.

WHAT YOU'LL NEED
- 7+ years in QA/testing roles, with 3+ years focused on AI/ML systems (LLMs, recommendation engines, or conversational AI).
- Hands-on experience with GenAI evaluation tools (LangSmith, Weights & Biases) and statistical analysis (Python, SQL).
- Proficiency in cloud platforms (AWS SageMaker, Azure ML) and containerized testing environments (Docker, Kubernetes).
- Deep understanding of Responsible AI principles (fairness, transparency, privacy) and adversarial testing methodologies.
- Ability to mentor junior engineers and communicate technical risks to non-technical stakeholders.
- Certifications like AWS Certified Machine Learning Specialty or Microsoft AI Engineer are a plus.

WHAT'S IN IT FOR YOU
- Purpose-led company with a values-focused culture: Best Life, One Team, Growth Mindset
- Time off: competitive PTO plans with 15 days of accrued earned leave, 12 days of sick leave, and 12 days of casual leave per year
- 11 holidays plus 4 Days of Disconnect: once a quarter, we take a collective breather and enjoy a day off together around the globe. #oneteam
- Group Mediclaim insurance coverage of INR 500,000 for employee + spouse, 2 kids, and parents or parents-in-law, including EAP counseling
- Life Insurance and Personal Accident Insurance
- Best Life Perk: we are committed to meeting you wherever you are in your fitness journey, with a quarterly reimbursement
- Premium Calm App: enjoy tranquility with a Calm App subscription for you and up to 4 dependents over the age of 16
- Support for working women, with financial aid towards a crèche facility, ensuring a safe and nurturing environment for their little ones while they focus on their careers

We're committed to diversity and passion, and encourage you to apply even if you don't demonstrate all the listed skillsets!

ABC'S COMMITMENT TO DIVERSITY, EQUALITY, BELONGING AND INCLUSION
ABC is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We are intentional about creating an environment where employees, our clients, and other stakeholders feel valued and inspired to reach their full potential and make authentic connections. We foster a workplace culture that embraces each person's diversity, including the extent to which they are similar or different. ABC leaders believe that an equitable and inclusive culture is not only the right thing to do, it is a business imperative. Read more about our commitment to diversity, equality, belonging and inclusion at abcfitness.com

ABOUT ABC
ABC Fitness (abcfitness.com) is the premier provider of software and related services for the fitness industry and has built a reputation for excellence in support for clubs and their members. ABC is the trusted provider to boost performance and create a total fitness experience for over 41 million members of clubs of all sizes, whether a multi-location chain, franchise, or an independent gym. Founded in 1981, ABC helps over 31,000 gyms and health clubs globally perform better and more profitably, offering a comprehensive SaaS club management solution that enables club operators to achieve optimal performance. ABC Fitness is a Thoma Bravo portfolio company, a private equity firm focused on investing in software and technology companies (thomabravo.com). #LI-HYBRID

If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!
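As a hedged sketch of the custom evaluation harnesses this role would build, the skeleton below scores model outputs against required substrings. The stub model, test cases, and scoring rule are all invented for illustration; a real harness would call an LLM and use richer graders:

```python
# Minimal regression-style eval harness for LLM outputs.
# `fake_model` stands in for a real LLM call; cases and scoring
# (required substrings) are illustrative only.

def fake_model(prompt: str) -> str:
    canned = {
        "What year was the company founded?": "It was founded in 1981.",
        "How many members are served?": "Over 41 million members.",
    }
    return canned.get(prompt, "I'm not sure.")

EVAL_CASES = [
    {"prompt": "What year was the company founded?", "must_contain": ["1981"]},
    {"prompt": "How many members are served?", "must_contain": ["41 million"]},
]

def run_evals(model, cases):
    """Score each case pass/fail on whether all required substrings appear."""
    results = []
    for case in cases:
        output = model(case["prompt"])
        passed = all(s in output for s in case["must_contain"])
        results.append({"prompt": case["prompt"], "passed": passed})
    pass_rate = sum(r["passed"] for r in results) / len(results)
    return results, pass_rate

results, pass_rate = run_evals(fake_model, EVAL_CASES)
print(f"pass rate: {pass_rate:.0%}")
```

Wired into CI/CD, a falling pass rate on a fixed case set is one simple signal of the model drift and regression the posting mentions.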
Posted 1 month ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Total yrs of exp: 7+ yrs
Location: Balewadi, Pune
Notice period: Immediate to 30 days only

Responsibilities
- Overall 6+ years of experience, of which 5+ in AI, ML, Gen AI, and related technologies
- Proven track record of leading and scaling AI/ML teams and initiatives
- Strong understanding and hands-on experience in AI, ML, Deep Learning, and Generative AI concepts and applications
- Expertise in ML frameworks such as PyTorch and/or TensorFlow
- Experience with ONNX Runtime, model optimization, and hyperparameter tuning
- Solid experience of DevOps, SDLC, CI/CD, and MLOps practices. DevOps/MLOps tech stack: Docker, Kubernetes, Jenkins, Git, CI/CD, RabbitMQ, Kafka, Spark, Terraform, Ansible, Prometheus, Grafana, ELK stack
- Experience in production-level deployment of AI models at enterprise scale
- Proficiency in data preprocessing, feature engineering, and large-scale data handling
- Expertise in image and video processing, object detection, image segmentation, and related CV tasks
- Proficiency in text analysis, sentiment analysis, language modeling, and other NLP applications
- Experience with speech recognition, audio classification, and general signal processing techniques
- Experience with RAG, VectorDB, GraphDB, and Knowledge Graphs
- Extensive experience with major cloud platforms (AWS, Azure, GCP) for AI/ML deployments
- Proficiency in using and integrating cloud-based AI services and tools (e.g., AWS SageMaker, Azure ML, Google Cloud AI)

Required Skills
- Strong leadership and team management skills
- Excellent verbal and written communication skills
- Strategic thinking and problem-solving abilities
- Adaptability to the rapidly evolving AI/ML landscape
- Strong collaboration and interpersonal skills
- Strong understanding of industry dynamics and ability to translate market needs into technological solutions
- Demonstrated ability to foster a culture of innovation and creative problem-solving
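As a hedged sketch of the retrieval step in the RAG systems this role requires, the example below uses bag-of-words vectors and cosine similarity as stand-ins for a real embedding model and vector database (both hypothetical substitutions):

```python
import math
from collections import Counter

# Toy corpus standing in for documents indexed in a vector DB.
DOCS = [
    "ONNX Runtime speeds up model inference after export",
    "Kafka and Spark move large volumes of event data",
    "Image segmentation splits a picture into labeled regions",
]

def embed(text: str) -> Counter:
    """Stand-in embedding: a bag-of-words term-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(DOCS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

print(retrieve("how to segment an image into regions"))
```

In a production pipeline the retrieved passages would be appended to the LLM prompt; the retrieval interface (embed, score, top-k) stays the same shape.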
Posted 1 month ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Responsibilities
- Overall 4+ years of experience, of which 3+ in AI, ML, Gen AI, and related technologies
- Proven track record of leading and scaling AI/ML teams and initiatives
- Strong understanding and hands-on experience in AI, ML, Deep Learning, and Generative AI concepts and applications
- Expertise in ML frameworks such as PyTorch and/or TensorFlow
- Experience with ONNX Runtime, model optimization, and hyperparameter tuning
- Solid experience of DevOps, SDLC, CI/CD, and MLOps practices. DevOps/MLOps tech stack: Docker, Kubernetes, Jenkins, Git, CI/CD, RabbitMQ, Kafka, Spark, Terraform, Ansible, Prometheus, Grafana, ELK stack
- Experience in production-level deployment of AI models at enterprise scale
- Proficiency in data preprocessing, feature engineering, and large-scale data handling
- Expertise in image and video processing, object detection, image segmentation, and related CV tasks
- Proficiency in text analysis, sentiment analysis, language modeling, and other NLP applications
- Experience with speech recognition, audio classification, and general signal processing techniques
- Experience with RAG, VectorDB, GraphDB, and Knowledge Graphs
- Extensive experience with major cloud platforms (AWS, Azure, GCP) for AI/ML deployments
- Proficiency in using and integrating cloud-based AI services and tools (e.g., AWS SageMaker, Azure ML, Google Cloud AI)

Required Skills
- Strong leadership and team management skills
- Excellent verbal and written communication skills
- Strategic thinking and problem-solving abilities
- Adaptability to the rapidly evolving AI/ML landscape
- Strong collaboration and interpersonal skills
- Strong understanding of industry dynamics and ability to translate market needs into technological solutions
- Demonstrated ability to foster a culture of innovation and creative problem-solving
Posted 1 month ago
3.0 years
0 Lacs
Kochi, Kerala, India
On-site
Job Title: Generative AI Developer
Location:
Employment Type: Full-Time

Job Overview
We are seeking a Generative AI Developer with expertise in AI/ML, deep learning, and large language models (LLMs). The ideal candidate will design, develop, and deploy cutting-edge AI-powered applications, leveraging NLP, computer vision, and generative AI frameworks. If you are passionate about AI and want to work on transformative AI solutions, this role is for you!

Key Responsibilities
- Design, build, and optimize Generative AI models for text, image, and multimodal applications.
- Work with LLMs, transformers, and diffusion models to create innovative AI-powered solutions.
- Implement and fine-tune models using frameworks like TensorFlow, PyTorch, and Hugging Face.
- Develop and integrate AI models into scalable cloud-based applications (AWS, Azure, GCP).
- Research and apply cutting-edge AI techniques to enhance model performance and efficiency.
- Collaborate with cross-functional teams to deploy AI solutions in real-world applications.
- Optimize models for efficiency, accuracy, and low latency in production environments.

Required Skills & Experience
- 3+ years of experience in AI/ML development, with a focus on Generative AI.
- Proficiency in Python and deep learning frameworks (PyTorch, TensorFlow).
- Hands-on experience with transformers, LLMs, and generative models (GPT, BERT, Stable Diffusion).
- Strong understanding of NLP, computer vision, and deep learning architectures.
- Experience with cloud-based AI services (AWS SageMaker, Azure AI, Google Vertex AI).
- Knowledge of MLOps, model deployment, and optimization techniques.

Nice to Have
- Experience with fine-tuning LLMs and prompt engineering.
- Familiarity with Reinforcement Learning from Human Feedback (RLHF).
- Exposure to vector databases (FAISS, Pinecone, ChromaDB) for retrieval-augmented generation (RAG).
- AI certifications (AWS Machine Learning, Google AI, etc.).

Why Join Us?
- Work on state-of-the-art AI innovations.
- Collaborate with industry-leading AI experts.
- Opportunities for continuous learning and growth.
(ref:hirist.tech)
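One low-level mechanism behind the LLM text generation this role centers on is temperature-scaled softmax sampling over a model's output logits. The sketch below is a generic illustration; the vocabulary and logit values are made up:

```python
import math
import random

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw logits into a probability distribution.
    Lower temperature sharpens the distribution; higher flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits for a tiny vocabulary.
vocab = ["run", "walk", "sleep"]
logits = [2.0, 1.0, 0.1]

probs = softmax_with_temperature(logits, temperature=0.7)
token = random.choices(vocab, weights=probs, k=1)[0]
print(dict(zip(vocab, (round(p, 3) for p in probs))), "->", token)
```

Tuning temperature is one of the knobs behind the "optimize for efficiency and accuracy" responsibility: low values make outputs more deterministic, high values more diverse.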
Posted 1 month ago
0 years
0 Lacs
India
Remote
This Is No Ordinary Role. This Is No Ordinary Code.
This is for individuals eager to make their mark, those who've built innovative LLM agents, engineered seamless multi-agent systems, and are driven to redefine what's possible in an AI-first, high-growth environment.

What if YOU could develop and fine-tune large language model (LLM) agents that transform entire businesses? What if you could craft next-generation LLM agent solutions using AWS technologies like SageMaker, Bedrock, Lambda, Step Functions, DynamoDB, and Kinesis? Leverage these tools for real-time data insights, enabling seamless inter-agent communication, state transfer, and memory handling, all while collaborating with a team driven by bold risks, creativity, and user-centric innovation. Shape the future of AI with scalable, serverless, multi-agent systems?

Sounds exciting? Maybe you're ready to leap in with both feet. But before you do, ask yourself:
- Are you truly an innovator, unafraid to break apart old assumptions and design AI solutions that challenge the norm?
- Are you user-obsessed, building experiences that delight while driving business growth?
- Can you think differently, and code differently, using the latest research in NLP and generative AI to amplify results?
- Do you embrace calculated risks, learning and adapting from each iteration?
- Are you hungry to leave a legacy, not just for one product, but for a growing ecosystem of AI-powered transformations?

About Us
We're a forward-thinking team on a mission to reshape how businesses leverage AI. Our approach is laser-focused on creating transformative NLP solutions, harnessing the power of LLMs, and optimizing them with AWS SageMaker, Bedrock, and beyond. We don't just design models, we build experiences and ecosystems that redefine user interactions and business processes.

Why This Role Is Different

Design Your Impact
Don't just fine-tune a model, own the entire development stack. Conceptualize, build, deploy, and continuously improve LLM agents that tackle real-world challenges and drive measurable outcomes. Leverage AWS services such as containers, Lambda, and Bedrock to create scalable, serverless solutions, enabling agents to operate with precision and reliability.

Collaborate to Innovate
Work side-by-side with a forward-thinking leadership team that values experimentation, continuous learning, and radical transparency. Collaborate with cross-functional teams to integrate LLM agents into AWS-driven infrastructures, fostering innovative approaches to problem-solving while maintaining a customer-first perspective. We want your bold ideas and your willingness to challenge our assumptions.

Build Across Ecosystems
Design and deploy LLM agents that scale across business lines, leveraging AWS tools like API Gateway, Step Functions, and DynamoDB for seamless integration into diverse products and services.

Leverage AI First
Utilize cutting-edge generative models, AWS AI/ML services, and AI-powered IDEs to develop and deploy LLM agents, optimizing performance, scalability, and real-time application outcomes.

Leave a Legacy
Influence not just the technology stack, but the very way people interact with AI. You'll help craft solutions that endure, shaping the next wave of agent-based design across multiple verticals.
Requirements

Proven LLM & Agent Development
- Agent building expertise: hands-on experience in developing and deploying LLM agents with advanced features like tool integration, contextual memory systems, and feedback-driven learning using a custom agent framework
- Multi-agent solutions: demonstrated expertise in designing and implementing multi-agent systems, enabling agents to collaborate and solve complex tasks efficiently
- Inter-agent communication: experience in building robust inter-agent communication protocols to ensure seamless coordination between multiple agents
- State management: skilled in designing systems for state transfer and context preservation across multiple agents, ensuring consistent and accurate task execution
- AWS Bedrock experience: experience leveraging AWS Bedrock and related AWS services to build efficient and scalable solutions
- Containerized deployments: familiarity with deploying agents in Docker containerized environments and adhering to best practices

Innovative Problem-Solving
- Research orientation: keeps up with advancements in LLMs, experimenting with and implementing new techniques to solve real-world challenges
- Adaptability: thrives in a fast-paced environment by learning new methods and unlearning outdated practices

AWS Serverless Scalable Design
- Serverless proficiency: proficient in AWS services such as Lambda, Step Functions, DynamoDB, S3, and Kinesis to build an event-driven solution stack
- CI/CD practices: experience in maintaining CI/CD pipelines using AWS tools like CodePipeline, CodeBuild, and CodeDeploy; familiarity with Terraform is a plus
- Cost monitoring & optimization: experience in monitoring and optimizing deployment costs, with a focus on identifying inefficiencies and implementing resource-saving strategies

Culture Fit
- Bold risk-taking: you thrive in an environment that encourages calculated experimentation and embraces occasional failures as opportunities to learn
- Team collaboration: you communicate effortlessly with diverse teams and client stakeholders, championing clarity, empathy, and knowledge-sharing

Benefits
- Health insurance, PTO, and leave time
- Ongoing paid professional training and certifications
- Fully remote work opportunity
- Strong onboarding & training program
- Work timings: 1 pm - 10 pm IST

Next Steps
We're looking for someone who already embodies the spirit of a boundary-breaking AI architect, someone who's ready to own ambitious projects and push the boundaries of what LLMs can do.
- Apply now: send us your resume and answer a few key questions about your experience and vision
- Show us your ingenuity: be prepared to talk shop on your boldest AI solutions and how you overcame the toughest technical hurdles
- Collaborate & ideate: if selected, you'll workshop a real-world scenario with our team, so we can see firsthand how your mind works

This is your chance to leave a mark on the future of AI, one LLM agent at a time. We're excited to hear from you!

Our Belief
We believe extraordinary things happen when technology and human creativity unite. By empowering teams with generative AI, we free them to focus on meaningful relationships, innovative solutions, and real impact. It's more than just code, it's about sparking a revolution in how people interact with information, solve problems, and propel businesses forward. If this resonates with you, if you're driven, daring, and ready to build the next wave of AI innovation, then let's do this. Apply now and help us shape the future.
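As a hedged, toy illustration of the inter-agent communication and state transfer described above: each agent receives the task plus the shared state left by earlier agents. The agent names, handler logic, and pipeline shape are invented for the example; a real system would sit on queues, Lambda, and DynamoDB rather than in-process calls.

```python
# Toy multi-agent pipeline with explicit state transfer between agents.

class Agent:
    def __init__(self, name, handler):
        self.name = name
        self.handler = handler  # (task, state) -> (result, state)

    def handle(self, task, state):
        return self.handler(task, state)

def research(task, state):
    # First agent writes its findings into the shared state.
    state["notes"] = f"facts about {task}"
    return "researched", state

def summarize(task, state):
    # Second agent reads the carried-over state, not the raw task.
    summary = f"summary of {state['notes']}"
    state["summary"] = summary
    return summary, state

def run_pipeline(task, agents):
    """Pass the task through each agent in order, carrying state forward."""
    state, result = {}, None
    for agent in agents:
        result, state = agent.handle(task, state)
    return result, state

agents = [Agent("researcher", research), Agent("summarizer", summarize)]
result, state = run_pipeline("serverless agents", agents)
print(result)  # summary of facts about serverless agents
```

The key design point is that context preservation is explicit: whatever one agent needs from another must be written into the transferred state, which is what makes the hand-off auditable and resumable.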
About Expedite Commerce
At Expedite Commerce, we believe that people achieve their best when technology enables them to build relationships and explore new ideas. So we build systems that free you up to focus on your customers and drive innovations. We have a great commerce platform that changes the way you do business! See more about us at expeditecommerce.com. You can also read about us on https://www.g2.com/products/expedite-commerce/reviews, and on Salesforce AppExchange/ExpediteCommerce.

EEO Statement
All qualified applicants to Expedite Commerce are considered for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other protected characteristic.
Posted 1 month ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Experience: 4 to 9 years
Company: ACL Digital
Location: Pune, Balewadi
Work Mode: Hybrid
Looking for immediate joiners.

Should have experience in NLP, Gen AI, any cloud (AWS or Azure), Computer Vision, Python, and Deep Learning. Excellent verbal and written communication skills.

Must Have
Overall 4+ years of experience, of which 3+ in AI, ML, Gen AI, and related technologies
• Proven track record of leading and scaling AI/ML teams and initiatives.
• Strong understanding and hands-on experience in AI, ML, Deep Learning, and Generative AI concepts and applications.
• Expertise in ML frameworks such as PyTorch and/or TensorFlow.
• Experience with ONNX Runtime, model optimization, and hyperparameter tuning.
• Solid experience of DevOps, SDLC, CI/CD, and MLOps practices. DevOps/MLOps tech stack: Docker, Kubernetes, Jenkins, Git, CI/CD, RabbitMQ, Kafka, Spark, Terraform, Ansible, Prometheus, Grafana, ELK stack.
• Experience in production-level deployment of AI models at enterprise scale.
• Proficiency in data preprocessing, feature engineering, and large-scale data handling.
• Expertise in image and video processing, object detection, image segmentation, and related CV tasks.
• Proficiency in text analysis, sentiment analysis, language modeling, and other NLP applications.
• Experience with speech recognition, audio classification, and general signal processing techniques.
• Experience with RAG, VectorDB, GraphDB, and Knowledge Graphs.
• Extensive experience with major cloud platforms (AWS, Azure, GCP) for AI/ML deployments.
• Proficiency in using and integrating cloud-based AI services and tools (e.g., AWS SageMaker, Azure ML, Google Cloud AI).

Soft Skills
• Strong leadership and team management skills.
• Strategic thinking and problem-solving abilities.
• Adaptability to the rapidly evolving AI/ML landscape.
• Strong collaboration and interpersonal skills.
• Strong understanding of industry dynamics and ability to translate market needs into technological solutions.
• Demonstrated ability to foster a culture of innovation and creative problem-solving.
Posted 1 month ago
6.0 - 11.0 years
20 - 30 Lacs
Chennai, Bengaluru
Work from Office
We are seeking an AWS AI/ML Expert with 5 to 6 years of hands-on experience in designing, developing, and deploying artificial intelligence and machine learning solutions on the AWS cloud platform. The ideal candidate will have strong proficiency in AWS AI services.
Posted 1 month ago
5.0 - 7.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Ready to shape the future of work At Genpact, we don&rsquot just adapt to change&mdashwe drive it. AI and digital innovation are redefining industries, and we&rsquore leading the charge. Genpact&rsquos , our industry-first accelerator, is an example of how we&rsquore scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to , our breakthrough solutions tackle companies most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that&rsquos shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions - we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation , our teams implement data, technology, and AI to create tomorrow, today. Get to know us at and on , , , and . Inviting applications for the role of Lead Consultant - Cloud Engineer! In this role, you will be responsible for designing, provisioning, and securing scalable cloud infrastructure to support AI/ML and Generative AI workloads. A key focus will be ensuring high availability, cost efficiency, and performance optimization of infrastructure through best practices in architecture and automation. Responsibilities Design and implement secure VPC architecture, subnets, NAT gateways, and route tables. Build and maintain IAC modules for repeatable infrastructure provisioning. Build CI/CD pipelines that support secure, auto-scalable AI deployments using GitHub Actions, AWS CodePipeline , and Lambda triggers. Monitor and tune infrastructure health using AWS CloudWatch, GuardDuty , and custom alerting. Track and optimize cloud spend using AWS Cost Explorer, Trusted Advisor, and usage dashboards. 
Deploy and manage cloud-native services including SageMaker, Lambda, ECR, API Gateway, etc. Implement IAM policies, Secrets Manager, and KMS encryption for secure deployments. Enable logging and monitoring using CloudWatch and configure alerts and dashboards. Set up and manage CloudTrail, GuardDuty, and AWS Config for audit and security compliance. Assist with cost optimization strategies including usage analysis and budget alerting. Support multi-cloud or hybrid integration patterns (e.g., data exchange between AWS and Azure/GCP). Collaborate with MLOps and Data Science teams to translate ML/GenAI requirements into production-grade, resilient AWS environments. Maintain multi-cloud compatibility as needed (e.g., data egress readiness, common abstraction layers). Engage in the design, development, and maintenance of data pipelines for various AI use cases. Actively contribute to key deliverables as part of an agile development team. Collaborate with others to source, analyse, test, and deploy data processes. Qualifications we seek in you! Minimum Qualifications Several years of hands-on AWS infrastructure experience in production environments. Degree/qualification in Computer Science or a related field, or equivalent work experience. Proficiency in Terraform, AWS CLI, and Python or Bash scripting. Strong knowledge of IAM, VPC, ECS/EKS, Lambda, and serverless computing. Experience supporting AI/ML or GenAI pipelines in AWS (especially for compute and networking). Hands-on experience with multiple AI/ML/RAG/LLM workloads and model deployment infrastructure. Exposure to multi-cloud architecture basics (e.g., SSO, networking, blob exchange, shared VPC setups). AWS Certified DevOps Engineer or Solutions Architect - Associate/Professional. Experience in developing, testing, and deploying data pipelines using public cloud.
Clear and effective communication skills to interact with team members, stakeholders, and end users Preferred Qualifications/ Skills Experience deploying infrastructure in both AWS and another major cloud provider (Azure or GCP). Familiarity with multi-cloud tools (e.g., HashiCorp Vault, Kubernetes with cross-cloud clusters). Strong understanding of DevSecOps best practices and compliance requirements. Exposure to RAG/LLM workloads and model deployment infrastructure. Knowledge of governance and compliance policies, standards, and procedures Why join Genpact Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation Make an impact - Drive change for global enterprises and solve business challenges that matter Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture - Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way.
Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
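The VPC design work this Cloud Engineer role describes, carving a CIDR block into public and private subnets before encoding the layout in Terraform/IaC modules, can be sketched with only the Python standard library. The `plan_subnets` helper and the alternating public/private split are illustrative assumptions, not part of the listing.

```python
import ipaddress

def plan_subnets(vpc_cidr: str, bits: int = 2):
    """Split a VPC CIDR into equal subnets, alternating public/private tiers.

    Hypothetical planning helper: real provisioning would feed these CIDRs
    into Terraform or CloudFormation, not create anything directly.
    """
    vpc = ipaddress.ip_network(vpc_cidr)
    plan = []
    for i, net in enumerate(vpc.subnets(prefixlen_diff=bits)):
        tier = "public" if i % 2 == 0 else "private"
        plan.append({"name": f"{tier}-{i // 2}", "cidr": str(net), "tier": tier})
    return plan
```

For example, `plan_subnets("10.0.0.0/16")` yields four /18 subnets, two per tier, ready to pair with NAT gateways and route tables.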
Posted 1 month ago
9.0 - 12.0 years
16 - 25 Lacs
Hyderabad
Work from Office
Strong knowledge of Python, R, and ML frameworks such as scikit-learn, TensorFlow, and PyTorch. Experience with cloud ML platforms: SageMaker, Azure ML, Vertex AI. LLM experience, such as with GPT. Hands-on experience with data wrangling, feature engineering, and model optimization. Experience developing model wrappers. Deep understanding of algorithms including regression, classification, clustering, NLP, and deep learning. Familiarity with MLOps tools like MLflow, Kubeflow, or Airflow.
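To ground the feature-engineering skills this listing asks for, here is a minimal, dependency-free sketch of two common steps; in practice scikit-learn's `MinMaxScaler` and `OneHotEncoder` would be the tools of choice.

```python
def min_max_scale(values):
    """Rescale numeric values to [0, 1]; a constant column maps to 0.0."""
    lo, hi = min(values), max(values)
    span = hi - lo
    return [0.0 if span == 0 else (v - lo) / span for v in values]

def one_hot(values):
    """Encode categories as 0/1 indicator vectors, one slot per sorted category."""
    cats = sorted(set(values))
    return [[1 if v == c else 0 for c in cats] for v in values]
```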
Posted 1 month ago
7.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Location: Hyderabad, Experience: 7-9 Years Employment Type: Full-time Company Description At Antz AI, we lead the AI revolution with a focus on AI Agentic Solutions. Our mission is to integrate intelligent AI Agents and Human-Centric AI solutions into core business processes, driving innovation, efficiency, and growth. Our consulting is grounded in robust data centralization, resulting in significant boosts in decision-making speed and reductions in operational costs. Through strategic AI initiatives, we empower people to achieve more meaningful and productive work. Role Summary: We are seeking a highly experienced and dynamic Senior AI Consultant with 7-9 years of overall experience to join our rapidly growing team. The ideal candidate will be a hands-on technologist with a proven track record of designing, developing, and deploying robust AI solutions, particularly leveraging agentic frameworks. This role demands a blend of deep technical expertise, strong system design capabilities, and excellent customer-facing skills to deliver impactful real-world products. We are looking for an immediate joiner who can hit the ground running. Key Responsibilities: Solution Design & Architecture: Lead the design and architecture of scalable, high-performance AI solutions, emphasizing agentic frameworks (e.g., Agno, Langgraph) and microservices architectures. Hands-on Development: Develop, implement, and optimize AI models, agents, and supporting infrastructure. Write clean, efficient, and well-documented code, adhering to software engineering best practices. Deployment & Operations: Oversee the deployment of AI solutions into production environments, primarily utilizing AWS services. Implement and maintain CI/CD pipelines to ensure seamless and reliable deployments. System Integration: Integrate AI solutions with existing enterprise systems and data sources, ensuring robust data flow and interoperability. 
Customer Engagement: Act as a key technical liaison with clients, understanding their business challenges, proposing AI-driven solutions, and presenting technical concepts clearly and concisely. Best Practices & Quality: Champion and enforce best practices in coding, testing, security, and MLOps to ensure the delivery of high-quality, maintainable, and scalable solutions. Problem Solving: Diagnose and resolve complex technical issues related to AI model performance, infrastructure, and integration. Mentorship: Provide technical guidance and mentorship to junior team members, fostering a culture of continuous learning and excellence. Required Qualifications: Experience: 7-9 years of overall experience in software development, AI engineering, or machine learning, with a strong focus on deploying production-grade solutions. Agentic Frameworks: Demonstrated hands-on experience with agentic frameworks such as Langchain, Langgraph, Agno, AutoGen , or similar, for building complex AI workflows and autonomous agents. Microservices Architecture: Extensive experience in designing, developing, and deploying solutions based on microservices architectures. Cloud Platforms: Proven expertise in AWS services relevant to AI/ML and microservices (e.g., EC2, S3, Lambda, ECS/EKS, SageMaker, DynamoDB, API Gateway, SQS/SNS). Programming & MLOps: Strong proficiency in Python. Experience with MLOps practices, including model versioning, monitoring, and pipeline automation. System Design: Excellent understanding and practical experience in system design principles, scalability, reliability, and security. Real-World Deployment: A strong portfolio demonstrating successful deployment of AI products or solutions in real-world, production environments. Customer-Facing: Prior experience in customer-facing roles, with the ability to articulate complex technical concepts to non-technical stakeholders and gather requirements effectively. Immediate Availability: Ability to join immediately. 
Preferred Qualifications / Bonus Points: Experience with other cloud platforms (Azure, GCP). Knowledge of containerization technologies (Docker, Kubernetes). Familiarity with various machine learning domains (NLP, Computer Vision, Generative AI). Contributions to open-source AI projects.
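As a rough illustration of the agentic pattern behind the frameworks this role names (Agno, LangGraph, AutoGen), the core loop stripped of any framework looks like the sketch below. The `toy_policy` function stands in for an LLM call, and the tool and task are invented for the example.

```python
def run_agent(decide, tools, task, max_steps=5):
    """Minimal agent loop: ask a policy for the next action, run the chosen
    tool, feed the observation back, and stop on a 'finish' action."""
    observations = []
    for _ in range(max_steps):
        action = decide(task, observations)
        if action["tool"] == "finish":
            return action["answer"]
        result = tools[action["tool"]](**action["args"])
        observations.append((action["tool"], result))
    raise RuntimeError("agent exceeded max_steps without finishing")

# Hypothetical hard-coded policy and tool, standing in for model + API.
def toy_policy(task, observations):
    if not observations:
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"tool": "finish", "answer": observations[-1][1]}

toy_tools = {"add": lambda a, b: a + b}
```

Real frameworks add structured tool-calling, state graphs, and retries around this same skeleton.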
Posted 1 month ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Data Operations Engineer About the Role Responsibilities Operations Support Monitor and triage production data pipelines, ingestion jobs, and transformation workflows (e.g., dbt, Fivetran, Snowflake tasks) Manage and resolve data incidents and operational issues, working cross-functionally with platform, data, and analytics teams Develop and maintain internal tools/scripts for observability, diagnostics, and automation of data workflows Participate in on-call rotations to support platform uptime and SLAs Data Platform Engineering Support Help manage infrastructure-as-code configurations (e.g., Terraform for Snowflake, AWS, Airflow) Support user onboarding, RBAC permissioning, and account provisioning across data platforms Assist with schema and pipeline changes, versioning, and documentation Assist with setting up monitoring on new pipelines in Metaplane Data & Analytics Engineering Support Diagnosing model failures and upstream data issues Collaborate with analytics teams to validate data freshness, quality, and lineage Coordinate and perform backfills, schema adjustments, and reprocessing when needed Manage operational aspects of source ingestion (e.g., REST APIs, batch jobs, database replication, Kafka) ML-Ops & Data Science Infrastructure Collaborate with the data science team to operationalize and support ML pipelines, removing the burden of infrastructure ownership from the team Monitor ML batch and streaming jobs (e.g., model scoring, feature engineering, data preprocessing) Maintain and improve scheduling, resource management, and observability for ML workflows (e.g., using Airflow, SageMaker, or Kubernetes-based tools) Help manage model artifacts, metadata, and deployment environments to ensure reproducibility and traceability Support the transition of ad hoc or experimental pipelines into production-grade services Qualifications Required Qualifications At least 4 years of experience in data engineering, DevOps, or data operations roles Solid understanding
of modern data stack components (Snowflake, dbt, Airflow, Fivetran, cloud storage) Proficiency with SQL and comfort debugging data transformations or analytic queries Basic scripting/programming skills (e.g., Python, Bash) for automation and tooling Familiarity with version control (Git) and CI/CD pipelines for data projects Strong troubleshooting and communication skills — you enjoy helping others and resolving issues Experience with infrastructure-as-code (Terraform, CloudFormation) Familiarity with observability tools such as Datadog Exposure to data governance tools and concepts (e.g., data catalogs, lineage, access control) Understanding of ELT best practices and schema evolution in distributed data systems
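The data-freshness validation this role describes can be sketched as a small standard-library check. The table and column names are invented, and a production version would run against Snowflake from Airflow or Metaplane rather than SQLite.

```python
import sqlite3
import datetime as dt

def check_freshness(conn, table, ts_column, max_age_hours):
    """Return True if the newest row in `table` is younger than the SLA.

    Sketch only: assumes ISO-8601 text timestamps, which sort correctly
    under SQL MAX(); an empty table counts as stale.
    """
    row = conn.execute(f"SELECT MAX({ts_column}) FROM {table}").fetchone()
    if row[0] is None:
        return False
    latest = dt.datetime.fromisoformat(row[0])
    return dt.datetime.utcnow() - latest <= dt.timedelta(hours=max_age_hours)
```

A scheduler would run this per pipeline and page the on-call engineer when it returns False.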
Posted 1 month ago
9.0 years
0 Lacs
India
Remote
We are looking for a visionary and technically adept AI Technical Architect to lead the design, development, and deployment of scalable AI/ML solutions across the enterprise. This role blends deep technical expertise with strategic leadership to deliver innovative, secure, and ethical AI systems. You will work closely with cross-functional teams to architect intelligent platforms that align with organizational goals and drive meaningful business impact. Location: Hyderabad/ Remote Experience: 9+ Years Key Responsibilities Architect and build cloud-native AI/ML platforms using AWS (SageMaker, Bedrock), Azure (ML, OpenAI), or GCP (Vertex AI, BigQuery, LangChain). Lead the end-to-end development of cutting-edge AI solutions, including Retrieval-Augmented Generation (RAG) pipelines, summarization tools, and virtual assistants. Design and implement robust MLOps and LLMOps frameworks to support CI/CD, model versioning, retraining, monitoring, and production observability. Ensure all AI solutions adhere to Responsible AI practices, focusing on explainability, fairness, bias mitigation, auditability, and compliance with regulations (e.g., GDPR, HIPAA). Integrate AI models seamlessly with enterprise data pipelines, APIs, and business applications to support real-time and batch inference workflows. Oversee the full ML lifecycle—from data preparation and feature engineering to model training, tuning, deployment, and monitoring. Collaborate with cross-functional teams including data engineers, product managers, and business stakeholders to ensure alignment with strategic initiatives. Leverage deep learning architectures (CNNs, RNNs, Transformers) to address use cases in NLP, computer vision, and forecasting. Define AI governance frameworks, including audit trails, bias detection protocols, and model transparency mechanisms. Provide mentorship and technical leadership to data scientists and AI engineers. 
Continuously evaluate new AI technologies, research advancements, and industry best practices to evolve architectural standards and drive innovation. Required Qualifications Bachelor’s or Master’s degree in Computer Science, Engineering, AI/ML, or a related field. 9+ years of overall software development experience, with at least 3 years in AI/ML architecture or technical leadership. Expertise in Python and ML frameworks such as TensorFlow, PyTorch, and Scikit-learn. Proven experience deploying AI models at scale in production environments. Strong grasp of modern data architectures including data lakes, data warehouses, and ETL/ELT pipelines. Proficient with containerization (Docker), orchestration (Kubernetes), and cloud-based ML platforms (AWS Sagemaker, Azure ML, GCP Vertex AI). Hands-on experience with MLOps tools such as MLflow, Kubeflow, and Airflow. Familiarity with LLMs, prompt engineering, and language model fine-tuning is a plus. Exceptional communication skills with the ability to influence and collaborate across teams. Demonstrated experience mentoring engineers and leading technical initiatives. Candidates with prior experience in Healthcare or Telecom domains will be strongly preferred , especially those who have delivered domain-specific AI solutions aligned to regulatory, operational, or customer engagement needs. Preferred Qualifications Industry certifications in AI/ML or cloud architecture (e.g., AWS Machine Learning Specialty, Google Cloud ML Engineer). Experience with RAG pipelines and vector databases like Pinecone, FAISS, or Weaviate. Deep understanding of ethical AI principles and regulatory compliance requirements. Prior involvement in architectural reviews, technical steering committees, or enterprise-wide AI initiatives. Why Veltris? AI-First Company: Veltris is built on the foundation of AI. We enable clients to build advanced products using the latest technologies including Machine Learning (ML), Deep Learning (DL – CV, NLP), and MLOps. 
Proprietary AI Framework: We've developed a full-stack AI framework to accelerate clients' ML model development and deployment lifecycle. Explore: Insight.AI NVIDIA Partnership: We are a part of the NVIDIA Partner Network as a Professional Services Partner, delivering solutions powered by NVIDIA's GPU and software ecosystem. Cutting-Edge Research: We collaborate with academic institutions and research organizations to address complex problems in domains like medical imaging, biopharma, life sciences, legal, retail, and agriculture. Empowered Work Environment: Every team member, including junior engineers, works on impactful features in complex domains, contributing to real-world client success. Culture: Open communication, flat hierarchy, and a strong culture of ownership and innovation.
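The RAG pipelines this architect role covers reduce, at their core, to embedding similarity search. A dependency-free sketch of the retrieval step follows; the toy 2-d vectors stand in for real model embeddings, and a vector database such as Pinecone or FAISS replaces the linear scan at scale.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def retrieve(query_vec, index, k=2):
    """Return ids of the k index entries most similar to the query.

    `index` is a list of (doc_id, embedding) pairs; real systems swap this
    exhaustive scan for an approximate-nearest-neighbour index.
    """
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]
```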
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Required Skills & Qualifications Proficiency in AI/ML & NLP: Experience with Large Language Models (LLMs), NLP techniques, and prompt engineering. Cloud & AWS Expertise: Hands-on experience with AWS AI/ML services (such as AWS Bedrock, OpenSearch, S3, Lambda, SageMaker). Programming & APIs: Strong skills in Python, API development, and frameworks such as FastAPI, Flask, or Django. Database & SQL: Experience with SQL databases (PostgreSQL, MySQL, or Redshift) and writing efficient queries. Data Engineering: Familiarity with data pipelines, feature engineering, and retrieval-augmented generation (RAG). Experience with LLM Integration: Knowledge of fine-tuning models and integrating them into real-world applications. Version Control & CI/CD: Experience with Git, Docker, and CI/CD pipelines. Preferred Qualifications Prior experience implementing AI-powered search solutions. Familiarity with IoT ecosystems and multi-tenant applications. Knowledge of Vector Databases for semantic search.
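Since this role combines prompt engineering with RAG, here is a hedged sketch of the grounding step: assembling retrieved passages into a prompt. A character budget stands in for the token budget a real Bedrock deployment would enforce, and the instruction wording is invented for the example.

```python
def build_rag_prompt(question, passages, max_chars=2000):
    """Pack retrieved passages into a grounded prompt, skipping any passage
    that would exceed the budget. Illustrative only: production systems
    count tokens, not characters, and tune the instruction text."""
    context, used = [], 0
    for p in passages:
        if used + len(p) > max_chars:
            break
        context.append(p)
        used += len(p)
    joined = "\n---\n".join(context)
    return (
        "Answer using only the context below. If the answer is not "
        "in the context, say you don't know.\n\n"
        f"Context:\n{joined}\n\nQuestion: {question}\nAnswer:"
    )
```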
Posted 1 month ago
4.0 - 9.0 years
0 Lacs
Andhra Pradesh, India
On-site
At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. In quality engineering at PwC, you will focus on implementing leading practice standards of quality in software development and testing processes. In this field, you will use your experience to identify and resolve defects, optimise performance, and enhance user experience. The Opportunity When you join PwC Acceleration Centers (ACs), you step into a pivotal role focused on actively supporting various Acceleration Center services, from Advisory to Assurance, Tax and Business Services. In our innovative hubs, you’ll engage in challenging projects and provide distinctive services to support client engagements through enhanced quality and innovation. You’ll also participate in dynamic and digitally enabled training that is designed to grow your technical and professional skills. As part of the AI Engineering team you will design, develop, and scale AI-driven web applications and platforms. As a Senior Associate you will analyze complex problems, mentor others, and maintain rigorous standards while building meaningful client connections and navigating increasingly complex situations. This role is well-suited for engineers eager to blend their full stack development skills with the emerging world of AI and machine learning in a fast-paced, cross-functional environment.
Responsibilities Design and implement AI-driven web applications and platforms Analyze complex challenges and develop impactful solutions Mentor junior team members and foster their professional growth Maintain exemplary standards of quality in every deliverable Build and nurture meaningful relationships with clients Navigate intricate situations and adapt to evolving requirements Collaborate in a fast-paced, cross-functional team environment Leverage broad stack development skills in AI and machine learning projects What You Must Have Bachelor's Degree in Computer Science, Software Engineering, or a related field 4-9 years of experience Oral and written proficiency in English required What Sets You Apart Bachelor's Degree in Computer Science, Engineering Skilled in modern frontend frameworks like React or Angular Demonstrating hands-on experience with GenAI applications Familiarity with LLM orchestration tools Understanding of Responsible AI practices Experience with DevOps tools like Terraform and Kubernetes Knowledge of MLOps capabilities Security experience with OpenID Connect and OAuth2 Experience in AI/ML R&D or cross-functional teams Preferred Knowledge/Skills Role Overview We are looking for a skilled and proactive Full Stack Engineer to join our AI Engineering team. You will play a pivotal role in designing, developing, and scaling AI-driven web applications and platforms. This role is ideal for engineers who are passionate about blending full stack development skills with the emerging world of AI and machine learning, and who thrive in cross-functional, fast-paced environments. Key Responsibilities Develop and maintain scalable web applications and APIs using Python (FastAPI, Flask, Django) and modern frontend frameworks (React.js, Angular.js). Build intuitive, responsive UIs using JavaScript/TypeScript, CSS3, Bootstrap, and Material UI for AI-powered products. Collaborate closely with product teams to deliver GenAI/RAG-based solutions. 
Design backend services for: Data pipelines (Azure Data Factory, Data Lake, Delta Lake) Model inference Embedding and metadata storage (SQL, NoSQL, Vector DBs) Optimize application performance for AI inference and data-intensive workloads. Integrate third-party APIs, model-hosting platforms (OpenAI, Azure ML, AWS SageMaker), and vector databases. Implement robust CI/CD pipelines using Azure DevOps, GitHub Actions, or Jenkins. Participate in architectural reviews and contribute to design best practices across the engineering organization. Required Skills & Experience 4–9 years of professional full-stack engineering experience. Bachelor's degree in Computer Science, Engineering, or related technical field (BE/BTech/MCA) Strong Python development skills, particularly with FastAPI, Flask, or Django. Experience with data processing using Pandas. Proficient in JavaScript/TypeScript with at least one modern frontend framework (React, Angular). Solid understanding of RESTful and GraphQL API design. Experience with at least one cloud platform: Azure: Functions, App Service, AI Search, Service Bus, AI Foundry AWS: Lambda, S3, SageMaker, EC2 Hands-on experience building GenAI applications using RAG and agent frameworks. Database proficiency with: Relational databases: PostgreSQL, SQL Server NoSQL databases: MongoDB, DynamoDB Vector stores for embedding retrieval Familiarity with LLM orchestration tools: LangChain, AutoGen, LangGraph, Crew AI, A2A, MCP Understanding of Responsible AI practices and working knowledge of LLM providers (OpenAI, Anthropic, Google PaLM, AWS Bedrock) Good To Have Skills DevOps & Infrastructure: Terraform, Kubernetes, Docker, Jenkins MLOps capabilities: model versioning, inference monitoring, automated retraining Security experience with OpenID Connect, OAuth2, JWT Deep experience with data platforms: Databricks, Microsoft Fabric Prior experience in AI/ML R&D or working within cross-functional product teams
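One concrete instance of the "optimize application performance for AI inference" duty above is memoizing repeated identical requests. A standard-library sketch follows; the model call is a stub, and real systems would also need TTLs and cache invalidation, which `functools.lru_cache` alone does not provide.

```python
from functools import lru_cache

CALLS = {"count": 0}  # instrumentation so the cache effect is visible

def _model_infer(text: str) -> str:
    """Stub for the slow, costly step: a hosted LLM or local model call."""
    CALLS["count"] += 1
    return text.upper()

@lru_cache(maxsize=1024)
def cached_predict(text: str) -> str:
    """Serve repeated identical inputs from memory instead of the model."""
    return _model_infer(text)
```

Calling `cached_predict("hello")` twice performs only one model call; the second hit is served from the cache.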
Posted 1 month ago
4.0 - 9.0 years
0 Lacs
Andhra Pradesh, India
On-site
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in data science and machine learning engineering at PwC will focus on leveraging advanced analytics and machine learning techniques to extract insights from large datasets and drive data-driven decision making. You will work on developing predictive models, conducting statistical analysis, and creating data visualisations to solve complex business problems. Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn’t clear, you ask questions, and you use these moments as opportunities to grow. Skills Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to: Respond effectively to the diverse perspectives, needs, and feelings of others. Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems. Use critical thinking to break down complex concepts. Understand the broader objectives of your project or role and how your work fits into the overall strategy. Develop a deeper understanding of the business context and how it is changing. Use reflection to develop self awareness, enhance strengths and address development areas. Interpret data to inform insights and recommendations. Uphold and reinforce professional and technical standards (e.g. 
refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements. Role Overview We are seeking a Senior Associate – AI Engineer / MLOps / LLMOps with a passion for building resilient, cloud-native AI systems. In this role, you’ll collaborate with data scientists, researchers, and product teams to build infrastructure, automate pipelines, and deploy models that power intelligent applications at scale. If you enjoy solving real-world engineering challenges at the convergence of AI and software systems, this role is for you. Key Responsibilities Architect and implement AI/ML/GenAI pipelines, automating end-to-end workflows from data ingestion to model deployment and monitoring. Develop scalable, production-grade APIs and services using FastAPI, Flask, or similar frameworks for AI/LLM model inference. Design and maintain containerized AI applications using Docker and Kubernetes. Operationalize Large Language Models (LLMs) and other GenAI models via cloud-native deployment (e.g., Azure ML, AWS Sagemaker, GCP Vertex AI). Manage and monitor model performance post-deployment, applying concepts of MLOps and LLMOps including model versioning, A/B testing, and drift detection. Build and maintain CI/CD pipelines for rapid and secure deployment of AI solutions using tools such as GitHub Actions, Azure DevOps, GitLab CI. Implement security, governance, and compliance standards in AI pipelines. Optimize model serving infrastructure for speed, scalability, and cost-efficiency. Collaborate with AI researchers to translate prototypes into robust production-ready solutions. Required Skills & Experience 4 to 9 years of hands-on experience in AI/ML engineering, MLOps, or DevOps for data science products. Bachelor's degree in Computer Science, Engineering, or related technical field (BE/BTech/MCA). 
Strong software engineering foundation with hands-on experience in Python, Shell scripting, and familiarity with ML libraries (scikit-learn, transformers, etc.). Experience deploying and maintaining LLM-based applications, including prompt orchestration, fine-tuned models, and agentic workflows. Deep understanding of containerization and orchestration (Docker, Kubernetes, Helm). Experience with CI/CD pipelines, infrastructure-as-code tools (Terraform, CloudFormation), and automated deployment practices. Proficiency in cloud platforms: Azure (preferred), AWS, or GCP – including AI/ML services (e.g., Azure ML, AWS Sagemaker, GCP Vertex AI). Experience managing and monitoring ML lifecycle (training, validation, deployment, feedback loops). Solid understanding of APIs, microservices, and event-driven architecture. Experience with model monitoring/orchestration tools (e.g., Kubeflow, MLflow). Exposure to LLMOps-specific orchestration tools such as LangChain, LangGraph, Haystack, or PromptLayer. Experience with serverless deployments (AWS Lambda, Azure Functions) and GPU-enabled compute instances. Knowledge of data pipelines using tools like Apache Airflow, Prefect, or Azure Data Factory. Exposure to logging and observability tools like ELK stack, Azure Monitor, or Datadog. Good to Have Experience implementing multi-model architecture, serving GenAI models alongside traditional ML models. Knowledge of data versioning tools like DVC, Delta Lake, or LakeFS. Familiarity with distributed systems and optimizing inference pipelines for throughput and latency. Experience with infrastructure cost monitoring and optimization strategies for large-scale AI workloads. It would be great if the candidate has exposure to full-stack ML/DL. Soft Skills & Team Expectations Strong communication and documentation skills; ability to clearly articulate technical concepts to both technical and non-technical audiences.
Demonstrated ability to work independently as well as collaboratively in a fast-paced environment. A builder's mindset with a strong desire to innovate, automate, and scale. Comfortable in an agile, iterative development environment. Willingness to mentor junior engineers and contribute to team knowledge growth. Proactive in identifying tech stack improvements, security enhancements, and performance bottlenecks.
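The drift detection this MLOps role mentions can be illustrated with a deliberately simple check: a standardized mean shift between training-time and live feature values. Production monitoring would use PSI or Kolmogorov-Smirnov tests, so treat this as a sketch only.

```python
import statistics

def drift_score(baseline, live):
    """Absolute shift of the live mean, in baseline standard deviations."""
    mu = statistics.mean(baseline)
    sigma = statistics.pstdev(baseline) or 1.0  # guard constant features
    return abs(statistics.mean(live) - mu) / sigma

def has_drifted(baseline, live, threshold=3.0):
    """Flag a feature whose live mean moves more than `threshold` sigmas."""
    return drift_score(baseline, live) > threshold
```

A monitoring job would run this per feature on each scoring batch and trigger retraining or alerting when a flag fires.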
Posted 1 month ago
4.0 - 9.0 years
0 Lacs
Andhra Pradesh, India
On-site
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in data science and machine learning engineering at PwC will focus on leveraging advanced analytics and machine learning techniques to extract insights from large datasets and drive data-driven decision making. You will work on developing predictive models, conducting statistical analysis, and creating data visualisations to solve complex business problems. The Opportunity When you join PwC Acceleration Centers (ACs), you step into a pivotal role focused on actively supporting various Acceleration Center services, from Advisory to Assurance, Tax and Business Services. In our innovative hubs, you’ll engage in challenging projects and provide distinctive services to support client engagements through enhanced quality and innovation. You’ll also participate in dynamic and digitally enabled training that is designed to grow your technical and professional skills. As part of the Data Science team you will design and deliver scalable AI applications that drive business transformation. As a Senior Associate you will analyze complex problems, mentor junior team members, and build meaningful client connections while navigating the evolving landscape of AI and machine learning. This role offers the chance to work on innovative technologies, collaborate with cross-functional teams, and contribute to creative solutions that shape the future of the industry. 
Responsibilities Design and implement scalable AI applications to facilitate business transformation Analyze intricate problems and propose practical solutions Mentor junior team members to enhance their skills and knowledge Establish and nurture meaningful relationships with clients Navigate the dynamic landscape of AI and machine learning Collaborate with cross-functional teams to drive innovative solutions Utilize advanced technologies to improve project outcomes Contribute to the overall strategy of the Data Science team What You Must Have Bachelor's Degree in Computer Science, Engineering, or equivalent technical discipline 4-9 years of experience in Data Science/ML/AI roles Oral and written proficiency in English required What Sets You Apart Proficiency in Python and data science libraries Hands-on experience with Generative AI and prompt engineering Familiarity with cloud platforms like Azure, AWS, GCP Understanding of production-level AI systems and CI/CD Experience with Docker, Kubernetes for ML workloads Knowledge of MLOps tooling and pipelines Demonstrated track record of delivering AI-driven solutions Preferred Knowledge/Skills Please reference skill categories for job description details. About PwC CTIO – AI Engineering PwC’s Commercial Technology and Innovation Office (CTIO) is at the forefront of emerging technology, focused on building transformative AI-powered products and driving enterprise innovation. The AI Engineering team within CTIO is dedicated to researching, developing, and operationalizing cutting-edge technologies such as Generative AI, Large Language Models (LLMs), AI Agents, and more. Our mission is to continuously explore what's next, enabling business transformation through scalable AI/ML solutions while remaining grounded in research, experimentation, and engineering excellence. Role Overview We are seeking a Senior Associate – Data Science/ML/DL/GenAI to join our high-impact, entrepreneurial team.
This individual will play a key role in designing and delivering scalable AI applications, conducting applied research in GenAI and deep learning, and contributing to the team's innovation agenda. This is a hands-on, technical role ideal for professionals passionate about AI-driven transformation.

Key Responsibilities
Design, develop, and deploy machine learning, deep learning, and Generative AI solutions tailored to business use cases.
Build scalable pipelines using Python (and frameworks such as Flask/FastAPI) to operationalize data science models in production environments.
Prototype and implement solutions using state-of-the-art LLM frameworks such as LangChain, LlamaIndex, LangGraph, or similar; also develop applications in Streamlit/Chainlit for demo purposes.
Design advanced prompts and develop agentic LLM applications that autonomously interact with tools and APIs.
Fine-tune and pre-train LLMs (using Hugging Face and similar libraries) to align with business objectives.
Collaborate in a cross-functional setup with ML engineers, architects, and product teams to co-develop AI solutions.
Conduct R&D in NLP, CV, and multi-modal tasks, and evaluate model performance with production-grade metrics.
Stay current with AI research and industry trends; continuously upskill to integrate the latest tools and methods into the team's work.

Required Skills & Experience
4 to 9 years of experience in Data Science/ML/AI roles.
Bachelor's degree in Computer Science, Engineering, or equivalent technical discipline (BE/BTech/MCA).
Proficiency in Python and related data science libraries: Pandas, NumPy, SciPy, scikit-learn, TensorFlow, PyTorch, Keras, etc.
Hands-on experience with Generative AI, including prompt engineering, LLM fine-tuning, and deployment.
Experience with agentic LLMs and task orchestration using tools like LangGraph or AutoGPT-like flows.
Strong knowledge of NLP techniques, transformer architectures, and text analysis.
Proven experience working with cloud platforms (preferably Azure; AWS/GCP also considered).
Understanding of production-level AI systems, including CI/CD, model monitoring, and cloud-native architecture (these need not be developed from scratch).
Familiarity with ML algorithms: XGBoost, GBM, k-NN, SVM, decision forests, Naive Bayes, neural networks, etc.
Exposure to deploying AI models via APIs and integration into larger data ecosystems.
Strong understanding of model operationalization and lifecycle management.
Experience with Docker, Kubernetes, and containerized deployments for ML workloads.
Use of MLOps tooling and pipelines (e.g., MLflow, Azure ML, SageMaker, etc.).
Experience in full-stack AI applications, including visualization (e.g., Power BI, D3.js).
Demonstrated track record of delivering AI-driven solutions as part of large-scale systems.
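As a small illustration of the classical algorithms this role expects familiarity with, here is a minimal k-nearest-neighbors classifier in plain Python (a toy sketch on made-up 2-D data, not tied to any particular codebase):

```python
from collections import Counter
import math

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    order = sorted(
        range(len(train)),
        key=lambda i: math.dist(train[i], query),  # Euclidean distance (Python 3.8+)
    )
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D dataset: two well-separated clusters
X = [(0.0, 0.0), (0.2, 0.1), (1.0, 1.0), (0.9, 1.1)]
y = ["a", "a", "b", "b"]
print(knn_predict(X, y, (0.1, 0.0)))  # query lies in the first cluster -> "a"
```

In practice the same idea is a one-liner with scikit-learn's `KNeighborsClassifier`; the sketch just shows the distance-then-vote mechanics.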
Posted 1 month ago
5.0 years
0 Lacs
Thiruvananthapuram, Kerala, India
On-site
Join SADA, an Insight company, as a Senior AI Engineer!

Your Mission
We're seeking a highly skilled and visionary Senior AI Engineer to pioneer and lead our AI initiatives, establishing a robust AI foundation across our organization. As the go-to expert, you'll be critical in architecting and implementing advanced AI-driven solutions, leveraging platforms like CCAIP, Vertex AI, and Generative AI to influence product roadmaps and drive innovation. This role focuses significantly on designing, implementing, and deploying sophisticated AI-powered solutions for Contact Centers (CCAI) for our clients, alongside building robust data solutions. You'll also provide essential technical leadership and mentorship to ensure the successful delivery of projects.

Responsibilities:
Solution Design & Architecture: Lead the technical design and architecture of complex AI/ML solutions, including intricate conversational AI flows. This involves deeply leveraging the Google Cloud Platform (GCP) to architect solutions within Dialogflow CX, integrating with other GCP services, and designing robust data solutions using BigQuery and other relevant tools. Provide deep technical guidance specific to the CCAI ecosystem and ensure architectural alignment with Google Cloud best practices.
Hands-on Development & Deployment: Drive hands-on development and deployment of complex AI components, including advanced Conversational AI components on Google CCAI. Expertly utilize Dialogflow CX, Vertex AI (including generative AI capabilities), and GCP compute services for custom integrations.
Generative AI & LLMs: Implement and integrate generative AI models and Large Language Models (LLMs), including custom development and deployment, for enhanced conversational experiences and broader AI applications. Explore multimodal use cases involving audio, video, or images.
CCAI Platform Management: Lead significant projects such as Dialogflow ES to CX migrations, ensuring seamless transition and optimization of conversational agents. Integrate AI solutions with various CCaaS (Contact Center as a Service) platforms like UJET/CCAIP. Data Solutions: Architect and implement robust data pipelines and solutions using BigQuery and other relevant tools for AI model training, inference, and analytics, particularly for conversational data. Technical Leadership & Mentorship: Provide deep technical guidance and mentorship to junior engineers and developers in their areas of expertise, sharing best practices and troubleshooting techniques, and fostering a culture of knowledge sharing and continuous improvement. Pre-Sales Support Contribution: Collaborate as a lead technical expert in strategic pre-sales engagements for Google CCAI, delivering expert solution demonstrations, crafting compelling technical proposals, and conducting in-depth workshops to address complex client needs. Innovation & Research: Proactively research and evaluate the latest advancements in AI/ML, generative AI, LLMs, and particularly Google CCAI, Dialogflow CX, and Vertex AI Gen AI, to identify opportunities for solution enhancement and team knowledge sharing. Pathway to Success Our singular goal is to provide customers the best possible experience in building, modernizing, and maintaining applications in Google Cloud Platform. Your success starts by positively impacting the direction of a dynamic practice with vision and passion. You will be measured quarterly by the breadth, magnitude and quality of your contributions, your ability to estimate accurately, customer feedback at the close of projects, how well you collaborate with your peers, and the consultative polish you bring to customer interactions. As you continue to execute successfully, we will build a customized development plan together that leads you through the engineering or management growth tracks. 
Required Qualifications:
5+ years of experience in IT, with proven experience contributing to the design, building, and deployment of enterprise-grade AI/ML solutions, including a significant focus on contact center and conversational AI solutions.
Strong understanding of AI/ML principles, natural language processing (NLP), machine learning algorithms, and deep learning architectures.
Expert-level hands-on experience with Google Cloud Platform (GCP), particularly services such as: Dialogflow CX (advanced proficiency is a must), Vertex AI (especially generative AI features, custom model deployment, Vertex AI Search), BigQuery, and Cloud Functions or similar serverless compute.
Experience with Google CCAI services and ecosystem.
Hands-on experience deploying and using third-party LLMs.
Familiarity with AI Applications such as Agent Builder and Agentspace, and with concepts like datastores and fine-tuning (connectors, controls, ACLs, etc.).
Strong understanding of contact center operations and technologies.
Excellent communication, presentation, and interpersonal skills, with the ability to articulate complex technical concepts to diverse audiences (both technical and non-technical).

Preferred Qualifications:
Familiarity with Agile development methodologies.
Industry certifications in relevant technologies (e.g., Google Cloud Professional Machine Learning Engineer).
Experience tuning applications for non-functional requirements, e.g., usability, maintainability, scalability, availability, security, portability, etc.
Exposure to relational and NoSQL datastores.
Experience with API design and development (RESTful, gRPC) and strong familiarity with relevant programming languages for AI/ML development (e.g., Python).
Familiarity with frontend web technologies, particularly React or Angular, for building user interfaces that integrate with AI solutions.
Experience with other cloud offerings and solutions (e.g., AWS Lex, SageMaker, Lambda, or Azure Bot Service, Machine Learning).
Good to have experience in:
Agent Assist
ML: Data Ingestion, Exploration, Transformation, and Validation
ML: Model Development Frameworks
ML: Specialized Modeling Areas
ML: Evaluation and Monitoring

About SADA, an Insight company
Values: We built our core values on themes that internally compel us to deliver our best to our partners, our customers and to each other. Ensuring a diverse and inclusive workplace where we learn from each other is core to SADA's values. We welcome people of different backgrounds, experiences, abilities, and perspectives. We are an equal opportunity employer. Our values are Hunger, Heart, and Harmony.

Work with the best: SADA has been the largest Google Cloud partner in North America since 2016 and, for the eighth year in a row, has been named a Google Global Partner of the Year.

Business Performance: SADA has been named to the INC 5000 Fastest-Growing Private Companies list for 15 years in a row, garnering Honoree status. CRN has also named SADA on the Top 500 Global Solutions Providers list for the past 5 years. The overall culture continues to evolve with engineering at its core: 3200+ projects completed, 4000+ customers served, 10K+ workloads and 30M+ users migrated to the cloud.

SADA India is committed to the safety of its employees and recommends that new hires receive a COVID vaccination before beginning work.
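Much of the Dialogflow CX work described in this role centers on webhook fulfillment. As an illustrative sketch only (the field names follow the Dialogflow CX WebhookRequest/WebhookResponse JSON shape; the `order-status` tag and `order_id` parameter are hypothetical), a minimal fulfillment handler might look like:

```python
import json

def handle_cx_webhook(request_body: str) -> str:
    """Minimal Dialogflow CX-style webhook: route on the fulfillment tag
    and echo a session parameter back in the reply text."""
    req = json.loads(request_body)
    tag = req.get("fulfillmentInfo", {}).get("tag", "")
    params = req.get("sessionInfo", {}).get("parameters", {})

    if tag == "order-status":  # hypothetical tag configured on the CX page
        reply = f"Order {params.get('order_id', 'unknown')} is on its way."
    else:
        reply = "Sorry, I can't help with that yet."

    return json.dumps({
        "fulfillmentResponse": {
            "messages": [{"text": {"text": [reply]}}]
        }
    })

# Simulated incoming request, as the CX runtime would POST it
body = json.dumps({
    "fulfillmentInfo": {"tag": "order-status"},
    "sessionInfo": {"parameters": {"order_id": "A123"}},
})
print(handle_cx_webhook(body))
```

In a real deployment this function would sit behind a Cloud Function or Cloud Run endpoint registered as the agent's webhook; the JSON handling is the same.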
Posted 1 month ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: DevOps Engineer Location: Gurugram (On-site) Experience Required: 2–6 years Work Schedule: Monday to Friday, 10:30 AM – 8:00 PM (1st and 3rd Saturdays off) About Darwix AI Darwix AI is a next-generation Generative AI platform built for enterprise revenue teams across sales, support, credit, and retail. Our proprietary AI infrastructure processes multimodal data such as voice calls, emails, chat logs, and CCTV streams to deliver real-time contextual nudges, performance analytics, and AI-assisted coaching. Our product suite includes: Transform+: Real-time conversational intelligence for contact centers and field sales Sherpa.ai: Multilingual GenAI assistant offering live coaching, call summaries, and objection handling Store Intel: A computer vision solution converting retail CCTV feeds into actionable insights Darwix AI is trusted by leading organizations including IndiaMart, Wakefit, Emaar, GIVA, Bank Dofar, and Sobha Realty. We are backed by top institutional investors and are expanding rapidly across India, the Middle East, and Southeast Asia. 
Key Responsibilities
Design, implement, and manage scalable cloud infrastructure using AWS services such as EC2, S3, IAM, Lambda, SageMaker, and EKS
Build and maintain secure, automated CI/CD pipelines using GitHub Actions, Docker, and Terraform
Manage machine learning model deployment workflows and lifecycle using tools such as MLflow or DVC
Deploy and monitor Kubernetes-based workloads in Amazon EKS (both managed and self-managed node groups)
Implement best practices for configuration management, containerization, secrets handling, and infrastructure security
Ensure system availability, performance monitoring, and failover automation for critical ML services
Collaborate with data scientists and software engineers to operationalize model training, inference, and version control
Contribute to Agile ceremonies and ensure DevOps alignment with sprint cycles and delivery milestones

Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field
2–6 years of experience in DevOps, MLOps, or related roles
Proficiency in AWS services including EC2, S3, IAM, Lambda, SageMaker, and EKS
Strong understanding of Kubernetes architecture and workload orchestration in EKS environments
Hands-on experience with CI/CD pipelines and GitHub Actions, including secure credential management using GitHub Secrets
Strong scripting and automation skills (Python, Shell scripting)
Familiarity with model versioning tools such as MLflow or DVC, and artifact storage strategies using AWS S3
Solid understanding of Agile software development practices and QA/testing workflows
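The "availability and failover automation" responsibility above usually starts with health probes and retries. As a minimal, self-contained sketch (the probe here is simulated; in a real pipeline it would hit a service's health endpoint), an exponential-backoff check might look like:

```python
import time

def check_with_retries(probe, attempts=3, base_delay=0.01):
    """Call `probe()` until it succeeds, backing off exponentially.

    Returns (ok, tries). `probe` is any zero-arg callable returning bool;
    a failover script would trigger a restart or traffic shift on (False, n).
    """
    for attempt in range(1, attempts + 1):
        if probe():
            return True, attempt
        if attempt < attempts:
            time.sleep(base_delay * 2 ** (attempt - 1))  # 1x, 2x, 4x, ...
    return False, attempts

# Simulate a service that only becomes healthy on the third probe
state = {"calls": 0}
def flaky_probe():
    state["calls"] += 1
    return state["calls"] >= 3

print(check_with_retries(flaky_probe))  # (True, 3)
```

The same pattern generalizes to deployment smoke tests in a GitHub Actions job: run the check after rollout, and fail the job (triggering a rollback step) when it returns False.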
Posted 1 month ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Make an impact with NTT DATA Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion – it’s a place where you can grow, belong and thrive. Your day at NTT DATA We are seeking an experienced Solution Architect/Business Development Manager with expertise in AI/ML to drive business growth and deliver innovative solutions. The successful candidate will be responsible for assessing client business requirements, designing technical solutions, recommending AI/ML approaches, and collaborating with delivery organizations to implement end-to-end solutions. What You'll Be Doing Key Responsibilities: Business Requirement Analysis: Assess client's business requirements and convert them into technical specifications that meet business outcomes. AI/ML Solution Design: Recommend the right AI/ML approaches to meet business requirements and design solutions that drive business value. Opportunity Sizing: Size the opportunity and develop business cases to secure new projects and grow existing relationships. Solution Delivery: Collaborate with delivery organizations to design end-to-end AI/ML solutions, ensuring timely and within-budget delivery. Costing and Pricing: Develop costing and pricing strategies for AI/ML solutions, ensuring competitiveness and profitability. Client Relationship Management: Build and maintain strong relationships with clients, understanding their business needs and identifying new opportunities. Technical Leadership: Provide technical leadership and guidance to delivery teams, ensuring solutions meet technical and business requirements. Knowledge Sharing: Share knowledge and expertise with the team, contributing to the development of best practices and staying up-to-date with industry trends. 
Collaboration: Work closely with cross-functional teams, including data science, engineering, and product management, to ensure successful project delivery.

Requirements:
Education: Master's degree in Computer Science, Engineering, or related field
Experience: 10+ years of experience in AI/ML solution architecture, business development, or a related field
Technical Skills: Strong technical expertise in AI/ML, including machine learning algorithms, deep learning, and natural language processing
Technical Skills: Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity
Technical Skills: Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms
Hyperscaler: Experience with cloud-based AI/ML platforms and tools (e.g., AWS SageMaker, Azure Machine Learning, Google Cloud AI Platform)
Soft Skills: Excellent business acumen and understanding of business requirements and outcomes
Soft Skills: Strong communication and interpersonal skills, with the ability to work with clients and delivery teams
Business Acumen: Experience with solution costing and pricing strategies; strong analytical and problem-solving skills, with the ability to think creatively and drive innovation

Nice to Have:
Experience with Agile development methodologies
Knowledge of industry-specific AI/ML applications (e.g., healthcare, finance, retail)
Certification in AI/ML or related field (e.g., AWS Certified Machine Learning – Specialty)

Location: Delhi or Bangalore
Workplace type: Hybrid Working

About NTT DATA
NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success.
We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo. Equal Opportunity Employer NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.
Posted 1 month ago
12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking an experienced DevOps/AIOps Architect to design, architect, and implement an AI-driven operations solution that integrates various cloud-native services across AWS, Azure, and cloud-agnostic environments. The AIOps platform will be used for end-to-end machine learning lifecycle management, automated incident detection, and root cause analysis (RCA). The architect will lead efforts in developing a scalable solution utilizing data lakes, event streaming pipelines, ChatOps integration, and model deployment services. This platform will enable real-time intelligent operations in hybrid cloud and multi-cloud setups.

Responsibilities
Assist in the implementation and maintenance of cloud infrastructure and services
Contribute to the development and deployment of automation tools for cloud operations
Participate in monitoring and optimizing cloud resources using AIOps and MLOps techniques
Collaborate with cross-functional teams to troubleshoot and resolve cloud infrastructure issues
Support the design and implementation of scalable and reliable cloud architectures
Conduct research and evaluation of new cloud technologies and tools
Work on continuous improvement initiatives to enhance cloud operations efficiency and performance
Document cloud infrastructure configurations, processes, and procedures
Adhere to security best practices and compliance requirements in cloud operations

Requirements
Bachelor's Degree in Computer Science, Engineering, or related field
12+ years of experience in DevOps, AIOps, or Cloud Architecture roles
Hands-on experience with AWS services such as SageMaker, S3, Glue, Kinesis, ECS, EKS
Strong experience with Azure services such as Azure Machine Learning, Blob Storage, Azure Event Hubs, Azure AKS
Strong experience with Infrastructure as Code (IaC) using Terraform or CloudFormation
Proficiency in container orchestration (e.g., Kubernetes) and experience with multi-cloud environments
Experience with machine learning model training,
deployment, and data management across cloud-native and cloud-agnostic environments
Expertise in implementing ChatOps solutions using platforms like Microsoft Teams and Slack, and integrating them with AIOps automation
Familiarity with data lake architectures, data pipelines, and inference pipelines using event-driven architectures
Strong programming skills in Python for rule management, automation, and integration with cloud services

Nice to have
Any certifications in the AI/ML/GenAI space
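The "rule management" skill mentioned above is the core of most AIOps incident-detection loops: evaluate metric snapshots against declarative rules and emit alerts. A minimal, self-contained sketch (metric names and thresholds are invented for illustration) might be:

```python
def evaluate_rules(metrics, rules):
    """Apply simple threshold rules to one metrics snapshot.

    `rules` maps metric name -> (threshold, alert message). Returns the
    list of fired alerts; a real AIOps platform would route these to a
    ChatOps channel (Slack/Teams) or an incident queue instead.
    """
    alerts = []
    for name, (threshold, message) in rules.items():
        value = metrics.get(name)
        if value is not None and value > threshold:
            alerts.append(f"{message} ({name}={value})")
    return alerts

# Hypothetical rule set for a service
rules = {
    "cpu_pct": (90, "CPU saturation"),
    "p99_latency_ms": (500, "Latency SLO breach"),
}
print(evaluate_rules({"cpu_pct": 97, "p99_latency_ms": 120}, rules))
```

Production systems layer anomaly-detection models on top of such static thresholds, but the dispatch shape (snapshot in, alert events out) stays the same in an event-driven pipeline.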
Posted 1 month ago
10.0 years
0 Lacs
Delhi Cantonment, Delhi, India
On-site
Make an impact with NTT DATA Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion – it’s a place where you can grow, belong and thrive. Your day at NTT DATA We are seeking an experienced Solution Architect/Business Development Manager with expertise in AI/ML to drive business growth and deliver innovative solutions. The successful candidate will be responsible for assessing client business requirements, designing technical solutions, recommending AI/ML approaches, and collaborating with delivery organizations to implement end-to-end solutions. What You'll Be Doing Key Responsibilities: Business Requirement Analysis: Assess client's business requirements and convert them into technical specifications that meet business outcomes. AI/ML Solution Design: Recommend the right AI/ML approaches to meet business requirements and design solutions that drive business value. Opportunity Sizing: Size the opportunity and develop business cases to secure new projects and grow existing relationships. Solution Delivery: Collaborate with delivery organizations to design end-to-end AI/ML solutions, ensuring timely and within-budget delivery. Costing and Pricing: Develop costing and pricing strategies for AI/ML solutions, ensuring competitiveness and profitability. Client Relationship Management: Build and maintain strong relationships with clients, understanding their business needs and identifying new opportunities. Technical Leadership: Provide technical leadership and guidance to delivery teams, ensuring solutions meet technical and business requirements. Knowledge Sharing: Share knowledge and expertise with the team, contributing to the development of best practices and staying up-to-date with industry trends. 
Collaboration: Work closely with cross-functional teams, including data science, engineering, and product management, to ensure successful project delivery.

Requirements:
Education: Master's degree in Computer Science, Engineering, or related field
Experience: 10+ years of experience in AI/ML solution architecture, business development, or a related field
Technical Skills: Strong technical expertise in AI/ML, including machine learning algorithms, deep learning, and natural language processing
Technical Skills: Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity
Technical Skills: Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms
Hyperscaler: Experience with cloud-based AI/ML platforms and tools (e.g., AWS SageMaker, Azure Machine Learning, Google Cloud AI Platform)
Soft Skills: Excellent business acumen and understanding of business requirements and outcomes
Soft Skills: Strong communication and interpersonal skills, with the ability to work with clients and delivery teams
Business Acumen: Experience with solution costing and pricing strategies; strong analytical and problem-solving skills, with the ability to think creatively and drive innovation

Nice to Have:
Experience with Agile development methodologies
Knowledge of industry-specific AI/ML applications (e.g., healthcare, finance, retail)
Certification in AI/ML or related field (e.g., AWS Certified Machine Learning – Specialty)

Location: Delhi or Bangalore
Workplace type: Hybrid Working

About NTT DATA
NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success.
We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo. Equal Opportunity Employer NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.
Posted 1 month ago
6.0 years
60 - 65 Lacs
Greater Bhopal Area
Remote
Experience: 6.00+ years
Salary: INR 6000000-6500000 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Crop.Photo)
(Note: This is a requirement for one of Uplers' clients - Crop.Photo)

What do you need for this opportunity?
Must-have skills: MAM, app integration

Crop.Photo is looking for a Technical Lead for Evolphin AI-Driven MAM.
At Evolphin, we build powerful media asset management solutions used by some of the world's largest broadcasters, creative agencies, and global brands. Our flagship platform, Zoom, helps teams manage high-volume media workflows—from ingest to archive—with precision, performance, and AI-powered search. We're now entering a major modernization phase, and we're looking for an exceptional Technical Lead to own and drive the next-generation database layer powering Evolphin Zoom. This is a rare opportunity to take a critical backend system that serves high-throughput media operations and evolve it to meet the scale, speed, and intelligence today's content teams demand.
What you’ll own Leading the re-architecture of Zoom’s database foundation with a focus on scalability, query performance, and vector-based search support Replacing or refactoring our current in-house object store and metadata database to a modern, high-performance elastic solution Collaborating closely with our core platform engineers and AI/search teams to ensure seamless integration and zero disruption to existing media workflows Designing an extensible system that supports object-style relationships across millions of assets, including LLM-generated digital asset summaries, time-coded video metadata, AI generated tags, and semantic vectors Driving end-to-end implementation: schema design, migration tooling, performance benchmarking, and production rollout—all with aggressive timelines Skills & Experience We Expect We’re looking for candidates with 7–10 years of hands-on engineering experience, including 3+ years in a technical leadership role. Your experience should span the following core areas: System Design & Architecture (3–4 yrs) Strong hands-on experience with the Java/JVM stack (GC tuning), Python in production environments Led system-level design for scalable, modular AWS microservices architectures Designed high-throughput, low-latency media pipelines capable of scaling to billions of media records Familiar with multitenant SaaS patterns, service decomposition, and elastic scale-out/in models Deep understanding of infrastructure observability, failure handling, and graceful degradation Database & Metadata Layer Design (3–5 yrs) Experience redesigning or implementing object-style metadata stores used in MAM/DAM systems Strong grasp of schema-less models for asset relationships, time-coded metadata, and versioned updates Practical experience with DynamoDB, Aurora, PostgreSQL, or similar high-scale databases Comfortable evaluating trade-offs between memory, query latency, and write throughput Semantic Search & Vectors (1–3 yrs) Implemented vector search 
using systems like Weaviate, Pinecone, Qdrant, or Faiss Able to design hybrid (structured + semantic) search pipelines for similarity and natural language use cases Experience tuning vector indexers for performance, memory footprint, and recall Familiar with the basics of embedding generation pipelines and how they are used for semantic search and similarity-based retrieval Worked with MLOps teams to deploy ML inference services (e.g., FastAPI/Docker + GPU-based EC2 or SageMaker endpoints) Understands the limitations of recognition models (e.g., OCR, face/object detection, logo recognition), even if not directly building them Media Asset Workflow (2–4 yrs) Deep familiarity with broadcast and OTT formats: MXF, IMF, DNxHD, ProRes, H.264, HEVC Understanding of proxy workflows in video post-production Experience with digital asset lifecycle: ingest, AI metadata enrichment, media transformation, S3 cloud archiving Hands-on experience working with time-coded metadata (e.g., subtitles, AI tags, shot changes) management in media archives Cloud-Native Architecture (AWS) (3–5 yrs) Strong hands-on experience with ECS, Fargate, Lambda, S3, DynamoDB, Aurora, SQS, EventBridge Experience building serverless or service-based compute models for elastic scaling Familiarity with managing multi-region deployments, failover, and IAM configuration Built cloud-native CI/CD deployment pipelines with event-driven microservices and queue-based workflows Frontend Collaboration & React App Integration (2–3 yrs) Worked closely with React-based frontend teams, especially on desktop-style web applications Familiar with component-based design systems, REST/GraphQL API integration, and optimizing media-heavy UI workflows Able to guide frontend teams on data modeling, caching, and efficient rendering of large asset libraries Experience with Electron for desktop apps How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. 
Step 2: Complete the Screening Form & Upload updated Resume
Step 3: Increase your chances to get shortlisted & meet the client for the Interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
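The semantic-search skills this role asks for (Weaviate, Pinecone, Qdrant, Faiss) all reduce to ranking stored vectors by similarity to a query embedding. A brute-force sketch of that scoring step, in plain Python with invented asset IDs and 3-D vectors, might be:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, index, k=2):
    """Rank stored (id, vector) pairs by cosine similarity to `query`.

    Vector engines do this at scale with approximate-nearest-neighbor
    indexes; this exhaustive scan just shows the scoring and ranking.
    """
    scored = sorted(index, key=lambda item: cosine(query, item[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Hypothetical embeddings for three media assets
index = [
    ("asset-1", [1.0, 0.0, 0.0]),
    ("asset-2", [0.9, 0.1, 0.0]),
    ("asset-3", [0.0, 0.0, 1.0]),
]
print(top_k([1.0, 0.05, 0.0], index))  # ["asset-1", "asset-2"]
```

A hybrid pipeline of the kind the posting describes would pre-filter `index` by structured metadata (format, date, AI tags) before running the semantic ranking.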
Posted 1 month ago