
685 Vertex Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Position: Solution Architect
Location: Chennai / Bangalore / Kuala Lumpur
Experience: 8+ years
Employment Type: Full-time

Job Overview
Join Moving Walls, a trailblazer in the Out-of-Home (OOH) advertising and AdTech ecosystem, as a Solution Architect. This pivotal role places you at the heart of our innovative journey, designing and implementing scalable, efficient, and transformative solutions for our award-winning platforms like LMX and MAX. With a focus on automating and enhancing media transactions, you'll enable a seamless connection between media buyers and sellers in a rapidly evolving digital-first landscape. As a Solution Architect, you will bridge the gap between business objectives and technical execution, working in an Agile environment with POD-based execution models to ensure ownership and accountability. You will drive initiatives that revolutionize the way data and technology shape OOH advertising.

Why Join Us?
● Innovative Vision: Be part of a team committed to "Creating the Future of Outernet Media", where every solution impacts global markets across Asia, ANZ, Africa, and more.
● Cutting-edge Projects: Work on features like programmatic deal automation, data-driven audience insights, and dynamic campaign management for platforms connecting billions of ad impressions.
● Collaborative Culture: Collaborate with multidisciplinary teams, including Sales, Product Management, and Engineering, to craft solutions that are customized and impactful.

What You'll Do:
● Architect scalable and innovative solutions for AdTech products, ensuring alignment with organizational goals and market needs.
● Collaborate with cross-functional teams to gather, analyze, and translate business requirements into technical designs.
● Lead the development of programmatic solutions, dynamic audience segmentation tools, and integrations for global markets.
● Enhance existing products by integrating advanced features like dynamic rate cards, bid management, and inventory mapping.
● Advocate for best practices in system design, ensuring the highest standards of security, reliability, and performance.

What You Bring:
● A strong technical background with hands-on experience in cloud-based architectures, API integrations, and data analytics.
● Proven expertise in working within an Agile environment and leading POD-based teams to deliver high-impact results.
● Passion for AdTech innovation and the ability to navigate complex, fast-paced environments.
● Excellent problem-solving skills, creativity, and a customer-centric mindset.

Key Responsibilities
1. Solution Design:
   ○ Develop end-to-end solution architectures for web, mobile, and cloud-based platforms using the specified tech stack.
   ○ Translate business requirements into scalable and reliable technical solutions.
2. Agile POD-Based Execution:
   ○ Collaborate with cross-functional POD teams (Product, Engineering, QA, and Operations) to deliver iterative and focused solutions.
   ○ Ensure clear ownership of deliverables within the POD, fostering accountability and streamlined execution.
   ○ Contribute to defining and refining the POD stages to ensure alignment with organizational goals.
3. Collaboration and Stakeholder Management:
   ○ Work closely with product, engineering, and business teams to define technical requirements.
   ○ Lead technical discussions with internal and external stakeholders.
4. Technical Expertise:
   ○ Provide architectural guidance and best practices for system integrations, APIs, and microservices.
   ○ Ensure solutions meet non-functional requirements like scalability, reliability, and security.
5. Documentation:
   ○ Prepare and maintain architectural documentation, including solution blueprints and workflows.
   ○ Create technical roadmaps and detailed design documentation.
6. Mentorship:
   ○ Guide and mentor engineering teams during development and deployment phases.
   ○ Review code and provide technical insights to improve quality and performance.
7. Innovation and Optimization:
   ○ Identify areas for technical improvement and drive innovation in solutions.
   ○ Evaluate emerging technologies to recommend the best tools and frameworks.

Required Skills and Qualifications
● Bachelor's/Master's degree in Computer Science, Information Technology, or a related field.
● Proven experience as a Solution Architect or in a similar role.
● Expertise in programming languages and frameworks: Java, Angular, Python, C++.
● Proficiency in AI/ML frameworks and libraries such as TensorFlow, PyTorch, Scikit-learn, or Keras.
● Experience in deploying AI models in production, including optimizing for performance and scalability.
● Understanding of deep learning, NLP, computer vision, or generative AI techniques.
● Hands-on experience with model fine-tuning, transfer learning, and hyperparameter optimization.
● Strong knowledge of enterprise architecture frameworks (TOGAF, Zachman, etc.).
● Expertise in distributed systems, microservices, and cloud-native architectures.
● Experience in API design, data pipelines, and integration of AI services within existing systems.
● Strong knowledge of databases: MongoDB, SQL, NoSQL.
● Proficiency in working with large-scale datasets, data wrangling, and ETL pipelines.
● Hands-on experience with CI/CD pipelines for AI development.
● Version control systems like Git, and experience with ML lifecycle tools such as MLflow or DVC.
● Proven track record of leading AI-driven projects from ideation to deployment.
● Hands-on experience with cloud platforms (AWS, Azure, GCP) for deploying AI solutions.
● Familiarity with Agile methodologies, especially POD-based execution models.
● Strong problem-solving skills and ability to design scalable solutions.
● Excellent communication skills to articulate technical solutions to stakeholders.

Preferred Qualifications
● Experience in e-commerce, AdTech, or OOH (Out-of-Home) advertising technology.
● Knowledge of tools like Jira and Confluence, and Agile frameworks like Scrum or Kanban.
● Certification in cloud technologies (e.g., AWS Solutions Architect).

Tech Stack
● Programming Languages: Java, Python or C++
● Frontend Framework: Angular
● Database Technologies: MongoDB, SQL, NoSQL
● Cloud Platform: AWS
● Familiarity with data processing tools like Pandas, NumPy, and big data frameworks (e.g., Hadoop, Spark).
● Experience with cloud platforms for AI (AWS SageMaker, Azure ML, Google Vertex AI).
● Understanding of APIs, microservices, and containerization tools like Docker and Kubernetes.

Share your profile to kushpu@movingwalls.com
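As a purely illustrative aside (not part of the posting): the "bid management" and "programmatic deal automation" features mentioned above typically build on auction mechanics. A minimal sketch of a second-price auction in Python follows; the function name and simplified rules are assumptions for illustration, not Moving Walls' actual logic.

```python
def run_second_price_auction(bids):
    """Pick a winner from {bidder_id: bid_amount} under second-price rules.

    In a second-price (Vickrey) auction, the highest bidder wins but pays
    the second-highest bid. Requires at least two bids.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    clearing_price = ranked[1][1]  # winner pays the runner-up's bid
    return winner, clearing_price


# Example: bidder "a" wins but pays the second-highest bid (4.0).
print(run_second_price_auction({"a": 5.0, "b": 3.0, "c": 4.0}))
```

Real programmatic pipelines add floor prices, deal priorities, and pacing on top of this core rule.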

Posted 11 hours ago


0 years

0 Lacs

India

Remote


About Us
Evangelist Apps is a UK-based custom software development company specializing in full-stack web and mobile app development, CRM/ERP solutions, workflow automation, and AI-powered platforms. Trusted by global brands like British Airways, Third Bridge, Hästens Beds, and Duxiana, we help clients solve complex business problems with technology. We're now expanding into AI-driven services and are looking for our first Junior AI Developer to join the team. This is an exciting opportunity to help lay the groundwork for our AI capabilities.

Role Overview
As our first Junior AI Developer, you'll work closely with our senior engineers and product teams to research, prototype, and implement AI-powered features across client solutions. You'll contribute to machine learning models, LLM integrations, and intelligent automation systems that enhance user experiences and internal workflows.

Key Responsibilities
- Assist in building and fine-tuning ML models for tasks like classification, clustering, or NLP
- Integrate AI services (e.g., OpenAI, Hugging Face, AWS, or Vertex AI) into applications
- Develop proof-of-concept projects and deploy lightweight models into production
- Preprocess datasets, annotate data, and evaluate model performance
- Collaborate with product, frontend, and backend teams to deliver end-to-end solutions
- Keep up to date with new trends in machine learning and generative AI

Must-Have Skills
- Solid understanding of Python and popular AI/ML libraries (e.g., scikit-learn, pandas, TensorFlow, or PyTorch)
- Familiarity with foundational ML concepts (e.g., supervised/unsupervised learning, overfitting, model evaluation)
- Experience with REST APIs and working with JSON-based data
- Exposure to LLMs or prompt engineering is a plus
- Strong problem-solving attitude and eagerness to learn
- Good communication and documentation skills

Nice-to-Haves (Good to Learn On the Job)
- Experience with cloud-based ML tools (AWS SageMaker, Google Vertex AI, or Azure ML)
- Basic knowledge of MLOps and deployment practices
- Prior internship or personal projects involving AI or automation
- Contributions to open-source or Kaggle competitions

What We Offer
- Mentorship from experienced engineers and a high-learning environment
- Opportunity to work on real-world client projects from day one
- Exposure to multiple industry domains including expert networks, fintech, healthtech, and e-commerce
- Flexible working hours and remote-friendly culture
- Rapid growth potential as our AI practice scales
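As an illustrative aside (not part of the posting): "model evaluation" in the must-have skills above usually starts with a held-out test set and a trivial baseline to beat. A minimal stdlib-only sketch of a majority-class baseline follows; the function name is an assumption for illustration.

```python
from collections import Counter


def majority_baseline_accuracy(train_labels, test_labels):
    """Accuracy of always predicting the most frequent training label.

    Any real model should beat this baseline on the same test set;
    if it doesn't, the model has learned nothing useful.
    """
    majority = Counter(train_labels).most_common(1)[0][0]
    correct = sum(1 for y in test_labels if y == majority)
    return correct / len(test_labels)


# Example: the majority training label is "spam", which matches
# one of the two test labels, giving 0.5 accuracy.
print(majority_baseline_accuracy(["spam", "ham", "spam"], ["spam", "ham"]))
```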

Posted 11 hours ago


3.0 - 7.0 years

7 - 16 Lacs

Hyderābād

On-site

AI Specialist / Machine Learning Engineer
Location: On-site (Hyderabad)
Department: Data Science & AI Innovation
Experience Level: Mid-Senior
Reports To: Director of AI / CTO
Employment Type: Full-time

Job Summary
We are seeking a skilled and forward-thinking AI Specialist to join our advanced technology team. In this role, you will lead the design, development, and deployment of cutting-edge AI/ML solutions, including large language models (LLMs), multimodal systems, and generative AI. You will collaborate with cross-functional teams to develop intelligent systems, automate complex workflows, and unlock insights from data at scale.

Key Responsibilities
- Design and implement machine learning models for natural language processing (NLP), computer vision, predictive analytics, and generative AI.
- Fine-tune and deploy LLMs using frameworks such as Hugging Face Transformers, OpenAI APIs, and Anthropic Claude.
- Develop Retrieval-Augmented Generation (RAG) pipelines using tools like LangChain, LlamaIndex, and vector databases (e.g., Pinecone, Weaviate, Qdrant).
- Productionize ML workflows using MLflow, TensorFlow Extended (TFX), or AWS SageMaker Pipelines.
- Integrate generative AI with business applications, including Copilot-style features, chat interfaces, and workflow automation.
- Collaborate with data scientists, software engineers, and product managers to build and scale AI-powered products.
- Monitor, evaluate, and optimize model performance, focusing on fairness, explainability (e.g., SHAP, LIME), and data/model drift.
- Stay informed on cutting-edge AI research (e.g., NeurIPS, ICLR, arXiv) and evaluate its applicability to business challenges.

Tools & Technologies
- Languages & Frameworks: Python, PyTorch, TensorFlow, JAX; FastAPI, LangChain, LlamaIndex
- ML & AI Platforms: OpenAI (GPT-4/4o), Anthropic Claude, Mistral, Cohere; Hugging Face Hub & Transformers; Google Vertex AI, AWS SageMaker, Azure ML
- Data & Deployment: MLflow, DVC, Apache Airflow, Ray; Docker, Kubernetes, RESTful APIs, GraphQL; Snowflake, BigQuery, Delta Lake
- Vector Databases & RAG Tools: Pinecone, Weaviate, Qdrant, FAISS, ChromaDB, Milvus
- Generative & Multimodal AI: DALL·E, Sora, Midjourney, Runway; Whisper, CLIP, SAM (Segment Anything Model)

Qualifications
- Bachelor's or Master's in Computer Science, AI, Data Science, or a related discipline
- 3-7 years of experience in machine learning or applied AI
- Hands-on experience deploying ML models to production environments
- Familiarity with LLM prompt engineering and fine-tuning
- Strong analytical thinking, problem-solving ability, and communication skills

Preferred Qualifications
- Contributions to open-source AI projects or academic publications
- Experience with multi-agent frameworks (e.g., AutoGPT, OpenDevin)
- Knowledge of synthetic data generation and augmentation techniques

Job Type: Permanent
Pay: ₹734,802.74 - ₹1,663,085.14 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift
Work Location: In person
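As an illustrative aside (not part of the posting): the core of the RAG pipelines mentioned above is nearest-neighbor retrieval over embedding vectors. A minimal stdlib-only sketch follows, with tiny hand-made vectors standing in for a real embedding model and vector database such as Pinecone or Qdrant; all names are assumptions for illustration.

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def retrieve(query, doc_vectors, k=1):
    """Return the ids of the k documents most similar to the query vector.

    `doc_vectors` maps document id -> embedding; a vector database does
    the same job at scale with approximate nearest-neighbor indexes.
    """
    scored = sorted(doc_vectors.items(),
                    key=lambda kv: cosine(query, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]


# Example: a query vector pointing mostly along the first axis
# retrieves the document whose embedding points the same way.
docs = {"d1": [1.0, 0.0], "d2": [0.0, 1.0]}
print(retrieve([0.9, 0.1], docs))  # ['d1']
```

In a full RAG system the retrieved documents are then inserted into the LLM prompt as grounding context.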

Posted 11 hours ago


0 years

0 - 0 Lacs

India

On-site

Primary responsibilities:

Parent Relationship Management:
- Ensure all parents are aptly welcomed and comfortably seated.
- Effectively address and resolve parents' enquiries across mediums, i.e. in person, over the phone, by email, via the company website, etc.
- Escalate all unresolved grievances of parents to the Principal and the Marketing Team at Vertex for prompt resolution ending in parent delight.
- Adroitly track all parents' queries via the organizational query-tracking mechanism (e.g., CRM).
- Generate parent delight by ensuring high responsiveness, closing the loop with parents on all issues, and keeping them updated and engaged during the resolution process.
- Efficiently guide parents on school systems and processes and ensure that a repository of updated information is always available.
- Ensure an ambient and parent-friendly environment in the front office area with assistance from the admin department.
- Facilitate information on all elements pertaining to a child's life cycle in the school as well as post-school activities, summer camps, etc.

Sales and Marketing:
- Pre-sales: Efficiently manage the pre-sales process, e.g. keeping track of all leads, whether from the web, telephone, or walk-ins, and participate in planning activities like society camps or mall activities, pre-school tie-ups, corporate tie-ups, and RWA and parent engagement activities.
- Handle the entire sales process for potential parents, from first interface to closure, positively augmenting conversions from walk-in to admission.
- Contact potential parents, discuss their requirements, and present the VIBGYOR brand in a way that meets parent needs.
- Be an active team member in achieving the annual admission targets and objectives in line with the Organization Admission Target Plan.

Job Type: Full-time
Pay: ₹35,000.00 - ₹50,000.00 per month
Benefits: Health insurance, Leave encashment, Provident Fund
Schedule: Day shift
Work Location: In person
Speak with the employer: +91 8591889918

Posted 11 hours ago


3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Description
The Global Data Insight & Analytics organization is looking for a top-notch Software Engineer who also has Machine Learning knowledge and experience to join our team and drive the next generation of the AI/ML (Mach1ML) platform. In this role you will work in a small, cross-functional team. The position will collaborate directly and continuously with other engineers, business partners, product managers and designers from distributed locations, and will release early and often. The team you will be working on is focused on building the Mach1ML platform, an AI/ML enablement platform to democratize Machine Learning across the Ford enterprise (like OpenAI's GPT, Facebook's FBLearner, etc.) to deliver next-gen analytics innovation. We strongly believe that data has the power to help create great products and experiences which delight our customers. We believe that actionable and persistent insights, based on a high-quality data platform, help business and engineering make more impactful decisions. Our ambitions reach well beyond existing solutions, and we are in search of innovative individuals to join this Agile team. This is an exciting, fast-paced role which requires outstanding technical and organization skills combined with critical thinking, problem-solving and agile management tools to support team success.

Responsibilities
What you'll be able to do:
As a Software Engineer, you will work on developing features for the Mach1ML platform and support customers in model deployment using Mach1ML on GCP and on-prem. You will follow Rally to manage your work. You will incorporate an understanding of product functionality and the customer perspective into model deployment. You will work on cutting-edge technologies such as GCP, Kubernetes, Docker, Seldon, Tekton, Airflow, Rally, etc.

Position Responsibilities:
- Work closely with the Tech Anchor, Product Manager and Product Owner to deliver machine learning use cases using the Ford Agile Framework.
- Work with Data Scientists and ML engineers to tackle challenging AI problems.
- Work specifically on the Deploy team to drive model deployment and AI/ML adoption with other internal and external systems.
- Help innovate by researching state-of-the-art deployment tools and share knowledge with the team.
- Lead by example in the use of Paired Programming for cross-training/upskilling, problem solving, and speed to delivery.
- Leverage the latest GCP, CI/CD, and ML technologies.
- Critical Thinking: Able to influence the strategic direction of the company by finding opportunities in large, rich data sets and crafting and implementing data-driven strategies that fuel growth, including cost savings, revenue, and profit.
- Modelling: Assess and evaluate the impact of missing/unusable data; design and select features; develop and implement statistical/predictive models using advanced algorithms on diverse sources of data; and test and validate models for tasks such as forecasting, natural language processing, pattern recognition, machine vision, supervised and unsupervised classification, decision trees, neural networks, etc.
- Analytics: Leverage rigorous analytical and statistical techniques to identify trends and relationships between different components of data, draw appropriate conclusions, and translate analytical findings and recommendations into business strategies or engineering decisions, with statistical confidence.
- Data Engineering: Craft ETL processes to source and link data in preparation for model/algorithm development. This includes domain expertise of data sets in the environment, third-party data evaluations, and data quality.
- Visualization: Build visualizations to connect disparate data, find patterns and tell engaging stories. This includes both scientific and geographic visualization, using applications such as Seaborn, Qlik Sense/Power BI/Tableau/Looker Studio, etc.

Qualifications
Minimum requirements we seek:
- Bachelor's or master's degree in computer science engineering or a related field, or a combination of education and equivalent experience.
- 3+ years of experience in full-stack software development.
- 3+ years' experience in cloud technologies and services, preferably GCP.
- 3+ years of experience practicing statistical methods and their accurate application, e.g. ANOVA, principal component analysis, correspondence analysis, k-means clustering, factor analysis, multivariate analysis, neural networks, causal inference, Gaussian regression, etc.
- 3+ years' experience with Python, SQL, BQ.
- Experience in SonarQube, CI/CD, Tekton, Terraform, GCS, GCP Looker, Google Cloud Build, Cloud Run, Vertex AI, Airflow, TensorFlow, etc.
- Experience training, building and deploying ML and DL models.
- Experience in Hugging Face, Chainlit, Streamlit, React.
- Ability to understand the technical, functional, non-functional, and security aspects of business requirements and deliver them end-to-end.
- Ability to adapt quickly to open-source products and tools to integrate with ML platforms.
- Building and deploying models (scikit-learn, DataRobot, TensorFlow, PyTorch, etc.).
- Developing and deploying in on-prem and cloud environments: Kubernetes, Tekton, OpenShift, Terraform, Vertex AI.

Our preferred requirements:
- Master's degree in computer science engineering or a related field, or a combination of education and equivalent experience.
- Demonstrated successful application of analytical methods and machine learning techniques with measurable impact on product/design/business/strategy.
- Proficiency in programming languages such as Python, with a strong emphasis on machine learning libraries, generative AI frameworks, and monitoring tools.
- Utilize tools and technologies such as TensorFlow, PyTorch, scikit-learn, and other machine learning libraries to build and deploy machine learning solutions on cloud platforms.
- Design and implement cloud infrastructure using technologies such as Kubernetes, Terraform, and Tekton to support scalable and reliable deployment of machine learning models, generative AI models, and applications.
- Integrate machine learning and generative AI models into production systems on cloud platforms such as Google Cloud Platform (GCP) and ensure scalability, performance, and proactive monitoring.
- Implement monitoring solutions to track the performance, health, and security of systems and applications, utilizing tools such as Prometheus, Grafana, and other relevant monitoring tools.
- Conduct code reviews and provide constructive feedback to team members on machine learning-related projects.
- Knowledge of and experience in agentic-workflow-based application development and DevOps.
- Stay up to date with the latest trends and advancements in machine learning and data science.
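As an illustrative aside (not part of the posting): several of the statistical methods listed above (PCA, k-means, Gaussian regression) assume standardized features as a preprocessing step. A minimal stdlib-only sketch of z-score standardization follows; the function name is an assumption for illustration.

```python
import statistics


def standardize(xs):
    """Rescale a sample to zero mean and unit (sample) standard deviation.

    Standardization keeps features with large raw scales from dominating
    distance-based methods such as k-means or PCA.
    """
    m = statistics.mean(xs)
    s = statistics.stdev(xs)  # sample standard deviation (n - 1 divisor)
    return [(x - m) / s for x in xs]


# Example: mean 2 and sample stdev 1, so values map to -1, 0, 1.
print(standardize([1, 2, 3]))  # [-1.0, 0.0, 1.0]
```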

Posted 11 hours ago


10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Dear Job Seekers,

Greetings from Voice Bay! We are currently hiring for Machine Learning Engineer. If you are interested, please submit your application. Please find the JD below for your consideration.

Work Location: Hyderabad
Experience: 4 - 10 years
Work Mode: 5 days work from office (mandatory)

Key Responsibilities
- Design, develop, and implement end-to-end machine learning models, from initial data exploration and feature engineering to model deployment and monitoring in production environments.
- Build and optimize data pipelines for both structured and unstructured datasets, focusing on advanced data blending, transformation, and cleansing techniques to ensure data quality and readiness for modeling.
- Create, manage, and query complex databases, leveraging various data storage solutions to efficiently extract, transform, and load data for machine learning workflows.
- Collaborate closely with data scientists, software engineers, and product managers to translate business requirements into effective, scalable, and maintainable ML solutions.
- Implement and maintain robust MLOps practices, including version control, model monitoring, logging, and performance evaluation to ensure model reliability and drive continuous improvement.
- Research and experiment with new machine learning techniques, tools, and technologies to enhance our predictive capabilities and operational efficiency.

Required Skills & Experience
- 5+ years of hands-on experience in building, training, and deploying machine learning models in a professional, production-oriented setting.
- Demonstrable experience with database creation and advanced querying (e.g., SQL, NoSQL), with a strong understanding of data warehousing concepts.
- Proven expertise in data blending, transformation, and feature engineering, adept at integrating and harmonizing both structured (e.g., relational databases, CSVs) and unstructured (e.g., text, logs, images) data.
- Strong practical experience with cloud platforms for machine learning development and deployment; significant experience with Google Cloud Platform (GCP) services (e.g., Vertex AI, BigQuery, Dataflow) is highly desirable.
- Proficiency in programming languages commonly used in data science (Python preferred; R).
- Solid understanding of various machine learning algorithms (e.g., regression, classification, clustering, dimensionality reduction) and experience with advanced techniques like Deep Learning, Natural Language Processing (NLP), or Computer Vision.
- Experience with machine learning libraries and frameworks (e.g., scikit-learn, TensorFlow, PyTorch).
- Familiarity with MLOps tools and practices, including model versioning, monitoring, A/B testing, and continuous integration/continuous deployment (CI/CD) pipelines.
- Experience with containerization technologies like Docker and orchestration tools like Kubernetes for deploying ML models as REST APIs.
- Proficiency with version control systems (e.g., Git, GitHub/GitLab) for collaborative development.

Educational Background
- Bachelor's or Master's degree in Statistics, Mathematics, Computer Science, Engineering, Data Science, or a closely related quantitative field.
- Alternatively, a significant certification in Data Science, Machine Learning, or Cloud AI combined with relevant practical experience will be considered.
- A compelling combination of relevant education and professional experience will also be valued.

Interested candidates can share their resume to the email IDs below:
tarunrai@voicebaysolutions.in
hr@voicebaysolutions.in
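As an illustrative aside (not part of the posting): the "model versioning" practice listed above, which dedicated tools like MLflow or DVC automate, boils down to deriving a stable identifier from a model's configuration and artifacts. A minimal stdlib-only sketch follows; the function name and hash-prefix length are assumptions for illustration.

```python
import hashlib
import json


def model_fingerprint(params):
    """Deterministic short version tag for a model's hyperparameters.

    Serializing with sorted keys makes the tag independent of dict
    insertion order, so the same configuration always hashes the same.
    """
    canonical = json.dumps(params, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()[:12]


# Example: key order does not matter, but changing a value does.
tag = model_fingerprint({"lr": 0.01, "depth": 3})
print(tag == model_fingerprint({"depth": 3, "lr": 0.01}))  # True
```

Real ML lifecycle tools also hash training data and code revisions so that any change in inputs yields a new version.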

Posted 11 hours ago


5.0 years

0 Lacs

India

On-site


This posting is for one of our International Clients.

About the Role
We're creating a new certification: Inside Gemini: Gen AI Multimodal and Google Intelligence (Google DeepMind). This course is designed for technical learners who want to understand and apply the capabilities of Google's Gemini models and DeepMind technologies to build powerful, multimodal AI applications. We're looking for a Subject Matter Expert (SME) who can help shape this course from the ground up. You'll work closely with a team of learning experience designers, writers, and other collaborators to ensure the course is technically accurate, industry-relevant, and instructionally sound.

Responsibilities
As the SME, you'll partner with learning experience designers and content developers to:
- Translate real-world Gemini and DeepMind applications into accessible, hands-on learning for technical professionals.
- Guide the creation of labs and projects that allow learners to build pipelines for image-text fusion, deploy Gemini APIs, and experiment with DeepMind's reinforcement learning libraries.
- Contribute technical depth across activities, from high-level course structure down to example code, diagrams, voiceover scripts, and data pipelines.
- Ensure all content reflects current, accurate usage of Google's multimodal tools and services.
- Be available during U.S. business hours to support project milestones, reviews, and content feedback.

This role is an excellent fit for professionals with deep experience in AI/ML, Google Cloud, and a strong familiarity with multimodal systems and the DeepMind ecosystem.

Essential Tools & Platforms
A successful SME in this role will demonstrate fluency and hands-on experience with the following:

Google Cloud Platform (GCP)
- Vertex AI (particularly Gemini integration, model tuning, and multimodal deployment)
- Cloud Functions, Cloud Run (for inference endpoints)
- BigQuery and Cloud Storage (for handling large image-text datasets)
- AI Platform Notebooks or Colab Pro

Google DeepMind Technologies
- JAX and Haiku (for neural network modeling and research-grade experimentation)
- DeepMind Control Suite or DeepMind Lab (for reinforcement learning demonstrations)
- RLax or TF-Agents (for building and modifying RL pipelines)

AI/ML & Multimodal Tooling
- Gemini APIs and SDKs (image-text fusion, prompt engineering, output formatting)
- TensorFlow 2.x and PyTorch (for model interoperability)
- Label Studio, Cloud Vision API (for annotation and image-text preprocessing)

Data Science & MLOps
- DVC or MLflow (for dataset and model versioning)
- Apache Beam or Dataflow (for processing multimodal input streams)
- TensorBoard or Weights & Biases (for visualization)

Content Authoring & Collaboration
- GitHub or Cloud Source Repositories
- Google Docs, Sheets, Slides
- Screen recording tools like Loom or OBS Studio

Required skills and experience:
- Demonstrated hands-on experience building, deploying, and maintaining sophisticated AI-powered applications using Gemini APIs/SDKs within the Google Cloud ecosystem, especially in Firebase Studio and VS Code.
- Proficiency in designing and implementing agent-like application patterns, including multi-turn conversational flows, state management, and complex prompting strategies (e.g., Chain-of-Thought, few-shot, zero-shot).
- Experience integrating Gemini with Google Cloud services (Firestore, Cloud Functions, App Hosting) and external APIs for robust, production-ready solutions.
- Proven ability to engineer applications that process, integrate, and generate content across multiple modalities (text, images, audio, video, code) using Gemini's native multimodal capabilities.
- Skilled in building and orchestrating pipelines for multimodal data handling, synchronization, and complex interaction patterns within application logic.
- Experience designing and implementing production-grade RAG systems, including integration with vector databases (e.g., Pinecone, ChromaDB) and engineering data pipelines for indexing and retrieval.
- Ability to manage agent state, memory, and persistence for multi-turn and long-running interactions.
- Proficiency leveraging AI-assisted coding features in Firebase Studio (chat, inline code, command execution) and using App Prototyping agents or frameworks like Genkit for rapid prototyping and structuring agentic logic.
- Strong command of modern development workflows, including Git/GitHub, code reviews, and collaborative development practices.
- Experience designing scalable, fault-tolerant deployment architectures for multimodal and agentic AI applications using Firebase App Hosting, Cloud Run, or similar serverless/cloud platforms.
- Advanced MLOps skills, including monitoring, logging, alerting, and versioning for generative AI systems and agents.
- Deep understanding of security best practices: prompt injection mitigation (across modalities), secure API key management, authentication/authorization, and data privacy.
- Demonstrated ability to engineer for responsible AI, including bias detection, fairness, transparency, and implementation of safety mechanisms in agentic and multimodal applications.
- Experience addressing ethical challenges in the deployment and operation of advanced AI systems.
- Proven success designing, reviewing, and delivering advanced, project-based curriculum and hands-on labs for experienced software developers and engineers.
- Ability to translate complex engineering concepts (RAG, multimodal integration, agentic patterns, MLOps, security, responsible AI) into clear, actionable learning materials and real-world projects.
- 5+ years of professional experience in AI-powered application development, with a focus on generative and multimodal AI.
- Strong programming skills in Python and JavaScript/TypeScript; experience with modern frameworks and cloud-native development.
- Bachelor's or Master's degree in Computer Science, Data Engineering, AI, or a related technical field.
- Ability to explain advanced technical concepts (e.g., fusion transformers, multimodal embeddings, RAG workflows) to learners in an accessible way.
- Strong programming experience in Python and experience deploying machine learning pipelines.
- Ability to work independently, take ownership of deliverables, and collaborate closely with designers and project managers.

Preferred:
- Experience with Google DeepMind tools (JAX, Haiku, RLax, DeepMind Control Suite/Lab) and reinforcement learning pipelines.
- Familiarity with open data formats (Delta, Parquet, Iceberg) and scalable data engineering practices.
- Prior contributions to open-source AI projects or technical community engagement.
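As an illustrative aside (not part of the posting): the "agent state, memory, and persistence for multi-turn interactions" skill above reduces, at its simplest, to keeping an ordered turn history and bounding how much of it goes back into each prompt. A minimal stdlib-only sketch follows; the class and method names are assumptions for illustration, not a Gemini SDK API.

```python
class ConversationState:
    """Minimal multi-turn conversation state holder (illustrative only)."""

    def __init__(self):
        self.history = []  # ordered list of (role, text) tuples

    def add_turn(self, role, text):
        """Record one turn, e.g. role 'user' or 'model'."""
        self.history.append((role, text))

    def prompt_window(self, n):
        """Return the last n turns, e.g. to keep prompts within a
        context-length budget when building the next model request."""
        return self.history[-n:]


# Example: after three turns, only the most recent two are
# included in the next prompt window.
state = ConversationState()
state.add_turn("user", "hi")
state.add_turn("model", "hello")
state.add_turn("user", "bye")
print(state.prompt_window(2))
```

Production agents layer persistence (e.g. a datastore) and summarization of old turns on top of this basic structure.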

Posted 11 hours ago


0 years

0 Lacs

Kolkata, West Bengal, India

On-site


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. SAP MM Job Description: Position: SAP Senior MM Consultant Required Qualifications: Bachelor’s degree (or equivalent experience) Preferably Engineering Minimum two e2e Implementation Project along with experience in Support / Roll out / Upgrade Projects 6 to 9 Yrs. of Relevant experience Professional Mandatory Requirements: Strong knowledge of Business Processes Implementation Methodology Consumables Procurement Process Imports Procurement Source determination Demand Flow STO Automatic A/C Determination Automatic PO Conversion Pricing Procedure Output Determination Batch Management Sub-Contracting Third Party Sub-Contracting A/C Entries for the Document posting Serialization Consignment Pipeline Invoice planning Automatic PO Procedures Evaluated receipt Settlement EDI associated to Order/Delivery/Confirmation/Invoice/Material Master Data Migration with LSMW/BDC Added Advantage: Domain Experience will be added advantage. Worked with taxation components like Vertex will be added advantage. Knowledge on ABAP debugging. SAP MM Certification will be added advantage. Knowledge on Integration Modules like WM / QM / PP / SD will be an added advantage. Roles/Responsibilities: Strong configuration hands on experience in Material Management. Integration with WM / QM / PP / SD modules and with external applications. Responsible for planning and executing SAP Implementation / Development / Support activities regard to SAP – Material Management and ability to Lead the team. 
Understand client requirements, provide solutions and functional specifications, and configure the system accordingly. Ability to create presentation/workshop material for Blueprint that needs to be conveyed, and to present it to the client. Ability to create Process Flows in Microsoft Visio for the client's proposed business processes. Ability to create Process Definition Document / Design Document (PDD) and Business Process Procedure (BPP) for the solutions provided. Ability to configure SAP MM and deliver work products / packages conforming to the Client's Standards & Requirements. General: Should have good written & communication skills. Should be able to handle the client individually. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 11 hours ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. SAP MM Job Description: Position: SAP Senior MM Consultant Required Qualifications: Bachelor’s degree (or equivalent experience), preferably Engineering Minimum two e2e Implementation Projects, along with experience in Support / Roll out / Upgrade Projects 6 to 9 years of relevant experience Professional Mandatory Requirements: Strong knowledge of Business Processes Implementation Methodology Consumables Procurement Process Imports Procurement Source determination Demand Flow STO Automatic A/C Determination Automatic PO Conversion Pricing Procedure Output Determination Batch Management Sub-Contracting Third Party Sub-Contracting A/C Entries for Document posting Serialization Consignment Pipeline Invoice planning Automatic PO Procedures Evaluated Receipt Settlement EDI associated with Order/Delivery/Confirmation/Invoice/Material Master Data Migration with LSMW/BDC Added Advantage: Domain experience will be an added advantage. Experience with taxation components like Vertex will be an added advantage. Knowledge of ABAP debugging. SAP MM Certification will be an added advantage. Knowledge of integration modules like WM / QM / PP / SD will be an added advantage. Roles/Responsibilities: Strong hands-on configuration experience in Material Management. Integration with WM / QM / PP / SD modules and with external applications. Responsible for planning and executing SAP Implementation / Development / Support activities with regard to SAP Material Management, with the ability to lead the team.
Understand client requirements, provide solutions and functional specifications, and configure the system accordingly. Ability to create presentation/workshop material for Blueprint that needs to be conveyed, and to present it to the client. Ability to create Process Flows in Microsoft Visio for the client's proposed business processes. Ability to create Process Definition Document / Design Document (PDD) and Business Process Procedure (BPP) for the solutions provided. Ability to configure SAP MM and deliver work products / packages conforming to the Client's Standards & Requirements. General: Should have good written & communication skills. Should be able to handle the client individually. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 11 hours ago

Apply

3.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Role - Java Developer Experience - 3-5 yrs Location - Bangalore Backend ● Bachelor's/Master's in Computer Science from a reputed institute/university ● 3-7 years of strong experience in building Java/Golang/Python based server-side solutions ● Strong in data structures, algorithms and software design ● Experience in designing and building RESTful microservices ● Experience with server-side frameworks such as JPA (Hibernate/Spring Data), Spring, Vert.x, Spring Boot, Redis, Kafka, Lucene/Solr/Elasticsearch etc. ● Experience in data modeling and design, database query tuning ● Experience in MySQL and strong understanding of relational databases ● Comfortable with agile, iterative development practices ● Excellent communication (verbal & written), interpersonal and leadership skills ● Previous experience as part of a start-up or a product company ● Experience with AWS technologies would be a plus ● Experience with reactive programming frameworks would be a plus ● Contributions to open source are a plus ● Familiarity with deployment architecture principles and prior experience with container orchestration platforms, particularly Kubernetes, would be a significant advantage

Posted 12 hours ago

Apply

2.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Opportunity – Sales Development Representative (Outbound) | Gurugram & Bangalore

About Spyne
At Spyne, we are transforming how cars are marketed and sold with cutting-edge Generative AI. What started as a bold idea—using AI-powered visuals to help auto dealers sell faster online—has now evolved into a full-fledged, AI-first automotive retail ecosystem. Backed by $16M in Series A funding from Accel, Vertex Ventures, and other top investors, we're scaling at breakneck speed:
Launched industry-first AI-powered Image, Video & 360° solutions for automotive dealers
Launching a Gen AI powered Automotive Retail Suite to power Inventory, Marketing, and CRM for dealers
Onboarded 1500+ dealers across the US, EU and other key markets in the 2 years since launch
Gearing up to onboard 10K+ dealers across a global market of 200K+ dealers
150+ member team with a near equal split between R&D and GTM
Learn more about our products: Spyne AI Products - StudioAI, RetailAI; Series A Announcement - CNBC-TV18, YourStory

We're coming to Bangalore – the heart of India's B2B SaaS ecosystem! We're building a high-impact team in Bangalore to be part of our next growth chapter. This is a chance to join a breakout SaaS company at the frontline of innovation, right from India's fastest-growing tech hub.

What are we looking for? We're looking for energetic and driven SDRs to fuel our outbound engine for the US market. If you love prospecting, thrive on high-quality conversations, and want to make a mark in a hyper-growth AI SaaS startup—this is your calling.

📍 Location: Bangalore (Work from Office, 5 days a week)
🌎 Shift Timings: US Shift (6 PM – 3 AM IST)

🚀 Why this role? Be part of the GTM team expanding into the US—a key growth market. Own the top-of-the-funnel motion and help shape our outreach strategy. Be among the first hires in Bangalore as we set up in the SaaS capital of India.

📌 What will you do? Conduct outbound outreach via LinkedIn, email, and phone. Identify and qualify decision-makers at car dealerships and auto retailers. Generate qualified leads and book meetings for the Sales team. Personalize outreach to maximize engagement. Collaborate with AEs for smooth hand-offs. Maintain CRM hygiene and track key metrics (connects, conversions, etc.).

🏆 What will make you successful in this role? Prior experience in outbound lead generation or inside sales. Strong communication (written + verbal). High energy, self-starter attitude. Familiarity with tools like LinkedIn Sales Nav, HubSpot/Salesforce, Apollo. Comfortable working in the US time zone.

📊 What will a typical quarter at Spyne look like? Book qualified meetings that convert to revenue. Execute creative outbound campaigns tailored for the US. Collaborate closely with Marketing and Sales. Exceed KPIs and gear up for AE or leadership roles.

🔹 How will we set you up for success? Hands-on onboarding and enablement on AI-powered SaaS products. Target ICP briefs, pitch reviews, and objection handling sessions. 1:1 coaching and mentorship from Sales leaders.

🎯 What you must have: 1–3 years of experience in outbound SDR or BDR roles (preferably SaaS). Exposure to global clients, especially US-based, is a strong plus. A results-first, hustle-hard mindset with eagerness to grow.

🚀 Why Spyne? Strong Culture: high-ownership, zero-politics, fast-paced. Fast Growth: $5M to $20M ARR trajectory in motion. Upskill Fast: learn from top GTM leaders, founders, and advisors. Career Path: clear track to AE or Sales Manager. Competitive Comp: base + performance incentives + growth bonuses.

📢 If you want to thrive in a high-energy sales role, help us take GenAI to the global auto industry, and be part of Spyne's Bangalore chapter—this is an opportunity you don't want to miss.

Posted 14 hours ago

Apply

4.0 - 13.0 years

0 Lacs

Pune, Maharashtra, India

On-site


TCS is conducting a Face to Face interview on 21st June in the TCS Pune Office for Oracle Fusion Finance Consultant Job Role: Oracle Fusion Finance Consultant Job Experience: 4 to 13 Years Interview Location: TCS Pune Office Interview Mode: Face to Face Interview Interview Date: 21st June, 2024 Venue: Tata Consultancy Services Ltd, Sahyadri Park 1, Auditorium Plot No. 2 & 3, Phase 3, Rajiv Gandhi Infotech Park, Maan, Hinjawadi, Pune, Maharashtra 411057 What we are looking for: Should have good working knowledge of Fusion Tax, Vertex and GL. Requires a background in Oracle Financials, particularly in the development and implementation of tax configuration to support a solution which spans the companies and supporting commercial operations. This will involve understanding both the Oracle Cloud solution and the consequences to business processes of adopting out-of-the-box solutions where possible, to ensure the correct recording and reporting of tax. Implementation experience using Vertex is a must. Good knowledge of Payables (AP), General Ledger (GL), Fixed Assets (FA). Knowledge of Enterprise Structure. Knowledge of Tax in different countries, including LATAM. Experience in implementation projects. Working with large and diverse teams in multiple locations. Qualification: 15 years of full-time education.

Posted 15 hours ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


We are looking for a highly skilled and motivated Data Scientist with deep experience in building recommendation systems to join our team. This role demands expertise in deep learning, embedding-based retrieval, and the Google Cloud Platform (GCP). You will play a critical role in developing intelligent systems that enhance user experiences through personalized content discovery. Key Responsibilities: Develop, train, and deploy recommendation models using two-tower, multi-tower, and cross-encoder architectures. Generate and utilize text/image embeddings (e.g., CLIP, BERT, Sentence Transformers) for content-based recommendations. Design semantic similarity search pipelines using vector databases (FAISS, ScaNN, Qdrant, Matching Engine). Create and manage scalable ML pipelines using Vertex AI, Kubeflow Pipelines, and GKE. Handle large-scale data preparation and feature engineering using Dataproc (PySpark) and Dataflow. Implement cold-start strategies leveraging metadata and multimodal embeddings. Work on user modeling, temporal personalization, and re-ranking strategies. Run A/B tests and interpret results to measure real-world impact. Collaborate with cross-functional teams (Engineering, Product, DevOps) for model deployment and monitoring. Must-Have Skills: Strong command of Python and ML libraries: pandas, polars, numpy, scikit-learn, matplotlib, tensorflow, torch, transformers. Deep understanding of modern recommender systems and embedding-based retrieval. Experience with TensorFlow, Keras, or PyTorch for building deep learning models. Hands-on with semantic search, ANN search, and real-time vector matching. Proven experience with Vertex AI, Kubeflow on GKE, and ML pipeline orchestration. Familiarity with vector DBs such as Qdrant, FAISS, ScaNN, or Matching Engine on GCP. Experience in deploying models via Vertex AI Online Prediction, TF Serving, or Cloud Run.
Knowledge of feature stores, embedding versioning, and MLOps practices (CI/CD, monitoring). Preferred / Good to Have: Experience with ranking models (e.g., XGBoost, LightGBM, DLRM) for candidate scoring. Exposure to LLM-powered personalization or hybrid retrieval systems. Familiarity with streaming pipelines using Pub/Sub, Dataflow, Cloud Functions. Hands-on with multi-modal retrieval (text + image + tabular data). Strong grasp of cold-start problem solving, using enriched metadata and embeddings. GCP Stack You’ll Work With: ML & Pipelines: Vertex AI, Vertex Pipelines, Kubeflow on GKE Embedding & Retrieval: Matching Engine, Qdrant, FAISS, ScaNN, Milvus Processing: Dataproc (PySpark), Dataflow Ingestion & Serving: Pub/Sub, Cloud Functions, Cloud Run, TF Serving CI/CD & Automation: GitHub Actions, GitLab CI, Terraform
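The embedding-based retrieval this listing describes can be sketched in a few lines of plain numpy. This is an illustrative toy, not the employer's code: the random vectors stand in for the user-tower and item-tower outputs of a real two-tower model, and the brute-force dot-product scan is exactly what an ANN index (FAISS, ScaNN, Matching Engine) replaces at scale.

```python
import numpy as np

def normalize(x):
    # L2-normalize so a dot product equals cosine similarity
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
dim = 64
item_emb = normalize(rng.standard_normal((1000, dim)))  # item-tower outputs (stand-ins)
user_emb = normalize(rng.standard_normal(dim))          # user-tower output (stand-in)

scores = item_emb @ user_emb     # similarity of the user to every candidate item
top_k = np.argsort(-scores)[:5]  # indices of the 5 most similar items to recommend
print(top_k, scores[top_k])
```

In a real pipeline the same top-k lookup would go through the vector database, with the exact scan kept only for offline evaluation of the index's recall.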

Posted 15 hours ago

Apply

7.0 years

0 Lacs

Greater Hyderabad Area

Remote


Hi, Role: SAP FI Tax Consultant Location: Remote Note: Mandatory Skills: SAP FI, Tax, ONESOURCE and Vertex Required Skills: Bachelor’s degree in Accounting, Finance, Information Technology, or a related field. Minimum of 7+ years of experience in SAP FI, with at least 2+ years in tax integration (ONESOURCE/Vertex). Strong understanding of indirect taxes including sales tax, use tax, VAT, GST, etc. Experience with SAP ECC and/or S/4HANA. Familiarity with SAP SD/MM tax determination is a plus. Experience with SAP interfaces/middleware (e.g., PI/PO, CPI, IDocs, or APIs). Knowledge of global tax compliance requirements is highly preferred. Strong problem-solving and communication skills. Responsibilities: Lead and support the integration of SAP FI with ONESOURCE and/or Vertex for indirect tax automation and compliance. Configure and support tax codes, tax jurisdiction structures, condition techniques, and tax procedures in SAP. Coordinate with tax teams to ensure accurate tax determination and reporting in alignment with global tax regulations. Manage and troubleshoot SAP–ONESOURCE and SAP–Vertex interface issues. Collaborate with technical teams to ensure seamless API/web service or batch file-based integration. Analyze business requirements and provide functional specifications for technical development and enhancements. Support testing activities including unit testing, integration testing, and UAT. Maintain documentation and provide training/support to business users. Monitor and resolve issues related to tax calculation errors, interface failures, and data integrity. Ensure compliance with tax regulations for various countries (US, EU, LATAM, APAC, etc.). Participate in SAP upgrades, patching, and tax software updates. Regards, Ravi Battu Email: raviraja.b@spaplc.com TAG Team Loc. 3 Cube Towers, F93C+X29, White Field Rd, Whitefield’s, HITEC City, Kondapur, Telangana 500081 www.spaplc.com

Posted 16 hours ago

Apply

8.0 years

0 Lacs

India

On-site


BayApps is looking for an Oracle EBS Financials Functional Analyst to be the focal point for support and enhancement of Oracle EBS Financials business processes. Key activities for this role will include business process refinement, solution design, configuring EBS modules, testing, and end-user support for key Finance modules in a global Oracle environment. The candidate will be part of the Finance Solutions Delivery organization and will have technical ownership of all aspects, from project implementation to process enhancements to sustaining support. Responsibilities: Work closely with business stakeholders and users to gather end-user requirements and communicate IT priorities and delivery status to the business units. Development of test scenarios and test cases; orchestrate the execution and validation of functional user testing. Design and development of third-party integrations and operational workflows, plus the development and execution of roll-out strategies, cut-over plans, end-user training and support, and end-user documentation. Understand, communicate, and educate on the complexities, interdependencies and data flow of business processes across Oracle EBS Finance modules, including GL, AP, AR, CM, FA and EBTax. Development of clear functional business requirements/specifications. Troubleshooting production issues through discussion with end users and technical resources, including problem recognition, research, isolation and resolution steps.
Maintain the health and effectiveness of the Oracle platform over time. Take ownership of issues and work with business users and the development team to find resolutions. Provide day-to-day functional support and troubleshooting, including table-level SQL research queries. Drive open and comprehensive communications with key stakeholders, managing their expectations through clear and frequent communications. Maintain and modify configuration, security, and access of Oracle modules. Create and maintain application and process documentation, as well as training materials. Guide and lead testing activities from unit testing to Production validation. Qualifications: Minimum of 8 years of experience with Oracle R12.2 Financials modules, including AGIS, Advanced Collections, Consolidations (financial package creation, etc.), GL, AP, AR, XLA, CM, FA, EBTax and iExpense. Experience working on Oracle Enterprise Command Centers, Lockbox Payments, and customer e-payments such as Credit Card or ACH. Good understanding of financial tables and SQL technology. Strong Subledger Accounting knowledge is a must; should be able to analyze and identify root causes of accounting issues and issues during period close. Experience with the below modules will be considered a plus: Inventory, Purchasing, OM, Service Contracts, Installed Base. Experience with the below tools is a plus: DOMO, Vertex, OneSource, Pagero, Revpro, Getpaid, Cybersource, Runpayments. Experience with Salesforce is a plus. Must be an effective communicator (written and oral) across all levels of the organization, including users, developers and management. Must have experience documenting requirements and developing system / user test plans.

Posted 16 hours ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Entity: Accenture Strategy & Consulting Team: Global Network - Data & AI Practice: CMT – Software & Platforms Title: Level 9 - Ind & Func AI Decision Science Consultant Job location: Hyderabad/Bangalore About S&C - Global Network: Accenture's Global Network - Data & AI practice helps our clients grow their business in entirely new ways. Analytics enables our clients to achieve high performance through insights from data - insights that inform better decisions and strengthen customer relationships. From strategy to execution, Accenture works with organizations to develop analytic capabilities - from accessing and reporting on data to predictive modelling - to outperform the competition. About the Software & Platforms Team: The team is focused on driving Data & AI based solutions for Accenture's SaaS and PaaS clients. The team collaborates actively with onsite counterparts to help identify opportunities for growth, and drives client deliveries from offshore. WHAT’S IN IT FOR YOU? As part of our Data & AI practice, you will join a worldwide network of smart and driven colleagues experienced in leading statistical tools, methods, and applications. From data to analytics and insights to actions, our forward-thinking consultants provide analytically informed, issue-based insights at scale to help our clients improve outcomes and achieve high performance. Accenture will continually invest in your learning and growth. You'll work with experts in SaaS & PaaS, and Accenture will support you in growing your own tech stack and certifications. In Data & AI, you will understand the importance of sound analytical decision-making and the relationship of tasks to the overall project, and execute projects in the context of a business performance improvement initiative. What You Would Do In This Role Gathering business requirements to create a high-level business solution framework aligned with business objectives and goals.
Monitor project progress, maintain the project plan, proactively identify risks, and develop mitigation strategies. Work closely with project leads, engineers, and business analysts to develop AI solutions. Develop and test AI algorithms and techniques tailored to solve specific business problems. Present and communicate solutions and project updates to internal and external stakeholders. Foster positive client relationships by ensuring alignment between project deliverables and client expectations. Adopt a clear and systematic approach to complex issues. Analyze relationships between several parts of a problem or situation. Anticipate obstacles and identify a critical path for a project. Mentor and guide a team of AI professionals, cultivating a culture of innovation, collaboration, and excellence. Conduct comprehensive market research and stay updated on the latest advancements and trends in AI technologies. Foster the professional development of team members through continuous learning opportunities. Who are we looking for? Bachelor’s or master’s degree in computer science, engineering, data science, or a related field. Experience working for large Software or Platform organizations. Proven experience (5+ years) in working on AI projects and delivering successful outcomes. Hands-on exposure to Generative AI frameworks (Azure OpenAI, Vertex AI) and implementations, and strong knowledge of AI technologies, including embeddings, prompt engineering, natural language processing, computer vision, etc. Hands-on experience in building and deploying Statistical Models/Machine Learning, including segmentation and predictive modelling, hypothesis testing, multivariate statistical analysis, time series techniques, and optimization. Proficiency in statistical packages such as R, Python, Java, SQL, Spark, etc.
Ability to work with large data sets and present findings/insights to key stakeholders; data management using databases like SQL. Experience in training large language models and fine-tuning for specific applications or domains. Understanding of linguistic concepts, encompassing syntax, semantics, and pragmatics, to enhance language modeling. Experience with cloud platforms like AWS, Azure, or Google Cloud for deploying and scaling language models. Understanding of containerization technologies (e.g., Docker) and orchestration tools (e.g., Kubernetes) for managing and deploying models, and exposure to CI/CD pipelines for automated testing and deployment of language models. Excellent analytical and problem-solving skills, with a data-driven mindset. Strong project management abilities, including planning, resource management, and risk assessment. Proficient in Excel, MS Word, PowerPoint, etc. Exceptional communication and interpersonal skills to engage effectively with clients and internal stakeholders. Accenture is an equal opportunities employer and welcomes applications from all sections of society and does not discriminate on grounds of race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, or any other basis as protected by applicable law.

Posted 22 hours ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


Organization Snapshot: Birdeye is the leading all-in-one Experience Marketing platform, trusted by over 100,000+ businesses worldwide to power customer acquisition, engagement, and retention through AI-driven automation and reputation intelligence. From local businesses to global enterprises, Birdeye enables brands to deliver exceptional customer experiences across every digital touchpoint. As we enter our next phase of global scale and product-led growth, AI is no longer an add-on—it’s at the very heart of our innovation strategy. Our future is being built on Large Language Models (LLMs), Generative AI, Conversational AI, and intelligent automation that can personalize and enhance every customer interaction in real time. Job Overview: Birdeye is seeking a Senior Data Scientist – NLP & Generative AI to help reimagine how businesses interact with customers at scale through production-grade, LLM-powered AI systems. If you’re passionate about building autonomous, intelligent, and conversational systems, this role offers the perfect platform to shape the next generation of agentic AI technologies. As part of our core AI/ML team, you'll design, deploy, and optimize end-to-end intelligent systems—spanning LLM fine-tuning, Conversational AI, Natural Language Understanding (NLU), Retrieval-Augmented Generation (RAG), and autonomous agent frameworks. This is a high-impact IC role ideal for technologists who thrive at the intersection of deep NLP research and scalable engineering. Key Responsibilities: LLM, GenAI & Agentic AI Systems Architect and deploy LLM-based frameworks using GPT, LLaMA, Claude, Mistral, and open-source models. Implement fine-tuning, LoRA, PEFT, instruction tuning, and prompt tuning strategies for production-grade performance. Build autonomous AI agents with tool use, short/long-term memory, planning, and multi-agent orchestration (using LangChain Agents, Semantic Kernel, Haystack, or custom frameworks).
Design RAG pipelines with vector databases (Pinecone, FAISS, Weaviate) for domain-specific contextualization. Conversational AI & NLP Engineering Build Transformer-based Conversational AI systems for dynamic, goal-oriented dialog—leveraging orchestration tools like LangChain, Rasa, and LLMFlow. Implement NLP solutions for semantic search, NER, summarization, intent detection, text classification, and knowledge extraction. Integrate modern NLP toolkits: SpaCy, BERT/RoBERTa, GloVe, Word2Vec, NLTK, and HuggingFace Transformers. Handle multilingual NLP, contextual embeddings, and dialogue state tracking for real-time systems. Scalable AI/ML Engineering Build and serve models using Python, FastAPI, gRPC, and REST APIs. Containerize applications with Docker, deploy using Kubernetes, and orchestrate with CI/CD workflows. Ensure production-grade reliability, latency optimization, observability, and failover mechanisms. Cloud & MLOps Infrastructure Deploy on AWS SageMaker, Azure ML Studio, or Google Vertex AI, integrating with serverless and auto-scaling services. Own end-to-end MLOps pipelines: model training, versioning, monitoring, and retraining using MLflow, Kubeflow, or TFX. Cross-Functional Collaboration Partner with Product, Engineering, and Design teams to define AI-first experiences. Translate ambiguous business problems into structured ML/AI projects with measurable ROI. Contribute to roadmap planning, POCs, technical whitepapers, and architectural reviews. Technical Skillset Required Programming: Expert in Python, with strong OOP and data structure fundamentals. Frameworks: Proficient in PyTorch, TensorFlow, Hugging Face Transformers, LangChain, OpenAI/Anthropic APIs. NLP/LLM: Strong grasp of Transformer architecture, attention mechanisms, self-supervised learning, and LLM evaluation techniques. MLOps: Skilled in CI/CD tools, FastAPI, Docker, Kubernetes, and deployment automation on AWS/Azure/GCP.
Databases: Hands-on with SQL/NoSQL databases, Vector DBs, and retrieval systems. Tooling: Familiarity with Haystack, Rasa, Semantic Kernel, LangChain Agents, and memory-based orchestration for agents. Applied Research: Experience integrating recent GenAI research (AutoGPT-style agents, Toolformer, etc.) into production systems. Bonus Points Contributions to open-source NLP or LLM projects. Publications in AI/NLP/ML conferences or journals. Experience in Online Reputation Management (ORM), martech, or CX platforms. Familiarity with reinforcement learning, multi-modal AI, or few-shot learning at scale.
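The RAG pattern named in this listing can be illustrated with a minimal, self-contained sketch. A toy bag-of-words embedding stands in for a real embedding model and vector database (Pinecone, FAISS, Weaviate); the sample documents and all names are hypothetical, not Birdeye's actual system.

```python
import numpy as np

# Toy corpus; in practice these would be chunks from a knowledge base.
docs = [
    "Birdeye aggregates customer reviews across business listings.",
    "Refunds are processed within five business days.",
    "The dashboard shows response rates per location.",
]

vocab = sorted({w.strip(".?,").lower() for d in docs for w in d.split()})

def embed(text):
    # Bag-of-words stand-in for a real embedding model (e.g. a sentence encoder).
    v = np.zeros(len(vocab))
    for w in text.lower().split():
        w = w.strip(".?,")
        if w in vocab:
            v[vocab.index(w)] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query, k=1):
    # Retrieval step of RAG: nearest chunks by cosine similarity.
    scores = doc_vecs @ embed(query)
    return [docs[i] for i in np.argsort(-scores)[:k]]

# Augmentation step: pack the retrieved context into the LLM prompt.
query = "How long do refunds take?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```

The production version this listing describes swaps the toy encoder for learned embeddings, the in-memory matrix for a vector DB, and passes the assembled prompt to an LLM for generation.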

Posted 1 day ago

Apply

5.0 years

0 Lacs

India

Remote


About AdZeta AdZeta is a technology company that leverages AI-powered smart bidding technology to drive high LTV and profitability for e-commerce and D2C brands. We turn first-party data into predictive, value-based bidding and personalised customer journeys. As we expand our data practice, we’re searching for a Data Strategist who can translate complex datasets into clear business stories that drive measurable revenue lift. End-to-End Analytics > Build and maintain marketing dashboards in Tableau or Power BI that surface channel-level ROI, LTV, and incrementality. Data Engineering & ETL > Design ETL pipelines in Google Cloud (BigQuery) or comparable environments; automate data blends from GA4, Adobe Analytics, CRMs, and CDPs. Audience & Personalisation > Partner with media teams to create high-value audiences; inform campaign personalisation strategies using segmentation, propensity scoring, and AI models. Storytelling & Advisory > Turn raw numbers into board-ready insights; present findings that influence creative, media, and product road-maps. Experimentation & AI > Test and deploy AI/ML frameworks to predict churn, optimise bidding, and generate next-best-action recommendations. Required Skills & Experience 5+ years in marketing analytics, mar-tech, or data-driven consulting. Proficiency in SQL plus one BI tool (Tableau or Power BI). Hands-on experience with GA4, Adobe Analytics, and at least one CDP or audience platform. Fluency in ETL concepts and cloud data warehouses (BigQuery / GCP preferred). Strong data-storytelling chops—able to persuade both technical and non-technical stakeholders. Exposure to AI/ML concepts or tooling (Vertex AI, AutoML, or similar) is a big plus. Nice-to-Have Experience with campaign analytics for paid search, paid social, or programmatic. Familiarity with server-side tagging, GTM, or Cloud Functions. Previous work in e-commerce, D2C, or subscription businesses. Why AdZeta Remote-first & async-friendly culture with flexible PTO. 
Ownership: competitive salary + equity option pool. Annual learning stipend for certs, conferences, or AI experimentation. Direct line of sight to C-suite; your insights shape product and go-to-market road-maps. Application Process Apply via LinkedIn with a short note on a recent analytics project you loved. 30-min intro call with People team. Data deep-dive & whiteboard session with Analytics Lead. Final culture-fit chat with Founder & CEO.

Posted 1 day ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.

We are seeking a highly skilled and experienced Senior Cloud Native Developer to join our team and drive the design, development, and delivery of cutting-edge cloud-based solutions on Google Cloud Platform (GCP). This role emphasizes technical expertise, best practices in cloud-native development, and a proactive approach to implementing scalable and secure cloud solutions.

Responsibilities
● Design, develop, and deploy cloud-based solutions using GCP, adhering to architecture standards and best practices
● Code and implement Java applications using GCP-native services such as GKE, Cloud Run, Cloud Functions, Firestore, Cloud SQL, and Pub/Sub
● Select appropriate GCP services to address functional and non-functional requirements
● Demonstrate deep expertise in GCP PaaS, serverless, and database services
● Ensure compliance with security and regulatory standards across all cloud solutions
● Optimize cloud-based solutions to enhance performance, scalability, and cost-efficiency
● Stay current on emerging cloud technologies and industry trends
● Collaborate with cross-functional teams to architect and deliver successful cloud implementations
● Apply foundational knowledge of GCP AI services, including Vertex AI, Code Bison, and Gemini models, where applicable

Requirements
● 5+ years of experience designing, implementing, and maintaining applications on GCP
● Comprehensive expertise in GCP services, including GKE, Cloud Run, Cloud Functions, Firestore, Firebase, and Cloud SQL
● Knowledge of advanced GCP services such as Apigee, Spanner, Memorystore, Service Mesh, Gemini Code Assist, Vertex AI, and Cloud Monitoring
● Solid understanding of cloud security best practices and experience implementing security controls in GCP
● Proficiency in cloud architecture principles and best practices, with a focus on scalable and reliable solutions
● Experience with automation and configuration management tools, particularly Terraform, along with a strong grasp of DevOps principles
● Familiarity with front-end technologies such as Angular or React

Nice to have
● Familiarity with GCP GenAI solutions and models, including Vertex AI, Code Bison, and Gemini models
● Background in front-end frameworks and technologies to complement back-end cloud development
● Ability to design end-to-end solutions integrating modern AI and cloud technologies

We offer
● Opportunity to work on technical challenges that may have impact across geographies
● Vast opportunities for self-development: online university, global knowledge sharing, and learning through external certifications
● Opportunity to share your ideas on international platforms
● Sponsored Tech Talks & Hackathons
● Unlimited access to LinkedIn Learning solutions
● Possibility to relocate to any EPAM office for short- and long-term projects
● Focused individual development
● Benefits package: health benefits, retirement benefits, paid time off, flexible benefits
● Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
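The role above leans on event-driven GCP services such as Pub/Sub. As a minimal illustration of the publish/subscribe pattern those services implement, here is a self-contained, in-memory Python sketch (this is a toy stand-in, not the GCP client API; all names are hypothetical):

```python
from collections import defaultdict
from typing import Callable

class InMemoryPubSub:
    """Toy stand-in for a managed pub/sub broker such as GCP Pub/Sub."""

    def __init__(self) -> None:
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic: str, callback: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: dict) -> int:
        # Deliver to every subscriber synchronously; a real broker does this
        # asynchronously with acknowledgements, retries, and dead-letter queues.
        for cb in self._subscribers[topic]:
            cb(message)
        return len(self._subscribers[topic])

received = []
bus = InMemoryPubSub()
bus.subscribe("orders", received.append)
delivered = bus.publish("orders", {"order_id": 42, "status": "created"})
print(delivered, received)  # 1 [{'order_id': 42, 'status': 'created'}]
```

In a real GCP deployment, the broker, topic provisioning, and delivery semantics are handled by the managed service; only the decoupled publisher/subscriber shape shown here carries over.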

Posted 2 days ago

Apply

5.0 years

0 Lacs

Gurgaon

On-site

EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.

We are seeking a highly skilled and experienced Senior Cloud Native Developer to join our team and drive the design, development, and delivery of cutting-edge cloud-based solutions on Google Cloud Platform (GCP). This role emphasizes technical expertise, best practices in cloud-native development, and a proactive approach to implementing scalable and secure cloud solutions.

Responsibilities
● Design, develop, and deploy cloud-based solutions using GCP, adhering to architecture standards and best practices
● Code and implement Java applications using GCP-native services such as GKE, Cloud Run, Cloud Functions, Firestore, Cloud SQL, and Pub/Sub
● Select appropriate GCP services to address functional and non-functional requirements
● Demonstrate deep expertise in GCP PaaS, serverless, and database services
● Ensure compliance with security and regulatory standards across all cloud solutions
● Optimize cloud-based solutions to enhance performance, scalability, and cost-efficiency
● Stay current on emerging cloud technologies and industry trends
● Collaborate with cross-functional teams to architect and deliver successful cloud implementations
● Apply foundational knowledge of GCP AI services, including Vertex AI, Code Bison, and Gemini models, where applicable

Requirements
● 5+ years of experience designing, implementing, and maintaining applications on GCP
● Comprehensive expertise in GCP services, including GKE, Cloud Run, Cloud Functions, Firestore, Firebase, and Cloud SQL
● Knowledge of advanced GCP services such as Apigee, Spanner, Memorystore, Service Mesh, Gemini Code Assist, Vertex AI, and Cloud Monitoring
● Solid understanding of cloud security best practices and experience implementing security controls in GCP
● Proficiency in cloud architecture principles and best practices, with a focus on scalable and reliable solutions
● Experience with automation and configuration management tools, particularly Terraform, along with a strong grasp of DevOps principles
● Familiarity with front-end technologies such as Angular or React

Nice to have
● Familiarity with GCP GenAI solutions and models, including Vertex AI, Code Bison, and Gemini models
● Background in front-end frameworks and technologies to complement back-end cloud development
● Ability to design end-to-end solutions integrating modern AI and cloud technologies

We offer
● Opportunity to work on technical challenges that may have impact across geographies
● Vast opportunities for self-development: online university, global knowledge sharing, and learning through external certifications
● Opportunity to share your ideas on international platforms
● Sponsored Tech Talks & Hackathons
● Unlimited access to LinkedIn Learning solutions
● Possibility to relocate to any EPAM office for short- and long-term projects
● Focused individual development
● Benefits package: health benefits, retirement benefits, paid time off, flexible benefits
● Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)

Posted 2 days ago

Apply

5.0 years

0 Lacs

India

On-site


Who We Are
Motive empowers the people who run physical operations with tools to make their work safer, more productive, and more profitable. For the first time ever, safety, operations, and finance teams can manage their drivers, vehicles, equipment, and fleet-related spend in a single system. Combined with industry-leading AI, the Motive platform gives you complete visibility and control, and significantly reduces manual workloads by automating and simplifying tasks. Motive serves more than 100,000 customers – from Fortune 500 enterprises to small businesses – across a wide range of industries, including transportation and logistics, construction, energy, field service, manufacturing, agriculture, food and beverage, retail, and the public sector. Visit gomotive.com to learn more.

About The Role
The QA Manager of Enterprise Systems Engineering leads a cross-functional team of engineers who test, validate, and support enterprise software applications to ensure the delivery of high-quality, scalable solutions. This is a technical leadership role that engages with the most complex and critical quality assurance challenges affecting Motive's business operations, from the top of the funnel through order processing, customer support and experience, billing, revenue, and more. The ideal candidate will drive automation initiatives, optimize testing strategies, and ensure high-quality software delivery across complex, scalable systems. You will play a pivotal role in setting technical direction, defining success metrics, and leading teams to achieve business-critical objectives.
What You'll Do
Reporting to the Director of Enterprise Systems, this role leads an Agile QA operation that designs and implements scalable testing strategies for Lead-to-Cash systems, which today include Salesforce, Salesforce packages, and integrated solutions such as Zuora, RevPro, Vertex, Boomi, NetSuite, and more.
● Lead a motivated cross-discipline team of QA engineers, manual testers, automation engineers, and quality analysts in ensuring the quality of enterprise applications and integrations.
● Guide teams in innovative use of QA tools and methodologies, including test automation frameworks, regression testing strategies, performance testing, and integration testing, to ensure robust quality assurance for business requirements.
● Commit to continuous improvement, working to improve communication, collaboration, and alignment within the QA team and with cross-functional teams inside and outside the organization.
● Work with QA and technical teams to establish best practices, standards, and operational guidelines, with a focus on testing efficiency, automation coverage, and defect prevention and reduction.
● Be the subject matter expert driving industry best practices for QA processes in the L2C ecosystem and associated integrated tools.
● Stay current on L2C system releases, new features, product roadmaps, QA trends, tools, test automation frameworks, and industry advancements.
● Collaborate with Product Management, the business, and key IT stakeholders to plan, prioritize, and schedule testing activities, ensuring thorough validation of applications and integrations.
● Deliver quality assurance within a SOX-compliant control environment with proper defect tracking and change management processes.
● Be accountable for defect leakage, testing density/coverage, and overall product quality, including ownership of QA metrics and the testing lifecycle.
● Provide oversight of standards adherence through reviews of test plans, test cases, and defect tracking.
Strategic & Cross-functional Collaboration
● Collaborate with leadership to establish OKRs and headcount strategy.
● Play an active role in defining the future state of QA engineering and planning technology roadmaps.

People & Performance Management
● Develop and execute a performance and development strategy for one or more QA teams.
● Partner with department management to proactively plan staffing needs and resource allocation.
● Act as a mentor and coach for career development, ensuring high engagement, performance reviews, and conflict resolution.
● Implement strategies to mitigate burnout and foster a high-performance culture.

What We're Looking For
● BS/MS degree in Computer Science with 5+ years of management or leadership experience in this field
● Proven track record of managing large teams in a fast-paced environment
● Strong expertise in automation frameworks, CI/CD pipelines, and scalable testing methodologies
● Experience with automation testing frameworks and tools such as Selenium, Playwright, or equivalent
● Proven experience driving QA processes and strategies for enterprise systems, including Salesforce technologies (e.g., Sales/CRM, Service, CPQ, Commerce)
● Strong understanding of QA methodologies, tools, and processes
● Hands-on experience developing and executing comprehensive test plans, test cases, and automated test scripts
● Proven experience managing onshore/offshore models with a hybrid of vendors/consultants and FTEs
● Understanding of Salesforce configuration/customization principles to collaborate effectively with engineering teams
● Ability to create an environment for honest and open discussion to resolve critical issues by collaborating with team members
● Excellent spoken and written communication skills, with the ability to present complex ideas clearly and concisely to technical and non-technical audiences
● Ability to deal with ambiguity and thrive in a rapidly changing business environment
● Experience with QA tools for DevOps and source code management
● Experience with Agile methodologies, including Scrum and continuous integration environments (Copado, Gearset, AutoRABIT, etc.)
● Proven track record of enhancing QA processes within an Agile framework
● Strong analytical skills to interpret data, identify trends, and draw meaningful conclusions about the quality of the team/function

Creating a diverse and inclusive workplace is one of Motive's core values. We are an equal opportunity employer and welcome people of different backgrounds, experiences, abilities, and perspectives. Please review our Candidate Privacy Notice here. UK Candidate Privacy Notice here.

The applicant must be authorized to receive and access those commodities and technologies controlled under U.S. Export Administration Regulations. It is Motive's policy to require that employees be authorized to receive access to Motive products and technology.
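The test plans and automated scripts described above often take the form of data-driven regression suites. A minimal sketch of that pattern, using a hypothetical order-total calculation as the system under test (not Motive's actual schema or logic):

```python
def order_total(quantity: int, unit_price: float, discount_pct: float) -> float:
    """Hypothetical L2C validation target: compute an order line total."""
    if quantity < 0 or unit_price < 0 or not (0 <= discount_pct <= 100):
        raise ValueError("invalid order line")
    return round(quantity * unit_price * (1 - discount_pct / 100), 2)

# Data-driven regression cases: (inputs, expected) pairs maintained as data,
# so new cases can be added without touching the harness.
CASES = [
    ((10, 99.99, 0), 999.90),
    ((5, 20.00, 10), 90.00),
    ((0, 50.00, 0), 0.00),
]

def run_regression(cases):
    """Return (inputs, expected, actual) triples for every failing case."""
    return [(args, exp, order_total(*args))
            for args, exp in cases if order_total(*args) != exp]

print(run_regression(CASES))  # [] when every case passes
```

In practice the harness would live in a framework like pytest or a Salesforce-aware tool, with defect tracking wired to each failing triple; the data-driven shape is what scales coverage.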

Posted 2 days ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Description

Responsibilities
● Design and implement CI/CD pipelines for AI and ML model training, evaluation, and RAG system deployment (including LLMs, vector databases, embedding and reranking models, governance and observability systems, and guardrails).
● Provision and manage AI infrastructure across cloud hyperscalers (AWS/GCP) using infrastructure-as-code tools (strong preference for Terraform).
● Maintain containerized environments (Docker, Kubernetes) optimized for GPU workloads and distributed compute.
● Support vector database, feature store, and embedding store deployments (e.g., pgvector, Pinecone, Redis, Featureform, MongoDB Atlas).
● Monitor and optimize performance, availability, and cost of AI workloads using observability tools (e.g., Prometheus, Grafana, Datadog, or managed cloud offerings).
● Collaborate with data scientists, AI/ML engineers, and other members of the platform team to ensure smooth transitions from experimentation to production.
● Implement security best practices, including secrets management, model access control, data encryption, and audit logging for AI pipelines.
● Help support the deployment and orchestration of agentic AI systems (LangChain, LangGraph, CrewAI, Copilot Studio, AgentSpace, etc.).

Must Haves
● 4+ years of DevOps, MLOps, or infrastructure engineering experience, preferably with 2+ years in AI/ML environments.
● Hands-on experience with cloud-native services (AWS Bedrock/SageMaker, GCP Vertex AI, or Azure ML) and GPU infrastructure management.
● Strong skills in CI/CD tools (GitHub Actions, ArgoCD, Jenkins) and configuration management (Ansible, Helm, etc.).
● Proficiency in scripting languages such as Python and Bash (Go or similar is a nice plus).
● Experience with monitoring, logging, and alerting systems for AI/ML workloads.
● Deep understanding of Kubernetes and container lifecycle management.

Bonus Attributes
● Exposure to MLOps tooling such as MLflow, Kubeflow, SageMaker Pipelines, or Vertex AI Pipelines.
● Familiarity with prompt engineering, model fine-tuning, and inference serving.
● Experience with secure AI deployment and compliance frameworks.
● Knowledge of model versioning, drift detection, and scalable rollback strategies.

Abilities
● Ability to work with a high level of initiative, accuracy, and attention to detail.
● Ability to prioritize multiple assignments effectively.
● Ability to meet established deadlines.
● Ability to interact successfully, efficiently, and professionally with staff and customers.
● Excellent organization skills.
● Critical thinking ability, ranging from moderately to highly complex problems.
● Flexibility in meeting the business needs of the customer and the company.
● Ability to work creatively and independently with latitude and minimal supervision.
● Ability to use experience and judgment to accomplish assigned goals.
● Experience navigating organizational structure.
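The GPU-workload scheduling this role mentions usually comes down to an extended resource request in the pod spec. A sketch of such a manifest, built here as a plain Python dict so it is self-contained (the image name is hypothetical; in a real cluster this would be YAML applied with kubectl, and the NVIDIA device plugin must be installed):

```python
import json

# Illustrative Kubernetes pod spec requesting one NVIDIA GPU for a training job.
pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "trainer"},
    "spec": {
        "containers": [{
            "name": "train",
            "image": "example.com/train:latest",  # hypothetical image
            "resources": {
                # GPUs are requested via the extended resource name
                # "nvidia.com/gpu"; requests and limits must be equal for it.
                "limits": {"nvidia.com/gpu": 1, "memory": "16Gi"},
                "requests": {"nvidia.com/gpu": 1, "memory": "16Gi"},
            },
        }],
        "restartPolicy": "Never",  # batch-style training job, not a service
    },
}

print(json.dumps(pod_spec, indent=2))
```

In an infrastructure-as-code setup, the same spec would typically be templated via Helm or Terraform rather than hand-written per job.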

Posted 2 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


What You Will Be Doing
● Engage in hands-on coding: You will dedicate most of your time to hands-on coding, developing and owning features across the entire product stack from conception to deployment. We have async systems to promote development without being bogged down in meetings all day!
● Build systems and cloud infrastructure: You will design, build, and scale robust systems and cloud infrastructure to support our applications and enable secure, efficient interactions with major AI platforms.
● Define technical strategy and culture: You will collaborate closely with the founding team to shape technical strategy, influence product development, and establish our engineering culture and best practices.
● Solve problems proactively: You will drive success by proactively contributing wherever you can, without waiting for established processes.
● Embrace fast-paced development: You thrive in a fast-paced environment, iterating and testing in line with lean startup best practices, always eager to learn.
● Demonstrate quality and foresight: You meticulously analyze potential outcomes and demonstrate foresight in considering various situations, ensuring high-quality results in all aspects of your work.

What Are We Looking For
● Experience at companies operating at scale
● Experience building complex systems from scratch
● Experience with cloud infrastructure
● Previous startup experience (nice to have)
● Experience building in the AI stack (nice to have, or a hunger to learn)

Our Tech Stack
● Basic: Python, Ruby on Rails, Next.js
● AI: OpenAI, Anthropic, Google Vertex AI
● Infra: Azure, K8s

Our Values
● Owner mentality: We take pride in our work, treat the company as our own, and proactively take initiative to drive success.
● Obsession with building and deploying: We are highly obsessed with building and deploying exceptional products, relentlessly striving to enhance the customer experience.
● Continuous learning: Like AI models, we grow from our experiences, constantly improving ourselves and our work.
● Continuous iteration: We embrace the power of refinement, welcoming feedback to iteratively enhance our products and processes.
● Collaborative: We are a team of high achievers; we enjoy each other's company and collaborate effectively.
● Humility: We have strong opinions but hold them loosely, collaborating without ego and valuing diverse perspectives.

Posted 2 days ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Position Title: AI/ML Engineer
Company: Cyfuture India Pvt. Ltd.
Industry: IT Services and IT Consulting
Location: Sector 81, NSEZ, Noida (5 days work from office)

About Cyfuture
Cyfuture is a trusted name in IT services and cloud infrastructure, offering state-of-the-art data center solutions and managed services across platforms like AWS, Azure, and VMware. We are expanding rapidly in system integration and managed services, building strong alliances with global OEMs such as VMware, AWS, Azure, HP, Dell, Lenovo, and Palo Alto.

Position Overview
We are hiring an experienced AI/ML Engineer to lead and shape our AI/ML initiatives. The ideal candidate will have hands-on experience in machine learning and artificial intelligence, with strong leadership capabilities and a passion for delivering production-ready solutions. This role involves end-to-end ownership of AI/ML projects, from strategy development to deployment and optimization of large-scale systems.

Key Responsibilities
● Lead and mentor a high-performing AI/ML team.
● Design and execute AI/ML strategies aligned with business goals.
● Collaborate with product and engineering teams to identify impactful AI opportunities.
● Build, train, fine-tune, and deploy ML models in production environments.
● Manage operations of LLMs and other AI models using modern cloud and MLOps tools.
● Implement scalable and automated ML pipelines (e.g., with Kubeflow or MLRun).
● Handle containerization and orchestration using Docker and Kubernetes.
● Optimize GPU/TPU resources for training and inference tasks.
● Develop efficient RAG pipelines with low latency and high retrieval accuracy.
● Automate CI/CD workflows for continuous integration and delivery of ML systems.

Key Skills & Expertise

Cloud Computing & Deployment
● Proficiency in AWS, Google Cloud, or Azure for scalable model deployment.
● Familiarity with cloud-native services like AWS SageMaker, Google Vertex AI, or Azure ML.
● Expertise in Docker and Kubernetes for containerized deployments.
● Experience with Infrastructure as Code (IaC) using tools like Terraform or CloudFormation.

Machine Learning & Deep Learning
● Strong command of frameworks: TensorFlow, PyTorch, Scikit-learn, XGBoost.
● Experience with MLOps tools for integration, monitoring, and automation.
● Expertise in pre-trained models, transfer learning, and designing custom architectures.

Programming & Software Engineering
● Strong skills in Python (NumPy, Pandas, Matplotlib, SciPy) for ML development.
● Backend/API development with FastAPI, Flask, or Django.
● Database handling with SQL and NoSQL (PostgreSQL, MongoDB, BigQuery).
● Familiarity with CI/CD pipelines (GitHub Actions, Jenkins).

Scalable AI Systems
● Proven ability to build AI-driven applications at scale.
● Handling large datasets, high-throughput requests, and real-time inference.
● Knowledge of distributed computing: Apache Spark, Dask, Ray.

Model Monitoring & Optimization
● Hands-on experience with model compression, quantization, and pruning.
● A/B testing and performance tracking in production.
● Knowledge of model retraining pipelines for continuous learning.

Resource Optimization
● Efficient use of compute resources: GPUs, TPUs, CPUs.
● Experience with serverless architectures to reduce cost.
● Auto-scaling and load balancing for high-traffic systems.

Problem-Solving & Collaboration
● Translate complex ML models into user-friendly applications.
● Work effectively with data scientists, engineers, and product teams.
● Write clear technical documentation and architecture reports.

(ref:hirist.tech)
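The RAG pipelines this listing mentions reduce, at their core, to embedding a query and retrieving the nearest documents by vector similarity. A minimal, self-contained sketch of that retrieval step with toy hand-written 3-dimensional "embeddings" (a real pipeline would use an embedding model and a vector database such as pgvector or Pinecone):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy corpus: document -> hand-written "embedding" (illustrative only).
corpus = {
    "gpu pricing": [0.9, 0.1, 0.0],
    "kubernetes autoscaling": [0.1, 0.9, 0.2],
    "terraform modules": [0.0, 0.2, 0.9],
}

def retrieve(query_vec: list[float], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query vector."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, corpus[d]),
                    reverse=True)
    return ranked[:k]

print(retrieve([0.2, 0.95, 0.1]))  # ['kubernetes autoscaling']
```

The retrieved documents would then be stuffed into the LLM prompt as context; the latency and accuracy targets in the listing are mostly about doing this lookup fast over millions of vectors, which is what approximate nearest-neighbor indexes in vector databases provide.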

Posted 2 days ago

Apply
cta

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies