6.0 - 10.0 years
20 - 22 Lacs
Indore, Pune, Chennai
Work from Office
We are seeking a highly skilled Senior Data Scientist / Conversational AI Architect to lead the design, development, and deployment of advanced conversational AI solutions using Google Cloud's Vertex AI and Dialogflow CX. This role requires deep expertise in building, testing, and productionizing voice and chat agents, with hands-on experience in creating complex flows, playbooks, and evaluation frameworks. You will be responsible for driving agent development from concept to production, ensuring high-quality user experiences across both voice and chat channels. A strong foundation in Python (Cloud Functions, Colab) and SQL (preferably BigQuery) is essential for building scalable solutions and performing data-driven evaluations. The ideal candidate will also possess excellent analytical skills to interpret user behavior and optimize agent performance. As a team lead, you will guide a team of developers and data scientists, manage stakeholder expectations, and ensure effective communication across business and technical teams. Familiarity with Agile program management methodologies is a plus. This role is best suited for someone with a blend of technical, leadership, and strategic thinking capabilities, able to translate complex requirements into impactful conversational AI solutions.
Posted 1 week ago
12.0 - 16.0 years
25 - 40 Lacs
Pune
Work from Office
We are seeking a highly skilled and experienced GCP Cloud Architect to lead the design, development, and optimization of cloud-based solutions on the Google Cloud Platform. This role requires a deep understanding of cloud architecture, GCP services, and best practices for building scalable, secure, and efficient cloud environments. The ideal candidate will have hands-on experience with GCP tools, infrastructure automation, and cloud security.

Must-have skills:
- GCP Expertise: In-depth knowledge of GCP services, including Compute Engine, Kubernetes Engine, Cloud Storage, BigQuery, and Pub/Sub.
- Infrastructure Automation: Proficiency with IaC tools like Terraform or GCP Deployment Manager.
- Cloud Security: Strong understanding of IAM, VPC, firewall configurations, and encryption in GCP.
- Programming Skills: Proficiency in Python, Go, or other programming languages for automation and scripting.
- Data Engineering Knowledge: Familiarity with data pipelines and services like Dataflow, Dataproc, or BigQuery.
- Monitoring and Observability: Hands-on experience with GCP monitoring tools and best practices.
- Problem-solving: Strong analytical and troubleshooting skills.
- Collaboration: Excellent communication and teamwork abilities to work effectively across teams.

Roles and responsibilities:
- Architecture Design: Design and develop robust and scalable cloud architectures leveraging GCP services to meet business requirements.
- Cloud Migration and Optimization: Lead cloud migration initiatives and optimize existing GCP environments for cost, performance, and reliability.
- Infrastructure as Code (IaC): Implement and maintain infrastructure automation using tools like Terraform or Deployment Manager.
- Integration with Cloud Services: Integrate various GCP services (e.g., BigQuery, Pub/Sub, Cloud Functions) into cohesive and efficient solutions.
- Observability and Monitoring: Set up monitoring, logging, and alerting using GCP-native tools like Cloud Monitoring and Cloud Logging.
- Security and Compliance: Ensure cloud environments adhere to industry security standards and compliance requirements, implementing IAM policies, encryption, and network security measures.
- Collaboration: Work with cross-functional teams, including DevOps, data engineers, and application developers, to deliver cloud solutions.

Preferred Qualifications:
- GCP certifications (e.g., Professional Cloud Architect, Professional Data Engineer).
- Experience with container orchestration using Kubernetes (GKE).
- Knowledge of CI/CD pipelines and DevOps practices.
- Exposure to AI/ML services like Vertex AI.
Posted 1 week ago
6.0 - 8.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
MLOps
We are looking for a highly skilled Analytics & Data Engineering professional with a strong background in Machine Learning, MLOps, and DevOps. The ideal candidate will have experience designing and implementing scalable data and analytics pipelines, enabling production-grade ML systems, and supporting agent-based development leveraging MCP/OpenAPI-to-MCP wrappers and A2A protocols. This role combines hands-on technical work with solution design and will require close collaboration with data scientists, product teams, and engineering stakeholders.

Key Responsibilities:
- Design, build, and maintain scalable data pipelines and ETL/ELT processes for analytics and ML workloads.
- Implement MLOps frameworks to manage the model lifecycle (training, deployment, monitoring, and retraining).
- Apply DevOps best practices (CI/CD, containerization, infrastructure as code) to ML and data engineering workflows.
- Develop and optimize data models, feature stores, and ML serving architectures.
- Collaborate with AI/ML teams to integrate models into production environments.
- Support agent development using MCP/OpenAPI-to-MCP wrappers and A2A (Agent-to-Agent) communication protocols.
- Ensure data quality, governance, and compliance with security best practices.
- Troubleshoot and optimize data workflows for performance and reliability.

Required Skills & Experience:
- Core: 6+ years in analytics and data engineering roles. Proficiency in SQL, Python, and data pipeline orchestration tools (e.g., Airflow, Prefect). Experience with distributed data processing frameworks (e.g., Spark, Databricks).
- ML/MLOps: Experience deploying and maintaining ML models in production. Knowledge of MLOps tools (MLflow, Kubeflow, SageMaker, Vertex AI, etc.).
- DevOps: Hands-on experience with CI/CD (Jenkins, GitHub Actions, GitLab CI). Proficiency with Docker, Kubernetes, and cloud-based deployment (AWS, Azure, GCP).
- Specialized: Experience with MCP/OpenAPI-to-MCP wrapper integrations. Experience working with A2A protocols in agent development. Familiarity with agent-based architectures and multi-agent communication patterns.

Preferred Qualifications:
- Master's degree in Computer Science, Data Engineering, or a related field.
- Experience in real-time analytics and streaming data pipelines (Kafka, Kinesis, Pub/Sub).
- Exposure to LLM-based systems or intelligent agents.
- Strong problem-solving skills and ability to work in cross-functional teams.
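Hedged illustration (not part of the posting): the "training, deployment, monitoring, and retraining" lifecycle above usually hinges on a monitoring check that decides when retraining is triggered. A minimal plain-Python sketch of that decision rule, with all names and thresholds hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ModelMonitor:
    """Tracks a production model's evaluation metric against its deployment baseline."""
    baseline_score: float     # metric recorded when the model was deployed
    tolerance: float = 0.05   # allowed relative degradation before retraining

    def needs_retraining(self, current_score: float) -> bool:
        # Trigger retraining when the live metric drops more than `tolerance`
        # relative to the baseline captured at deployment time.
        return current_score < self.baseline_score * (1 - self.tolerance)

monitor = ModelMonitor(baseline_score=0.90)
print(monitor.needs_retraining(0.88))  # within tolerance -> False
print(monitor.needs_retraining(0.84))  # degraded by more than 5% -> True
```

Real MLOps stacks (MLflow, Vertex AI model monitoring, etc.) perform this comparison against logged baselines automatically; the sketch only shows the underlying check.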
Posted 1 week ago
4.0 - 9.0 years
12 - 22 Lacs
Bangalore Rural, Bengaluru
Work from Office
Role & responsibilities:
- Build ML Models: Design and develop machine learning models using techniques like regression, classification, clustering, deep learning, and NLP to solve real business problems.
- Solve Business Problems: Turn unclear business questions into clear data problems and create smart AI/ML solutions that add value.
- Own the ML Lifecycle: Work on the full journey of ML projects, from exploring data and building prototypes to deploying models and improving them over time.
- Work with Data: Analyze and clean structured and semi-structured data to prepare it for modeling.
- Use the Right Algorithms: Apply the best ML algorithms and understand how they work under the hood.
- Optimize & Automate: Monitor model performance and use MLOps tools to automate testing, deployment, and retraining.
- Collaborate & Communicate: Work closely with engineers, product teams, and business stakeholders. Explain technical ideas in simple terms.
- Stay Updated: Keep learning about new tools, techniques, and trends in AI/ML.
- Use Cloud Tools (Nice to Have): Experience with cloud platforms like GCP and tools like Vertex AI is a plus.

Skills:
- Strong understanding of statistics, probability, and ML algorithms.
- Hands-on experience with Python (pandas, scikit-learn, TensorFlow, PyTorch).
- Good with SQL and data visualization.
- Experience with model evaluation and feature engineering.
- Ability to explain models and results clearly.

Bonus Skills:
- Experience with deep learning (CNNs, RNNs, Transformers).
- Familiarity with MLOps tools (MLflow, TFX, Docker, Airflow).
- Cloud experience (GCP, AWS, Azure).
- Strong communication and a passion for learning.
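As a hedged aside (not from the posting): the model-evaluation skill this role asks for often starts with precision, recall, and F1 computed from confusion-matrix counts. A minimal sketch:

```python
def classification_metrics(tp: int, fp: int, fn: int) -> dict:
    """Precision, recall, and F1 from confusion-matrix counts
    (true positives, false positives, false negatives)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# 80 correct detections, 20 false alarms, 20 misses
print(classification_metrics(tp=80, fp=20, fn=20))
# precision = 0.8, recall = 0.8, f1 = 0.8
```

Libraries like scikit-learn provide these as ready-made functions; the sketch just makes the arithmetic explicit.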
Posted 2 weeks ago
10.0 - 12.0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
India is among the top ten priority markets for General Mills, and hosts our Global Shared Services Centre. This is the Global Shared Services arm of General Mills Inc., which supports its operations worldwide. With over 1,300 employees in Mumbai, the center has capabilities in the areas of Supply Chain, Finance, HR, Digital and Technology, Sales Capabilities, Consumer Insights, ITQ (R&D & Quality), and Enterprise Business Services. Learning and capacity-building is a key ingredient of our success.

Position Title: Sr. D&T Machine Learning Engineer
Function/Group: Digital and Technology
Location: Mumbai
Shift Timing: Regular
Role Reports to: Manager, MLE
Remote/Hybrid/In-Office: Remote

ABOUT GENERAL MILLS
We make food the world loves: 100 brands. In 100 countries. Across six continents. With iconic brands like Cheerios, Pillsbury, Betty Crocker, Nature Valley, and Häagen-Dazs, we've been serving up food the world loves for 155 years (and counting). Each of our brands has a unique story to tell. How we make our food is as important as the food we make. Our values are baked into our legacy and continue to accelerate us into the future as an innovative force for good. General Mills was founded in 1866 when Cadwallader Washburn boldly bought the largest flour mill west of the Mississippi. That pioneering spirit lives on today through our leadership team, who uphold a vision of relentless innovation while being a force for good. For more details check out
General Mills India Center (GIC) is our global capability center in Mumbai that works as an extension of our global organization, delivering business value, service excellence and growth, while standing for good for our planet and people.
With our team of 1800+ professionals, we deliver superior value across the areas of Supply Chain (SC), Digital & Technology (D&T), Innovation, Technology & Quality (ITQ), Consumer and Market Intelligence (CMI), Sales Strategy & Intelligence (SSI), Global Shared Services (GSS), Finance Shared Services (FSS) and Human Resources Shared Services (HRSS). For more details check out
We advocate for advancing equity and inclusion to create more equitable workplaces and a better tomorrow.

JOB OVERVIEW
Function Overview: The Digital and Technology team at General Mills stands as the largest and foremost unit, dedicated to exploring the latest trends and innovations in technology while leading the adoption of cutting-edge technologies across the organization. Collaborating closely with global business teams, the focus is on understanding business models and identifying opportunities to leverage technology for increased efficiency and disruption. The team's expertise spans a wide range of areas, including AI/ML, Data Science, IoT, NLP, Cloud, Infrastructure, RPA and Automation, Digital Transformation, Cyber Security, Blockchain, SAP S4 HANA and Enterprise Architecture. The MillsWorks initiative embodies an agile@scale delivery model, where business and technology teams operate cohesively in pods with a unified mission to deliver value for the company. Employees working on significant technology projects are recognized as Digital Transformation change agents. The team places a strong emphasis on service partnerships and employee engagement, with a commitment to advancing equity and supporting communities. In fostering an inclusive culture, the team values individuals passionate about learning and growing with technology, exemplified by the Work with Heart philosophy, emphasizing results over facetime.
Those intrigued by the prospect of contributing to the digital transformation journey of a Fortune 500 company are encouraged to explore more details about the function through the provided link.

Purpose of the role: General Mills, Digital and Technology India, is seeking a Sr. Machine Learning Engineer to join the Enterprise Data Capabilities Organization. This team builds enterprise-level, scalable, and sustainable data and model pipelines to serve the analytic needs of business-impacting problem statements. In this role, you are a critical member of the data science team, focused on operationalizing ML and AI models; this entails model management and monitoring too. Success means recommending innovative ways to automate MLOps pipelines on GCP and setting standards that ensure repeatable success. This capability is leveraged to fuel advanced analytical solutions, machine learning, and deep learning. The role is also responsible for implementing and enhancing a community of practice to determine the best practices, standards, and MLOps frameworks to efficiently deliver enterprise data solutions at General Mills. This role works in close collaboration with Data Scientists, Data Engineers, Platform Engineers, and technical experts to support analytic consumption needs, enhancing the performance of models and automating production pipelines to gain efficiency.
KEY ACCOUNTABILITIES

Establish and implement MLOps practices:
- Development of an end-to-end MLOps framework and machine learning pipelines using GCP, Vertex AI, and software tools
- Management of data pipelines, including configuration, ingestion, and transformation from multiple data sources such as BigQuery, dbt, and Google Cloud Storage
- Metadata and statistics pipeline setup using GCP buckets and ML Metadata (MLMD)
- Re-training and monitoring pipeline setup with multiple criteria
- Vertex AI serving pipeline creation using Vertex AI and GCP services
- Resource and infrastructure monitoring configuration and pipeline development using GCP
- Automated pipeline development for Continuous Integration (CI), Continuous Deployment (CD), Continuous Monitoring (CM), and Continuous Training (CT) using GCP-native tools
- Branching strategies and version control using GitHub
- ML pipeline orchestration and configuration using DAGs, with workflow orchestration using Airflow / Cloud Composer
- Code refactoring and implementation of coding best practices as per industry standards
- Technology-stack recommendations based on a 360-degree assessment
- Implementing MLOps practices on projects and following the established MLOps standards
- Supporting ML models throughout the end-to-end MLOps lifecycle, from development to maintenance

Architecture:
- Microservices architecture and framework development concepts
- Agile software development concepts
- Architecture design for HLD, LLD, and solution design

Team mentoring:
- Programming language and design pattern implementation
- Reviewing project PRs and PBIs and suggesting improvements
- Knowledge-sharing sessions with the team on specific MLOps topics
- Guiding/mentoring team members on MLOps framework development

Research, evolve, and publish best practices:
- Research and operationalize the technology and processes necessary to scale MLOps
- Research and recommend MLOps best practices for new technologies and platforms, along with pipeline improvement plans and suggestions

Communication and collaboration:
- Collaborate with technical teams such as the Data Science Lead, Data Scientists, Data Engineers, and Platform Engineers
- Share knowledge with the broader analytics team and stakeholders
- Communicate ongoing work to support remote and cross-geography collaboration
- Align on key priorities and focus areas
- Communicate accomplishments, failures, and risks in a timely manner

Embrace a learning mindset: Continually invest in your own knowledge and skillset through formal training, reading, and attending conferences and meetups.

Documentation: Document MLOps processes, development, architecture, and innovation, and be instrumental in reviewing the same for other team members.

MINIMUM QUALIFICATIONS
- Total experience required: 10-12 years
- Minimum qualification: Bachelor's degree (full time)
- Expertise and at least 5 years of professional experience in end-to-end MLOps frameworks
- Expertise in data transformation and manipulation through BigQuery/SQL
- Professional experience with Vertex AI and GCP services
- Expertise in one programming language: Python or R
- Experience with Airflow / Cloud Composer
- Experience with Kubernetes/Kubeflow
- Experience with MLflow
- Professional experience with TFX
- Professional experience with Docker containers
- At least 5 years of professional experience in the related field of data science
- Strong communication skills, both verbal and written, including the ability to interact effectively with colleagues of varying technical and non-technical backgrounds
- Passionate about agile software processes, data-driven development, reliability, and systematic experimentation

Expert level: MLOps E2E framework; BigQuery/SQL; Python/R; Vertex AI and GCP services; Docker containers; Kubeflow/Kubernetes; TFX; Airflow; MLflow; GitHub; strong communication skills

Intermediate level: machine learning and deep learning algorithms; agile techniques; teamworking skills; mentoring others and leading best practices; microservices concepts; Power BI, Tableau, Looker

Basic level: good-to-have domain knowledge of the Consumer Packaged Goods industry and its data sources; analytic toolset: dbt, AtScale, Neo4j, Atlassian

PREFERRED QUALIFICATIONS
- GCP certification
- Understanding of the CPG industry
- Basic understanding of dbt
- AutoML concepts
- Machine learning: concepts of algorithms
- Deep learning: concepts of algorithms
- Time series analysis: concepts of algorithms
Posted 2 weeks ago
12.0 - 18.0 years
40 - 45 Lacs
Pune, Bengaluru, Delhi / NCR
Hybrid
Greetings! At present we have an urgent leadership opening with an esteemed client.

Position: AI Architect - AVP / SAVP
Job Location: Noida / Gurgaon / Pune / Bangalore (Hybrid Model)
Mandatory Skills Required: AI Architect, Data Science, GenAI, Vertex AI, Python, NLP, Deep Learning Frameworks
Designation: AI Architect
Role: Permanent / Full time
Experience: 11-16 years
Location: Noida / Gurgaon / Pune / Bangalore
Shift: 12:00 PM to 10:00 PM (10-hour shift; also depends on project/work dependencies)
Working Days: 5 days
Work Mode: Hybrid

Mandatory Skills:
- Transformer models: fine-tuning or training of open-source Transformer-based models.
- Build and fine-tune large-scale language models using PyTorch and modern libraries.
- Should have done projects on LLMs, NLP, LLaMA, Hugging Face, RAG, and Gen AI.
- Hands-on Python coding experience.
- Machine learning or deep learning.
- Good experience in Agentic AI; should have done end-to-end projects in Gen AI.

Skills:
1. Proven experience as an NLP and ML Engineer or similar role
2. Have worked on open-source LLMs (like Falcon and LLaMA) for various text generation tasks
3. Have done SFT or PEFT on decoder-based LLMs for a specific text generation task
4. Understanding of NLP techniques for text representation, semantic extraction techniques, data structures, and modeling
5. Have worked on BERT models for intent classifiers and entity extraction techniques
6. Ability to write robust and testable code
7. Experience with machine learning frameworks (PyTorch)
8. Knowledge of Python, Groovy Script, BPMN
9. An analytical mind with problem-solving abilities
10. Degree in Computer Science, Mathematics, Computational Linguistics, or a similar field

Responsibilities:
1. Study and transform data science prototypes
2. Design NLP applications
3. Use effective text representations to transform natural language into useful features
4. Find and implement the right algorithms and tools for NLP tasks
5. Develop NLP systems according to requirements
6. Train the developed model and run evaluation experiments
7. Perform statistical analysis of results and refine models
8. Extend ML libraries and frameworks to apply in NLP tasks
9. Remain updated in the rapidly changing field of machine learning

Please reply to the following queries along with your updated resume:
- Current Organization:
- Current Designation:
- Reason for change:
- Reporting to:
- Date of Birth:
- Qualification:
- PAN Number:
- Current CTC (Fixed + Variable):
- CTC Expectation:
- Any other offer in hand or in process:
- If yes, please mention the offer details:
- Official Notice Period:
- Are you serving the notice period:
- Mention your last working date:
- Present team size:
- Maximum team size handled:
- Location staying at:
- Location interested in (Noida / Gurgaon / Pune / Bangalore):
- Technical skills expertise:
- Domains worked on:
- Total years of experience:
- Yrs of exp as AI Architect:
- Yrs of exp in a Data Scientist role:
- Yrs of exp in Artificial Intelligence:
- Yrs of exp in GenAI:
- No. of end-to-end projects done in Gen AI:
- Yrs of exp in Transformer-based models:
- Yrs of exp in open-source Transformer-based models:
- Mention the Transformer-based models worked on (like BERT, T5, RoBERTa):
- Yrs of exp in Large Language Models (LLMs):
- Yrs of exp in NLP:
- Yrs of exp in Hugging Face:
- Yrs of exp in Retrieval-Augmented Generation (RAG):
- Yrs of exp in deep learning frameworks:
- Mention the deep learning frameworks you have worked on:
- Yrs of exp in machine learning:
- Yrs of exp in open-source LLMs (like Falcon and LLaMA):
- Mention the open-source Transformer models worked on:
- Yrs of exp in PyTorch:
- Yrs of exp in TensorFlow, Keras:
- Yrs of exp in Python:
- How much would you rate yourself in Python on a scale of 1-10 (10 being the highest):
- Yrs of exp in cloud platforms (AWS, Azure, GCP):
- Ever interviewed by EXL Service in the past:

About the client: The client has been in existence for 25+ years with 29,000+ employees globally and deals in:
Digital Intelligence:
- Outcomes: Real digital transformation creates deeper customer experiences, faster speed to market, and growing revenues and profitability. Everything we do is outcome oriented, ensuring the business is not only transformed, it's built to stay ahead.
- Context: We're experts in more than technology and advanced analytics; we're experts in our clients' industries and businesses. This enables us to look deeper to identify and capitalize on opportunities to outperform.
- Orchestration: Our digital specialists understand both technology and the context in which it is applied. This enables us to orchestrate complex and interdependent technologies (AI, robotics, analytics, machine learning, and more) to deliver targeted solutions.
Operations: BFSI (Banking, Financial Services & Insurance), IT (product-based organizations) & consulting, BPO
Analytics: Acquired Inductis. 2000+ employees in the analytics division. Ranked 2nd best analytics organization across the world.
22 Offices in India : 6 in Noida, 3 in Gurgaon / Pune / Chennai / Kochi each and 1 in Jaipur / Bangalore / Hyderabad / Ahmedabad. Thanks and Regards, Rashmi K. Vishwakarma M : 9833965671 / 9321442718 rashmi@ultimatesearch.in Ultimate Search Pvt. Ltd. Life is too short for a wrong job
Posted 2 weeks ago
5.0 - 8.0 years
15 - 25 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Key Responsibilities:
- Deploy AWS/GCP-based infrastructure for AI API hosting, data storage, and asset retrieval.
- Optimize serverless computing (AWS Lambda, Cloud Run) for cost-efficient AI inference.
- Design scalable storage solutions for content archives and create schemas for PostgreSQL / Snowflake for structured metadata storage.
- Real-time monitoring & logging (Prometheus, Grafana, ELK Stack) for AI model performance.
- Fine-tuning AI models, improving output quality, and correcting hallucinations.
Cloud Priority: Azure (VM setup and management)

Required Skills:
- 5+ years of DevOps engineering experience in cloud architecture.
- Expertise in AWS (S3, Lambda, ECS, CloudFormation) or GCP (Cloud Run, Vertex AI).
- Experience with serverless computing and Kubernetes for scalable AI deployment.
- Proficiency in database management (PostgreSQL, Snowflake, NoSQL).
- Knowledge of API gateway management (FastAPI/Kong/Apigee).

Additional Skills:
- Should be task-oriented and adaptable to dynamic requirements.
- Previous experience with AI models.
- Familiarity with React or Next.js is a strong plus, but not mandatory.
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Chandigarh
On-site
As a Machine Learning Engineer/Data Scientist, you will be responsible for developing, coding, and deploying advanced ML models with expertise in computer vision, object detection, object tracking, and NLP. Your primary responsibilities will include designing, testing, and deploying machine learning pipelines, developing computer vision algorithms using tools like Ultralytics YOLO and OpenCV, as well as building and optimizing NLP models for text analysis and sentiment analysis. You will also be utilizing Google Cloud services for model development and deployment, writing clean and efficient code, collaborating with cross-functional teams, and staying updated with the latest advancements in machine learning and AI.

You should possess a Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field, along with at least 5 years of professional experience in data science and machine learning. Advanced proficiency in Python, TensorFlow, PyTorch, OpenCV, and Ultralytics YOLO is required, as well as hands-on experience in developing computer vision models and NLP solutions. Familiarity with cloud platforms, strong analytical skills, and excellent communication abilities are essential for this role. Preferred skills include experience with MLOps practices, CI/CD pipelines, big data technologies, distributed computing frameworks, contributions to open-source projects, and expertise in model interpretability, monitoring, and performance tuning in production environments.

If you are passionate about machine learning, enjoy solving complex business challenges, and have a knack for developing cutting-edge models, this role is perfect for you.
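Hedged aside (not part of the listing): object-detection work of the kind described is commonly evaluated with intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal sketch, with boxes given as (x1, y1, x2, y2) corner coordinates:

```python
def iou(box_a: tuple, box_b: tuple) -> float:
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; clamp to zero when the boxes do not intersect.
    inter_w = max(0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175, approx 0.143
```

Detection frameworks such as Ultralytics YOLO compute this internally when matching predictions to labels; the sketch shows only the geometry.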
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Karnataka
On-site
You will be expected to have hands-on project experience with GCP core products and services, including GCP Networking, VPCs, VPC Service Controls (VPC-SC), and Google Artifact Registry. It is essential to have extensive experience in Infrastructure as Code, including custom Terraform modules and the Terraform module registry. Moreover, you should possess hands-on experience with GCP data products such as BigQuery, Dataproc, Dataflow, and Vertex AI. Familiarity with Kubernetes and managing container infrastructure, specifically GKE, is also required for this role. The role involves automation using programming languages like Python, Groovy, etc. Additionally, having an understanding of infrastructure security, threat modeling, and guardrails would be beneficial for this position.
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Position: Team Manager, Loan Analytics
Function: Rural Business Finance, Loan Analytics
Location: Mumbai

We are seeking accomplished professionals with knowledge in modeling, analytics, and financial services. The role involves integrating analytics into end-to-end business solutions and translating data-driven insights into actionable business outcomes.

Key Responsibilities:
- Analyze large datasets to identify trends, correlations, and patterns related to loan performance, customer behavior, and market dynamics.
- Develop and maintain predictive models to assess credit risk, predict default probabilities, and optimize loan approval & collection processes.
- Collaborate with product teams to design and launch new loan products (Gold Loan, Business Loan, Rural Loan against properties, etc.).
- Conduct ad-hoc analyses for product development, pricing optimization, and market segmentation.
- Provide insights to improve underwriting criteria, loan origination processes, and customer targeting strategies.
- Stay updated with industry trends, regulatory changes, and best practices in rural business lending.
- Work with IT & data engineering teams to ensure data quality, integrity, and accessibility.
- Lead the development of model methodologies, algorithms, and diagnostic tools for testing robustness, sensitivity, and stability.

Desired Skills & Experience:
- 6-10 years of experience in R, Python, PL/SQL (preferably MS SQL Server).
- Strong background in the BFSI domain (Banks, NBFCs, Microfinance).
- Good understanding of Micro Finance Credit Bureau data.
- Proficiency in statistics and tools like R, Python, SAS.
- Familiarity with SQL and relational databases.
- Ability to work with unstructured data and derive actionable insights.
- Experience with Google Cloud Platform, BigQuery, Vertex AI is a plus.
- Strong problem-solving, business acumen, and ability to link data mining to business impact.
- Knowledge of at least one advanced modeling technique: Logistic Regression, Linear Regression, Bayesian Modeling, Classification, Clustering, Neural Networks, or Multivariate Statistics.
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
The ideal candidate for this position should have 3+ years of experience in full-stack software development, along with expertise in cloud technologies and services, preferably GCP. In addition, the candidate should possess at least 3 years of experience practicing statistical methods such as ANOVA and principal component analysis. Proficiency in Python, SQL, and BigQuery (BQ) is a must. The candidate should also have experience with tools like SonarQube, CI/CD, Tekton, Terraform, GCS, GCP Looker, Google Cloud Build, Cloud Run, Vertex AI, Airflow, TensorFlow, etc. Experience in training, building, and deploying ML and DL models is an essential requirement for this role. Familiarity with Hugging Face, Chainlit, Streamlit, and React would be an added advantage. The position is based in Chennai and Bangalore. A minimum of 3 to 5 years of relevant experience is required for this role.
Posted 2 weeks ago
4.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Operations
Management Level: Senior Associate

Job Description & Summary: At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in intelligent automation at PwC will focus on conducting process mining, designing next-generation small- and large-scale automation solutions, and implementing intelligent process automation, robotic process automation and digital workflow solutions to help clients achieve operational efficiencies and reduce costs.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Job Description & Summary: The Agentic Automation Engineer will develop and implement intelligent automation solutions using agentic frameworks like LangGraph, AutoGen, and CrewAI, integrating generative AI and Retrieval-Augmented Generation (RAG) techniques. This role requires deep expertise in Python, generative AI, and both open-source and closed-source LLMs, along with proficiency in databases and modern automation tools. The ideal candidate will collaborate with cross-functional teams to deliver scalable, high-impact automation workflows that enhance business processes.

Responsibilities:
- Design and develop agentic automation workflows using frameworks such as LangGraph, AutoGen, CrewAI, and other multi-agent systems (e.g., MCP, A2A) to automate complex business processes.
- Build and optimize Retrieval-Augmented Generation (RAG) pipelines for enhanced contextual understanding and accurate response generation in automation tasks.
- Integrate open-source LLMs (e.g., LLaMA) and closed-source LLMs (e.g., OpenAI, Gemini, Vertex AI) to power agentic systems and generative AI applications.
- Develop robust Python-based solutions using libraries like LangChain, Transformers, Pandas, and PyTorch for automation and AI model development.
- Implement and manage CI/CD pipelines, Git workflows, and software development best practices to ensure seamless deployment of automation solutions.
- Work with structured and unstructured data, applying prompt engineering and fine-tuning techniques to enhance LLM performance for specific use cases.
- Query and manage databases (e.g., SQL, NoSQL) for data extraction, transformation, and integration into automation workflows.
- Collaborate with stakeholders to translate technical solutions into business value, delivering clear presentations and documentation.
- Stay updated on advancements in agentic automation, generative AI, and LLM technologies to drive innovation and maintain a competitive edge.
Ensure scalability, security, and performance of deployed automation solutions in production environments. Experience: o 4+ years of hands-on experience in AI/ML, generative AI, or automation development. o Proven expertise in agentic frameworks like LangGraph, AutoGen, CrewAI, and multi-agent systems. o Experience building and deploying RAG-based solutions for automation or knowledge-intensive applications. o Hands-on experience with open-source LLMs (Hugging Face) and closed-source LLMs (OpenAI, Gemini, Vertex AI). Technical Skills: o Advanced proficiency in Python and relevant libraries (LangChain, Transformers, Pandas, PyTorch, Scikit-learn). o Strong SQL skills for querying and managing databases (e.g., PostgreSQL, MongoDB). o Familiarity with CI/CD tools (e.g., Jenkins, GitHub Actions), Git workflows, and containerization (e.g., Docker, Kubernetes). o Experience with Linux (Ubuntu) and cloud platforms (AWS, Azure, Google Cloud) for deploying automation solutions. o Knowledge of automation tools (e.g., UiPath, Automation Anywhere) and workflow orchestration platforms. Soft Skills: o Exceptional communication skills to articulate technical concepts to non-technical stakeholders. o Strong problem-solving and analytical skills to address complex automation challenges. o Ability to work collaboratively in a fast-paced, client-facing environment. o Proactive mindset with a passion for adopting emerging technologies. Preferred Qualifications Experience with multi-agent coordination protocols (MCP) and agent-to-agent (A2A) communication systems. Familiarity with advanced generative AI techniques, such as prompt chaining, tool-augmented LLMs, and model distillation. Exposure to enterprise-grade automation platforms or intelligent process automation (IPA) solutions. Contributions to open-source AI/automation projects or publications in relevant domains. 
Certification in AI, cloud platforms, or automation technologies (e.g., AWS Certified AI Practitioner, RPA Developer). Mandatory skill sets: Agentic, LLM, RAG, AIML, LangChain Preferred skill sets: Agentic, LLM, RAG, AIML, LangChain, Gen AI Years of experience required: 4-7 Years Education qualification: B.Tech/MBA Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Master of Business Administration, Bachelor of Technology Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills APA Style, Large Language Model (LLM) Fine-Tuning, Python (Programming Language) Optional Skills Accepting Feedback, Active Listening, Agile Methodology, Analytical Thinking, Automation Algorithms, Automation Engineering, Automation Framework Design and Development, Automation Programming, Automation Solutions, Automation Studio, Automation System Efficiency, Blue Prism, Business Analysis, Business Performance Management, Business Process Analysis, Business Process Automation (BPA), Business Transformation, Business Value Optimization, C++ Programming Language, Cognitive Automation, Communication, Conducting Discovery, Configuration Management (CM), Continuous Process Improvement + 36 more Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship No Government Clearance Required No Job Posting End Date
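The RAG responsibility described in this posting, retrieving relevant context and then grounding the model's answer in it, can be sketched without any LLM at all. The corpus, query, and token-overlap scoring below are illustrative stand-ins for a real vector store and embedding model:

```python
from collections import Counter

# Toy corpus standing in for an indexed document store (illustrative data).
DOCS = {
    "invoice_policy": "Invoices above 10000 USD require director approval before payment.",
    "travel_policy": "Employees must book travel through the approved portal.",
}

def retrieve(query: str, docs: dict[str, str], k: int = 1) -> list[str]:
    """Rank documents by naive token overlap with the query (stand-in for vector search)."""
    q_tokens = Counter(query.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda kv: sum((Counter(kv[1].lower().split()) & q_tokens).values()),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_prompt(query: str, docs: dict[str, str]) -> str:
    """Assemble a grounded prompt: retrieved context first, then the user question."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("Who must approve invoices above 10000 USD?", DOCS)
```

In a production pipeline the retriever would query an embedding index and the prompt would go to an LLM, but the retrieve-then-ground structure is the same.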
Posted 2 weeks ago
3.0 - 6.0 years
20 - 30 Lacs
bengaluru
Work from Office
Job Summary: We are seeking a Senior AI Engineer with deep expertise in Google Cloud Platform (GCP), particularly Vertex AI APIs, to design, develop, and deploy scalable AI/ML solutions. The ideal candidate is proficient in Python, experienced with FastAPI for building high-performance APIs, and has exposure to React UI for front-end integration. Key Responsibilities: Design, implement, and optimize AI/ML models leveraging GCP Vertex AI services. Develop robust RESTful APIs using FastAPI for model serving and integration. Deploy and manage AI solutions in production environments with a strong focus on scalability and performance. Work closely with data scientists and ML engineers to productionize models. Integrate front-end solutions with APIs (React UI experience is a plus). Implement best practices for model versioning, monitoring, and MLOps pipelines on GCP. Collaborate with cross-functional teams to translate business needs into technical solutions. Required Skills & Qualifications: Mandatory and strong experience with GCP (Vertex AI, BigQuery, Dataflow, Bigtable). Expert-level Python programming skills for ML/AI development. Proven experience with FastAPI for building and deploying scalable APIs. Understanding of MLflow, TensorFlow, PyTorch, or similar frameworks. Knowledge of CI/CD pipelines for ML models. Familiarity with React UI or similar front-end technologies (preferred, not mandatory). Strong problem-solving and system design skills. Preferred Qualifications: Exposure to MLOps practices on cloud. Experience integrating AI solutions with enterprise applications. Knowledge of containerization (Docker, Kubernetes) for model deployment. Education: Bachelor's or Master's degree in Computer Science, AI/ML, Data Science, or related field.
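The "FastAPI for model serving" responsibility reduces to a validate-then-score request handler. Below is a framework-agnostic sketch of that logic; in FastAPI it would sit inside an `@app.post("/predict")` endpoint with a Pydantic model performing the validation. The weights, schema, and version string are illustrative:

```python
# Framework-agnostic sketch of the request-handling logic a model-serving
# endpoint would wrap; the linear "model" and payload schema are made up.

def validate(payload: dict) -> list[float]:
    """Reject malformed requests before they reach the model."""
    features = payload.get("features")
    if not isinstance(features, list) or not all(isinstance(x, (int, float)) for x in features):
        raise ValueError("'features' must be a list of numbers")
    return [float(x) for x in features]

def predict(payload: dict, weights: list[float]) -> dict:
    """Score a validated feature vector with a stand-in linear model."""
    features = validate(payload)
    if len(features) != len(weights):
        raise ValueError("feature length mismatch")
    score = sum(w * x for w, x in zip(weights, features))
    return {"model_version": "v1", "score": score}

result = predict({"features": [1.0, 2.0]}, weights=[0.5, 0.25])
```

Keeping validation, scoring, and response shaping as plain functions makes the handler trivial to unit-test independently of the web framework.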
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
kochi, kerala
On-site
We are looking for a Senior ML Engineer (MLOps) with 3-5 years of experience to join the team in either Kochi or Chennai. In this role, you will be responsible for taming data by pulling, cleaning, and shaping structured & unstructured data. You will orchestrate pipelines using tools like Airflow, Step Functions, or ADF. Additionally, you will be involved in shipping models by building, tuning, and deploying them to production on platforms like SageMaker, Azure ML, or Vertex AI. Scaling using Spark or Databricks for heavy lifting and automating processes using Docker, Kubernetes, CI/CD, MLflow, Seldon, and Kubeflow will also be part of your responsibilities. Collaboration with engineers, architects, and business professionals to solve real problems efficiently is key in this role. To be successful in this position, you should have at least 3 years of hands-on MLOps experience and a total of 4-5 years in software development. Proficiency in one hyperscaler platform (AWS, Azure, or GCP) and familiarity with Databricks, Spark, Python, SQL, and machine learning libraries like TensorFlow, PyTorch, or Scikit-learn are essential. Debugging Kubernetes, working with Dockerfiles, and prototyping with open-source tools are skills that are highly valued. A sharp mind, willingness to take action, and a collaborative attitude will contribute to your success. Experience with SageMaker, Azure ML, or Vertex AI in a production environment is a plus. A passion for writing clean code, maintaining clear documentation, and submitting concise pull requests is also appreciated. Joining Datadivr offers you the opportunity to work in a domain-specific environment focused on F&B, where your work directly impacts operations. As part of a small team, you will have significant autonomy and ownership over the projects you work on. 
To apply for this position, please submit your CV along with a brief description of a project you have shipped to careers@datadivr.com or via direct message. The company responds to all serious applicants and welcomes referrals from those who know suitable candidates.
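Orchestrating pipelines with Airflow-style dependency graphs, as this role calls for, ultimately means running tasks in topological order. A minimal stdlib sketch; the task names are illustrative, and a real runner would invoke operators rather than just record the order:

```python
from graphlib import TopologicalSorter

# Each task name maps to the tasks it depends on, mirroring how an
# Airflow DAG wires operators together (illustrative task names).
deps = {
    "extract": set(),
    "clean": {"extract"},
    "train": {"clean"},
    "deploy": {"train"},
}

def run_pipeline(dependencies: dict[str, set[str]]) -> list[str]:
    """Execute tasks in dependency order; here 'execution' just records the order."""
    executed = []
    for task in TopologicalSorter(dependencies).static_order():
        executed.append(task)  # a real runner would call the task's callable here
    return executed

order = run_pipeline(deps)
```

Schedulers like Airflow add retries, backfills, and distributed execution on top, but the dependency resolution at the core is exactly this.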
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
You should have expertise in ML/DL, model lifecycle management, and MLOps tools such as MLflow and Kubeflow. Proficiency in Python, TensorFlow, PyTorch, Scikit-learn, and Hugging Face models is essential. You must possess strong experience in NLP, fine-tuning transformer models, and dataset preparation. Hands-on experience with cloud platforms like AWS, GCP, Azure, and scalable ML deployment tools like SageMaker and Vertex AI is required. Knowledge of containerization using Docker and Kubernetes, as well as CI/CD pipelines, is expected. Familiarity with distributed computing tools like Spark and Ray, vector databases such as FAISS and Milvus, and model optimization techniques like quantization and pruning is necessary. Additionally, you should have experience in model evaluation, hyperparameter tuning, and model monitoring for drift detection. As a part of your roles and responsibilities, you will be required to design and implement end-to-end ML pipelines from data ingestion to production. Developing, fine-tuning, and optimizing ML models to ensure high performance and scalability is a key aspect of the role. You will be expected to compare and evaluate models using key metrics like F1-score, AUC-ROC, and BLEU. Automation of model retraining, monitoring, and drift detection will be part of your responsibilities. Collaborating with engineering teams for seamless ML integration, mentoring junior team members, and enforcing best practices are also important aspects of the role. This is a full-time position with a day shift schedule from Monday to Friday. The total experience required for this role is 4 years, with at least 3 years of experience in Data Science roles. The work location is in person. Application Question: How soon can you join us?
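Model monitoring for drift detection, mentioned above, is commonly implemented with the Population Stability Index. A minimal sketch, assuming simple equal-width binning; the baseline and shifted samples are illustrative:

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 4) -> float:
    """Population Stability Index between a training sample and live data.
    Larger values indicate stronger drift (common rule of thumb: >0.2 is significant)."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a constant baseline

    def bucket_fracs(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            idx = min(max(int((v - lo) / width), 0), bins - 1)
            counts[idx] += 1
        # Smooth so empty buckets don't produce log(0).
        return [(c + 1e-6) / (len(values) + 1e-6 * bins) for c in counts]

    e, a = bucket_fracs(expected), bucket_fracs(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [float(i % 10) for i in range(100)]   # training-time distribution
shifted = [float(i % 10) + 5 for i in range(100)]  # live data drifted upward
```

In production the baseline histogram is computed once at training time, and the check runs on a schedule to trigger retraining alerts.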
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
bengaluru, karnataka, india
Remote
As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn't changed - we're here to stop breaches, and we've redefined modern security with the world's most advanced AI-native platform. We work on large scale distributed systems, processing almost 3 trillion events per day, and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We're also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We're always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you. About the Role: The charter of the ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, ML Engineering and Insights Activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. Our processing is composed of various facets including threat events collected via telemetry data, associated metadata, along with IT asset information, contextual information about threat exposure based on additional processing, etc. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyperscale Data Lakehouse, built and owned by the Data Platform team. The ingestion mechanisms include both batch and near real-time streams that form the core Threat Analytics Platform used for insights, threat hunting, incident investigations and more. 
As an engineer in this team, you will play an integral role as we build out our ML Experimentation Platform from the ground up. You will collaborate closely with Data Platform Software Engineers, Data Scientists & Threat Analysts to design, implement, and maintain scalable ML pipelines that will be used for Data Preparation, Cataloging, Feature Engineering, Model Training, and Model Serving that influence critical business decisions. You'll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modeling attack paths for IT assets. Candidates must be comfortable visiting the office once a week. What You'll Do: Help design, build, and facilitate adoption of a modern ML platform Modularize complex ML code into standardized and repeatable components Establish and facilitate adoption of repeatable patterns for model development, deployment, and monitoring Build a platform that scales to thousands of users and offers self-service capability to build ML experimentation pipelines Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines Review code changes from data scientists and champion software development best practices Leverage cloud services like Kubernetes, blob storage, and queues in our cloud-first environment What You'll Need: B.S. in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field and 10+ years related experience or M.S. with 8+ years of experience or Ph.D. with 6+ years of experience. 3+ years experience developing and deploying machine learning solutions to production. 
Familiarity with typical machine learning algorithms from an engineering perspective (how they are built and used, not necessarily the theory); familiarity with supervised/unsupervised approaches: how, why, and when labeled data is created and used. 3+ years experience with ML Platform tools like Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, Vertex AI, etc. Experience building data platform product(s) or features with (one of) Apache Spark, Flink, or comparable tools in GCP. Experience with Iceberg is highly desirable. Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.). Production experience with infrastructure-as-code tools such as Terraform, FluxCD. Expert-level experience with Python; Java/Scala exposure is recommended. Ability to write Python interfaces to provide standardized and simplified interfaces for data scientists to utilize internal CrowdStrike tools. Expert-level experience with CI/CD frameworks such as GitHub Actions. Expert-level experience with containerization frameworks. Strong analytical and problem-solving skills, capable of working in a dynamic environment. Exceptional interpersonal and communication skills. Work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes. 
Critical Skills Needed for Role: Distributed Systems Knowledge Data Platform Experience Machine Learning concepts Experience with the Following is Desirable: Go Iceberg Pinot or other time-series/OLAP-style database Jenkins Parquet Protocol Buffers/GRPC #LI-DP1 #LI-VJ1 Benefits of Working at CrowdStrike: Remote-friendly and flexible work culture Market leader in compensation and equity awards Comprehensive physical and mental wellness programs Competitive vacation and holidays for recharge Paid parental and adoption leaves Professional development opportunities for all employees regardless of level or role Employee Networks, geographic neighborhood groups, and volunteer opportunities to build connections Vibrant office culture with world class amenities Great Place to Work Certified across the globe CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. 
We base all employment decisions--including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs--on valid job requirements. If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at for further assistance.
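The "modularize complex ML code into standardized and repeatable components" responsibility above usually means defining a shared step contract so pipeline components compose and swap freely. A minimal sketch; the step names and transformations are illustrative:

```python
from typing import Protocol

class Step(Protocol):
    """Contract every pipeline component implements, so steps compose interchangeably."""
    def run(self, data: list[float]) -> list[float]: ...

class Normalize:
    """Scale values into [0, 1] by the batch maximum (illustrative transform)."""
    def run(self, data: list[float]) -> list[float]:
        hi = max(data) or 1.0
        return [x / hi for x in data]

class Clip:
    """Cap values at a fixed limit (illustrative transform)."""
    def __init__(self, limit: float) -> None:
        self.limit = limit
    def run(self, data: list[float]) -> list[float]:
        return [min(x, self.limit) for x in data]

def run_pipeline(steps: list[Step], data: list[float]) -> list[float]:
    for step in steps:  # each component is swappable behind the shared contract
        data = step.run(data)
    return data

out = run_pipeline([Normalize(), Clip(0.5)], [1.0, 2.0, 4.0])
```

Once every step honors the same contract, adding monitoring, caching, or a new feature transform means writing one class, not rewriting the pipeline.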
Posted 2 weeks ago
8.0 - 10.0 years
0 Lacs
bengaluru, karnataka, india
Remote
As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn't changed - we're here to stop breaches, and we've redefined modern security with the world's most advanced AI-native platform. We work on large scale distributed systems, processing almost 3 trillion events per day, and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We're also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We're always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you. About the Role: The charter of the ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, ML Engineering and Insights Activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. Our processing is composed of various facets including threat events collected via telemetry data, associated metadata, along with IT asset information, contextual information about threat exposure based on additional processing, etc. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyperscale Data Lakehouse, built and owned by the Data Platform team. The ingestion mechanisms include both batch and near real-time streams that form the core Threat Analytics Platform used for insights, threat hunting, incident investigations and more. 
As an engineer in this team, you will play an integral role as we build out our ML Experimentation Platform from the ground up. You will collaborate closely with Data Platform Software Engineers, Data Scientists & Threat Analysts to design, implement, and maintain scalable ML pipelines that will be used for Data Preparation, Cataloging, Feature Engineering, Model Training, and Model Serving that influence critical business decisions. You'll be a key contributor in a production-focused culture that bridges the gap between model development and operational success. Future plans include generative AI investments for use cases such as modeling attack paths for IT assets. Candidates must be comfortable visiting the office once a week. What You'll Do: Help design, build, and facilitate adoption of a modern ML platform Modularize complex ML code into standardized and repeatable components Establish and facilitate adoption of repeatable patterns for model development, deployment, and monitoring Build a platform that scales to thousands of users and offers self-service capability to build ML experimentation pipelines Leverage workflow orchestration tools to deploy efficient and scalable execution of complex data and ML pipelines Review code changes from data scientists and champion software development best practices Leverage cloud services like Kubernetes, blob storage, and queues in our cloud-first environment What You'll Need: B.S. in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field and 10+ years related experience or M.S. with 8+ years of experience or Ph.D. with 6+ years of experience. 3+ years experience developing and deploying machine learning solutions to production. 
Familiarity with typical machine learning algorithms from an engineering perspective (how they are built and used, not necessarily the theory); familiarity with supervised/unsupervised approaches: how, why, and when labeled data is created and used. 3+ years experience with ML Platform tools like Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, Vertex AI, etc. Experience building data platform product(s) or features with (one of) Apache Spark, Flink, or comparable tools in GCP. Experience with Iceberg is highly desirable. Proficiency in distributed computing and orchestration technologies (Kubernetes, Airflow, etc.). Production experience with infrastructure-as-code tools such as Terraform, FluxCD. Expert-level experience with Python; Java/Scala exposure is recommended. Ability to write Python interfaces to provide standardized and simplified interfaces for data scientists to utilize internal CrowdStrike tools. Expert-level experience with CI/CD frameworks such as GitHub Actions. Expert-level experience with containerization frameworks. Strong analytical and problem-solving skills, capable of working in a dynamic environment. Exceptional interpersonal and communication skills. Work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes. 
Critical Skills Needed for Role: Distributed Systems Knowledge Data Platform Experience Machine Learning concepts Experience with the Following is Desirable: Go Iceberg Pinot or other time-series/OLAP-style database Jenkins Parquet Protocol Buffers/GRPC #LI-DP1 #LI-VJ1 Benefits of Working at CrowdStrike: Remote-friendly and flexible work culture Market leader in compensation and equity awards Comprehensive physical and mental wellness programs Competitive vacation and holidays for recharge Paid parental and adoption leaves Professional development opportunities for all employees regardless of level or role Employee Networks, geographic neighborhood groups, and volunteer opportunities to build connections Vibrant office culture with world class amenities Great Place to Work Certified across the globe CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. 
We base all employment decisions--including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs--on valid job requirements. If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at for further assistance.
Posted 2 weeks ago
11.0 - 13.0 years
55 - 90 Lacs
hyderabad
Work from Office
Job Position - Solution Architect (AI & Product Engineering) Experience Required - 7-10 years Job Type - Full time Location - Hyderabad, India Overview We are seeking a visionary leader with extensive experience in AI-based product innovation and development to drive the integration of advanced AI, Generative AI, and Multimodal AI systems into innovative solutions. This role requires a deep understanding of AI ecosystems to build strategic partnerships and onboard top talent while also possessing expertise in patent filing to protect intellectual property. Role and Responsibilities AI/ML Solution Architecture & Design Design comprehensive AI solution architectures that are scalable, maintainable, and aligned with business objectives Create detailed architectural diagrams, technical specifications, and solution blueprints Evaluate and recommend AI frameworks, models, and tools (e.g., TensorFlow, PyTorch, LangChain, AutoML). Define data strategies for AI model training, including data pipelines, storage, governance, and feature engineering. Optimize data processing workflows for real-time and batch AI applications. Cloud & Infrastructure Architect and deploy AI solutions on GCP, AWS, Azure, or OCI, optimizing for cost, performance, and scalability. Implement containerized AI workloads (Docker, Kubernetes) and serverless AI solutions. Design high-availability, fault-tolerant, and secure AI infrastructure. Data Architecture & Management Design and implement data architectures that support effective AI model deployment Define data pipelines, storage strategies, and data governance frameworks Establish data lineage and metadata management practices while integrating big data technologies (Spark, Databricks, Snowflake) for AI-driven analytics. MLOps & DevOps Build end-to-end MLOps pipelines (MLflow, Kubeflow, SageMaker, Vertex AI) for model lifecycle management. Implement CI/CD for AI models, including automated testing, versioning, and rollback. 
Design monitoring, logging, and alerting for AI models in production. Technical Leadership Lead the development of end-to-end AI solutions from conception to deployment Collaborate with cross-functional teams, including data scientists, engineers, and product managers Provide technical guidance and mentorship to development teams Conduct architectural reviews and ensure adherence to design principles Performance & Optimization Monitor and optimize AI system performance, latency, and resource utilization Implement cost optimization strategies for AI infrastructure and operations Conduct capacity planning and scaling strategies for AI workloads Troubleshoot and resolve complex AI system issues Innovation & Research Stay current with emerging AI technologies, frameworks, and industry trends Evaluate and pilot new AI tools and technologies for potential adoption Drive innovation initiatives and proof-of-concept development Participate in AI community events and knowledge sharing Stakeholder Engagement Present technical solutions and architectural designs to stakeholders and leadership Collaborate effectively with business teams, sales, and business development teams to support pre-sales activities, including presentations and demonstrations, to understand requirements and constraints. Facilitate technical discussions and decision-making processes Experience Requirement 7+ years of total professional experience in technology and solution architecture 2+ years of hands-on experience in AI/ML solution development and implementation 5+ years of experience working with cloud platforms (GCP, AWS, Azure, OCI) Technical Skills AI/ML Frameworks: TensorFlow, PyTorch, Hugging Face, LangChain, AutoML. Cloud AI Services: SageMaker, Vertex AI, Azure ML, Bedrock, LLM APIs. MLOps Tools: MLflow, Kubeflow, TFX, Docker, Kubernetes, CI/CD pipelines. Data Engineering: Spark, Databricks, BigQuery, Snowflake, Airflow. Programming: Python (advanced), Java/Scala, SQL, PySpark. 
Agentic AI: RPA tools (UiPath, Automation Anywhere), process mining. Hands-on experience with CI/CD pipelines and DevOps practices Preferred Qualifications Master's degree in Computer Science, Engineering, or related technical field Certifications in cloud platforms (GCP Professional Cloud Architect, AWS Solutions Architect, Azure Solutions Architect) Experience with real-time AI applications and streaming data processing Knowledge of AI ethics, bias detection, and responsible AI practices Experience with microservices architecture and API design Strong programming skills in Python, TensorFlow, PyTorch, scikit-learn, and other relevant AI development tools and frameworks. Soft Skills Excellent communication and presentation skills Strong collaborative mindset with ability to work across diverse teams Leadership capabilities with experience mentoring technical team members Problem-solving skills with the ability to think strategically and analytically Adaptability to work in a fast-paced, evolving technology environment
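Model versioning with rollback, listed under the MLOps responsibilities above, can be reduced to a registry that retains prior versions. A toy sketch; a real registry such as MLflow tracks artifacts, stages, and metadata rather than just version names:

```python
# Illustrative sketch of model versioning with rollback; class and
# version names are made up for the example.

class ModelRegistry:
    def __init__(self) -> None:
        self._versions: list[str] = []

    def register(self, version: str) -> None:
        """Promote a new version; earlier ones are retained for rollback."""
        self._versions.append(version)

    @property
    def current(self) -> str:
        if not self._versions:
            raise LookupError("no model registered")
        return self._versions[-1]

    def rollback(self) -> str:
        """Retire the latest version and fall back to its predecessor."""
        if len(self._versions) < 2:
            raise LookupError("no earlier version to roll back to")
        self._versions.pop()
        return self.current

registry = ModelRegistry()
registry.register("v1")
registry.register("v2")
```

A CI/CD pipeline would call `register` only after automated tests pass, and `rollback` when production monitoring flags a regression.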
Posted 2 weeks ago
3.0 - 8.0 years
8 - 12 Lacs
bengaluru, karnataka, india
On-site
AGCO is looking to hire candidates for the position of Systems Architect, AI Systems. We are seeking an experienced and innovative Systems Architect, AI Systems, to lead the design, creation, and evolution of system architectures for AI solutions within our organization. The Systems Architect, AI Systems, is a critical role responsible for designing, implementing, and evolving scalable AI architectures that support and drive the company's AI and data strategies. The ideal candidate will have extensive experience architecting, deploying, and managing AI solutions in cloud-based environments, with a focus on high performance, security, and interoperability. This role will work closely with data science, engineering, and delivery teams to ensure AI systems are seamlessly integrated across the organization's operations. Your Impact AI Systems Architecture: Design and implement scalable architectures that support AI/ML models and data systems, optimizing for performance, cost-efficiency, and security. Cloud Platform Expertise: Leverage cloud environments (AWS, GCP, Azure) to architect solutions that support high-volume data processing, storage, and AI model deployment. Data Integration & Pipeline Design: Develop and oversee data pipelines that ensure smooth data flow between various sources and AI systems, incorporating best practices for ETL, data lakes, and real-time streaming. Model Deployment and Monitoring: Collaborate with ML engineers to establish CI/CD pipelines for model deployment, monitoring, and lifecycle management to ensure models remain reliable and efficient in production. Tool Evaluation and Adoption: Evaluate and recommend AI tools, libraries, and platforms that align with business goals, ensuring that the chosen technology stack can support complex AI and data requirements. Security and Compliance: Ensure all AI architectures meet security, regulatory, and compliance standards relevant to data handling, model deployment, and user access, particularly with 
regards to GDPR, CPRA, and industry-specific regulations. Documentation and Best Practices: Establish and maintain comprehensive documentation of architecture designs, policies, and processes, promoting best practices across the AI & Data Operations organization. Functional Knowledge Advanced AI and Data Systems: Deep understanding of AI architectures, including distributed computing frameworks (e.g., Apache Spark, Hadoop), data lakes, data warehousing, and MLOps. Cloud Infrastructure and Services: Proficient in AWS (SageMaker, Redshift, Lambda), Google Cloud Platform (BigQuery, Vertex AI), and/or Azure AI services, with a strong grasp of cloud-based data management and ML lifecycle management. Containerization and Orchestration: Skilled in Docker, Kubernetes, and other container management tools to ensure scalability and efficiency in model deployment and orchestration. Data Security and Governance: Familiar with data governance frameworks and security protocols, including encryption, authentication, and compliance requirements (GDPR, CPRA, etc.), especially as they relate to cloud-based AI solutions. Business Expertise Experience in industries such as manufacturing, agriculture, or supply chain, particularly in AI and data use cases. Familiarity with regulatory requirements related to data governance and security. Experience with emerging technologies like edge computing, IoT, and AI/ML automation tools. Your Experience And Qualifications Bachelor's degree in computer science, data science, or a related field; a Master's degree or relevant certifications is preferred. Experience: 8+ years of experience in systems architecture, with at least 5 years focused on AI and data systems in a cloud environment. Programming Languages: Proficient in Python, Java, and SQL; familiarity with R, Scala, or other languages is a plus. MLOps Tools: Hands-on experience with MLOps tools and frameworks (e.g., MLflow, Kubeflow, Airflow) for model tracking, versioning, and deployment. Data Engineering: 
Strong experience with ETL processes, real-time data streaming (Kafka, Spark Streaming), and data pipeline design Analytical Skills: Strong problem-solving abilities with the capacity to analyze system requirements and design scalable solutions that meet both technical and business needs Communication and Leadership: Excellent communication skills to explain complex architecture concepts to non-technical stakeholders, as well as the ability to lead technical discussions and mentor junior team members Certifications: Relevant certifications are a plus (e g, AWS Certified Solutions Architect, Google Professional Data Engineer, Certified Kubernetes Administrator) Your Benefits
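The model tracking and versioning that the MLOps tools above (MLflow, Kubeflow, Airflow) provide can be sketched in a minimal, library-free form. `RunTracker`, its JSON file layout, and the parameter/metric names are hypothetical stand-ins for illustration, not MLflow's actual API:

```python
import json
import time
import uuid
from pathlib import Path

class RunTracker:
    """Minimal MLflow-style sketch: each training run is versioned by a
    unique run id and persisted with its params and metrics."""

    def __init__(self, root="runs"):
        self.root = Path(root)
        self.root.mkdir(exist_ok=True)

    def log_run(self, params, metrics):
        # Record one run: hyperparameters in, evaluation metrics out.
        run_id = uuid.uuid4().hex[:8]
        record = {"run_id": run_id, "timestamp": time.time(),
                  "params": params, "metrics": metrics}
        (self.root / f"{run_id}.json").write_text(json.dumps(record))
        return run_id

    def best_run(self, metric):
        # Scan all persisted runs and return the one maximizing `metric`.
        runs = [json.loads(p.read_text()) for p in self.root.glob("*.json")]
        return max(runs, key=lambda r: r["metrics"][metric])

tracker = RunTracker(root="runs_demo")
tracker.log_run({"lr": 0.1}, {"auc": 0.81})
tracker.log_run({"lr": 0.01}, {"auc": 0.87})
print(tracker.best_run("auc")["params"])  # → {'lr': 0.01}
```

A real tracker adds artifact storage, lineage, and a model registry on top of this record-and-query core, but the versioning idea is the same.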
Posted 2 weeks ago
8.0 - 11.0 years
0 Lacs
pune, maharashtra, india
Remote
ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, we transform ideas into impact by bringing together data, science, technology and human ingenuity to deliver better outcomes for all. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers, worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage and passion to drive life-changing impact to ZS. Key Responsibilities: Lead the end-to-end technical architecture for consumer-facing digital platforms, ensuring robustness, scalability, and reusability (e.g., pricing engines, agent assist call center) Design and implement integration architectures connecting native apps, modern SaaS platforms, and legacy systems (e.g., PMS, ERP, booking engines). Define non-functional architecture requirements such as performance tuning, failover, caching, and throughput optimization to handle peak traffic loads. Establish architectural patterns (e.g., microservices, event-driven systems, API gateways) and code-level guardrails across teams. Architect foundational capabilities to support AI/Agentic applications - including context management, modular services, and workflow orchestration layers. Evaluate, integrate, and advise on enabling GenAI-based features (e.g., content generation, summarization, conversational interfaces). Review solution implementations for code quality, technical debt, and adherence to architectural standards. Lead cross-functional teams to deliver technical prototypes and scalable MVPs that demonstrate business value quickly. 
Provide technical mentorship to engineering teams across development, DevOps, and testing. Contribute to AI/GenAI strategy by identifying integration points for LLMs or NLP tools (e.g., content generation, knowledge retrieval, smart workflows). Preferred Experience: 8-11 years of experience in backend architecture, enterprise integration, or platform engineering, with a strong track record of designing scalable and high-performance systems. Deep hands-on expertise in cloud-native development (preferably AWS), distributed systems, and containerization technologies such as Docker and Kubernetes. Strong experience in microservices and event-driven architecture, with a focus on performance tuning, caching strategies, and system reliability under peak loads. Proven ability to integrate modern digital platforms (e.g., headless CMS, mobile SDKs) with legacy systems such as PMS, POS, and ERP, ensuring seamless interoperability. Solid grounding in performance engineering, including load balancing, latency reduction, and resilient system design. Exposure to GenAI/LLM integration using tools like OpenAI, Vertex AI, LangChain, or Retrieval-Augmented Generation (RAG) frameworks is a strong plus. Familiarity with personalization engines and/or platforms (e.g., Oracle, Adobe, Salesforce), and omnichannel architecture relevant to Travel, Hospitality, or CPG domains. Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development. Our robust skills development programs, multiple career progression options and internal mobility paths and collaborative culture empowers you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. 
The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections. Travel: Travel is a requirement at ZS for client-facing ZSers; the business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures. Considering applying? At ZS, we honor the visible and invisible elements of our identities, personal experiences, and belief systems - the ones that comprise us as individuals, shape who we are, and make us unique. We believe your personal interests, identities, and desire to learn are integral to your success here. We are committed to building a team that reflects a broad variety of backgrounds, perspectives, and experiences. Learn more about our inclusion and belonging efforts and the networks ZS supports to assist our ZSers in cultivating community spaces and obtaining the resources they need to thrive. If you're eager to grow, contribute, and bring your unique self to our work, we encourage you to apply. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law. To complete your application: Candidates must possess or be able to obtain work authorization for their intended country of employment. An on-line application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE. Find Out More At:
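The caching strategies this role calls for under peak traffic can be sketched with Python's standard `functools.lru_cache`. `price_quote` is a hypothetical stand-in for an expensive pricing-engine call (the posting mentions pricing engines), not an actual ZS component:

```python
from functools import lru_cache
import time

@lru_cache(maxsize=1024)
def price_quote(route: str, date: str) -> float:
    """Hypothetical expensive pricing-engine lookup; repeat requests for
    the same (route, date) are served from the cache without recomputing."""
    time.sleep(0.01)  # simulate backend latency
    return 100.0 + len(route) * 7.0  # deterministic placeholder price

first = price_quote("BOM-DEL", "2024-06-01")   # computed (cache miss)
again = price_quote("BOM-DEL", "2024-06-01")   # served from cache (hit)
print(price_quote.cache_info())
```

In production the same idea is usually externalized (e.g., a shared cache tier) so that all service instances benefit, and paired with an expiry or invalidation policy so stale prices are not served.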
Posted 2 weeks ago
5.0 - 7.0 years
0 Lacs
india
On-site
The Company PayPal has been revolutionizing commerce globally for more than 25 years. Creating innovative experiences that make moving money, selling, and shopping simple, personalized, and secure, PayPal empowers consumers and businesses in approximately 200 markets to join and thrive in the global economy. We operate a global, two-sided network at scale that connects hundreds of millions of merchants and consumers. We help merchants and consumers connect, transact, and complete payments, whether they are online or in person. PayPal is more than a connection to third-party payment networks. We provide proprietary payment solutions accepted by merchants that enable the completion of payments on our platform on behalf of our customers. We offer our customers the flexibility to use their accounts to purchase and receive payments for goods and services, as well as the ability to transfer and withdraw funds. We enable consumers to exchange funds more safely with merchants using a variety of funding sources, which may include a bank account, a PayPal or Venmo account balance, PayPal and Venmo branded credit products, a credit card, a debit card, certain cryptocurrencies, or other stored value products such as gift cards, and eligible credit card rewards. Our PayPal, Venmo, and Xoom products also make it safer and simpler for friends and family to transfer funds to each other. We offer merchants an end-to-end payments solution that provides authorization and settlement capabilities, as well as instant access to funds and payouts. We also help merchants connect with their customers, process exchanges and returns, and manage risk. We enable consumers to engage in cross-border shopping and merchants to extend their global reach while reducing the complexity and friction involved in enabling cross-border trade. Our beliefs are the foundation for how we conduct business every day. 
We live each day guided by our core values of Inclusion, Innovation, Collaboration, and Wellness. Together, our values ensure that we work together as one global team with our customers at the center of everything we do - and they push us to ensure we take care of ourselves, each other, and our communities. Job Summary: We are seeking a highly skilled Senior Data Scientist with a deep understanding of advanced machine learning techniques and frameworks. The ideal candidate will have extensive experience in Python, graph neural networks, and scalable machine learning pipelines. If you are passionate about developing innovative solutions and driving impactful projects, we want to hear from you! In this role, you will lead the development and implementation of advanced data science models and algorithms. You will work with stakeholders to understand requirements and deliver solutions. Your role involves driving best practices in data science, ensuring data quality, and mentoring junior scientists. Job Description: Essential Responsibilities: Lead the development and implementation of advanced data science models. Collaborate with stakeholders to understand requirements. Drive best practices in data science. Ensure data quality and integrity in all processes. Mentor and guide junior data scientists. Stay updated with the latest trends in data science. Minimum Qualifications: Minimum of 5 years of relevant work experience and a Bachelor's degree or equivalent experience. Preferred Qualifications: 5+ years experience in developing and optimizing machine learning models using Python with TensorFlow, Keras, and/or PyTorch. Expertise in Graph Neural Networks (GNNs) for node and link prediction, graph embedding, and graph-based classification. Proven experience in customer segmentation and/or recommendation systems tailored to client needs. Experience working with transformer models (e.g., BERT, GPT) in real applications, particularly within GenAI workflows.
5+ years experience in building and maintaining scalable ML pipelines using tools such as Kubeflow, MLFlow, Vertex AI, and SageMaker. 5+ years experience in processing and analyzing very large datasets using Spark or similar frameworks. Experience working with vector databases (e.g., Pinecone) for embedding-based applications including search and similarity tasks. Experience utilizing GCP services to deploy models at scale, particularly BigQuery and Vertex AI. Experience with GraphRAG and Knowledge Augmented Generation (KAG). Knowledge of Deep Neural Networks, Multi-task Learning (MTL) and AdTech. Subsidiary: PayPal Travel Percent: 0 - PayPal is committed to fair and equitable compensation practices. Actual Compensation is based on various factors including but not limited to work location, and relevant skills and experience. The total compensation for this practice may include an annual performance bonus (or other incentive compensation, as applicable), equity, and medical, dental, vision, and other benefits. For more information, visit . The US national annual pay range for this role is $123,500 to $212,850 PayPal does not charge candidates any fees for courses, applications, resume reviews, interviews, background checks, or onboarding. Any such request is a red flag and likely part of a scam. To learn more about how to identify and avoid recruitment fraud please visit . For the majority of employees, PayPal's balanced hybrid work model offers 3 days in the office for effective in-person collaboration and 2 days at your choice of either the PayPal office or your home workspace, ensuring that you equally have the benefits and conveniences of both locations. Our Benefits: At PayPal, we're committed to building an equitable and inclusive global economy. And we can't do this without our most important asset-you. That's why we offer benefits to help you thrive in every stage of life.
We champion your financial, physical, and mental health by offering valuable benefits and resources to help you care for the whole you. We have great benefits including a flexible work environment, employee shares options, health and life insurance and more. To learn more about our benefits please visit . Who We Are: to learn more about our culture and community. Commitment to Diversity and Inclusion PayPal provides equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, pregnancy, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state, or local law. In addition, PayPal will provide reasonable accommodations for qualified individuals with disabilities. If you are unable to submit an application because of incompatible assistive technology or a disability, please contact us at . Belonging at PayPal: Our employees are central to advancing our mission, and we strive to create an environment where everyone can do their best work with a sense of purpose and belonging. Belonging at PayPal means creating a workplace with a sense of acceptance and security where all employees feel included and valued. We are proud to have a diverse workforce reflective of the merchants, consumers, and communities that we serve, and we continue to take tangible actions to cultivate inclusivity and belonging at PayPal. Any general requests for consideration of your skills, please . We know the confidence gap and imposter syndrome can get in the way of meeting spectacular candidates. Please don't hesitate to apply. Notice to Applicants and Employees who reside within New York City. Click to view the notice.
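The embedding-based search and similarity work in the qualifications above (Pinecone-style vector databases) reduces to nearest-neighbor lookup over embedding vectors. Here is a minimal in-memory sketch with made-up document ids and vectors; this illustrates the concept only and is not Pinecone's API:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy in-memory "vector database": document id -> embedding.
index = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.7, 0.7, 0.0],
    "doc_c": [0.0, 0.0, 1.0],
}

def top_k(query, k=2):
    """Return the k document ids most similar to the query embedding."""
    return sorted(index, key=lambda d: cosine(query, index[d]), reverse=True)[:k]

print(top_k([1.0, 0.1, 0.0]))  # → ['doc_a', 'doc_b']
```

Production vector databases replace this linear scan with approximate nearest-neighbor indexes so that similarity search stays fast at millions of embeddings.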
Posted 2 weeks ago
3.0 - 6.0 years
5 - 8 Lacs
hyderabad
Work from Office
Programming: Python, R, Julia, SQL
Frameworks: TensorFlow, PyTorch, Scikit-Learn, Keras
Tools: Jupyter Notebooks, Vertex AI, AutoML, MLflow
Data Processing: BigQuery, Pandas, NumPy, Dask
Visualization: Matplotlib, Seaborn, Tableau, Power BI
Cloud: GCP AI Platform, Vertex AI, Cloud ML Engine
MLOps: Kubeflow, Vertex Pipelines, Docker, Kubernetes
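As a small illustration of the Pandas data-processing skills named above, the following groups toy model-evaluation scores and picks the best performer. The dataset and numbers are entirely hypothetical:

```python
import pandas as pd

# Toy cross-validation results (hypothetical models and scores).
df = pd.DataFrame({
    "model": ["gbt", "nn", "gbt", "nn"],
    "fold":  [1, 1, 2, 2],
    "auc":   [0.81, 0.87, 0.79, 0.84],
})

# Mean AUC per model across folds, then the winning model.
mean_auc = df.groupby("model")["auc"].mean()
print(mean_auc.idxmax())  # → nn
```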
Posted 2 weeks ago
3.0 - 6.0 years
5 - 8 Lacs
mumbai suburban
Work from Office
Programming: Python, R, Julia, SQL
Frameworks: TensorFlow, PyTorch, Scikit-Learn, Keras
Tools: Jupyter Notebooks, Vertex AI, AutoML, MLflow
Data Processing: BigQuery, Pandas, NumPy, Dask
Visualization: Matplotlib, Seaborn, Tableau, Power BI
Cloud: GCP AI Platform, Vertex AI, Cloud ML Engine
MLOps: Kubeflow, Vertex Pipelines, Docker, Kubernetes
Posted 2 weeks ago
3.0 - 6.0 years
5 - 8 Lacs
mumbai
Work from Office
Programming: Python, R, Julia, SQL
Frameworks: TensorFlow, PyTorch, Scikit-Learn, Keras
Tools: Jupyter Notebooks, Vertex AI, AutoML, MLflow
Data Processing: BigQuery, Pandas, NumPy, Dask
Visualization: Matplotlib, Seaborn, Tableau, Power BI
Cloud: GCP AI Platform, Vertex AI, Cloud ML Engine
MLOps: Kubeflow, Vertex Pipelines, Docker, Kubernetes
Posted 2 weeks ago
6.0 - 9.0 years
8 - 11 Lacs
hyderabad
Work from Office
Programming: Python, R, Julia, SQL
Frameworks: TensorFlow, PyTorch, Scikit-Learn, Keras
Tools: Jupyter Notebooks, Vertex AI, AutoML, MLflow
Data Processing: BigQuery, Pandas, NumPy, Dask
Visualization: Matplotlib, Seaborn, Tableau, Power BI
Cloud: GCP AI Platform, Vertex AI, Cloud ML Engine
MLOps: Kubeflow, Vertex Pipelines, Docker, Kubernetes
Posted 2 weeks ago