2.0 - 7.0 years
11 - 21 Lacs
Pune
Hybrid
Rapid7, a global cybersecurity company, is expanding its AI Centre of Excellence in India. We seek a Senior AI Engineer (MLOps) to build and manage MLOps infrastructure, deploy ML models, and support AI-powered threat detection systems.

Work Location: Amar Tech Park, Balewadi - Hinjawadi Rd, Patil Nagar, Balewadi, Pune, Maharashtra 411045

Key Responsibilities:
- Build and deploy ML/LLM models in AWS using SageMaker and Terraform.
- Develop APIs/interfaces using Python, TypeScript, and FastAPI/Flask (see the sketch below).
- Manage data pipelines, model lifecycle, observability, and guardrails.
- Collaborate with cross-functional teams; follow agile and DevOps best practices.

Requirements:
- 5+ years in software engineering, 3-5 years in ML deployment (AWS).
- Proficient in Python, TypeScript, Docker, Kubernetes, CI/CD.
- Experience with LLMs, GPU resources, and ML monitoring.

Nice to Have: NLP, model risk management, scalable ML systems.

Rapid7 values innovation, diversity, and ethical AI; the role is ideal for engineers seeking impact in cybersecurity.
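For illustration, here is a minimal sketch of the kind of serving layer this role describes: a FastAPI endpoint fronting a deployed SageMaker model via boto3. The endpoint name and payload schema are hypothetical, not Rapid7's actual API.

```python
# Minimal sketch: FastAPI service forwarding requests to a SageMaker
# endpoint. Endpoint name and payload schema are hypothetical.
import json

import boto3
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
runtime = boto3.client("sagemaker-runtime")

class ScoreRequest(BaseModel):
    features: list[float]

@app.post("/score")
def score(req: ScoreRequest):
    # Forward the feature vector to the (hypothetical) deployed model.
    resp = runtime.invoke_endpoint(
        EndpointName="threat-detection-model",  # hypothetical name
        ContentType="application/json",
        Body=json.dumps({"instances": [req.features]}),
    )
    return json.loads(resp["Body"].read())
```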
Posted 2 months ago
15 - 24 years
20 - 35 Lacs
Kochi, Chennai, Thiruvananthapuram
Work from Office
Roles and Responsibilities:

Architecture & Infrastructure Design
- Architect scalable, resilient, and secure AI/ML infrastructure on AWS using services like EC2, SageMaker, Bedrock, VPC, RDS, DynamoDB, and CloudWatch.
- Develop Infrastructure as Code (IaC) using Terraform, and automate deployments with CI/CD pipelines.
- Optimize cost and performance of cloud resources used for AI workloads.

AI Project Leadership
- Translate business objectives into actionable AI strategies and solutions.
- Oversee the entire AI lifecycle, from data ingestion, model training, and evaluation to deployment and monitoring.
- Drive roadmap planning, delivery timelines, and project success metrics.

Model Development & Deployment
- Lead selection and development of AI/ML models, particularly for NLP, GenAI, and AIOps use cases.
- Implement frameworks for bias detection, explainability, and responsible AI (see the sketch after this listing).
- Enhance model performance through tuning and efficient resource utilization.

Security & Compliance
- Ensure data privacy, security best practices, and compliance with IAM policies, encryption standards, and regulatory frameworks.
- Perform regular audits and vulnerability assessments to ensure system integrity.

Team Leadership & Collaboration
- Lead and mentor a team of cloud engineers, ML practitioners, software developers, and data analysts.
- Promote cross-functional collaboration with business and technical stakeholders.
- Conduct technical reviews and ensure delivery of production-grade solutions.

Monitoring & Maintenance
- Establish robust model monitoring, alerting, and feedback loops to detect drift and maintain model reliability.
- Ensure ongoing optimization of infrastructure and ML pipelines.

Must-Have Skills:
- 10+ years of experience in IT with 4+ years in AI/ML leadership roles.
- Strong hands-on experience with AWS services: EC2, SageMaker, Bedrock, RDS, VPC, DynamoDB, CloudWatch.
- Expertise in Python for ML development and automation.
- Solid understanding of Terraform, Docker, Git, and CI/CD pipelines.
- Proven track record of delivering AI/ML projects into production environments.
- Deep understanding of MLOps, model versioning, monitoring, and retraining pipelines.
- Experience implementing Responsible AI practices, including fairness, explainability, and bias mitigation.
- Knowledge of cloud security best practices and IAM role configuration.
- Excellent leadership, communication, and stakeholder management skills.

Good-to-Have Skills:
- AWS certifications such as AWS Certified Machine Learning - Specialty or AWS Certified Solutions Architect.
- Familiarity with data privacy laws and frameworks (GDPR, HIPAA).
- Experience with AI governance and ethical AI frameworks.
- Expertise in cost optimization and performance tuning for AI on the cloud.
- Exposure to LangChain, LLMs, Kubeflow, or GCP-based AI services.
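As an illustrative sketch of one building block of the bias-detection work mentioned above: a demographic parity check in plain pandas. The column names and data are hypothetical; production frameworks would cover many more fairness metrics.

```python
# Illustrative fairness check: demographic parity difference.
# Column names and data are hypothetical placeholders.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Gap between the highest and lowest positive-prediction rates
    across groups; 0.0 means perfect parity."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

preds = pd.DataFrame({
    "segment": ["a", "a", "b", "b", "b"],
    "approved": [1, 0, 1, 1, 1],
})
print(demographic_parity_gap(preds, "segment", "approved"))
```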
Posted 2 months ago
11 - 14 years
35 - 50 Lacs
Chennai
Work from Office
Role: MLOps Engineer
Location: PAN India

Keywords/Skillset:
- AWS SageMaker, Azure ML Studio, GCP Vertex AI
- PySpark, Azure Databricks
- MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline
- Kubernetes, AKS, Terraform, FastAPI

Responsibilities:
- Model deployment, model monitoring, model retraining
- Deployment pipeline, inference pipeline, monitoring pipeline, retraining pipeline
- Drift detection: data drift and model drift (see the sketch below)
- Experiment tracking
- MLOps architecture
- REST API publishing

Job Responsibilities:
- Research and implement MLOps tools, frameworks, and platforms for our Data Science projects.
- Work on a backlog of activities to raise MLOps maturity in the organization.
- Proactively introduce a modern, agile, and automated approach to Data Science.
- Conduct internal training and presentations about MLOps tools' benefits and usage.

Required experience and qualifications:
- Wide experience with Kubernetes.
- Experience in operationalization of Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g. Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube).
- Good understanding of ML and AI concepts; hands-on experience in ML model development.
- Proficiency in Python for both ML and automation tasks.
- Good knowledge of Bash and the Unix command-line toolkit.
- Experience in CI/CD/CT pipeline implementation.
- Experience with cloud platforms, preferably AWS, would be an advantage.
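A minimal sketch of one common data-drift building block: a two-sample Kolmogorov-Smirnov test comparing a feature's training distribution to live traffic. The data and threshold here are illustrative only.

```python
# Basic data-drift check with a two-sample KS test.
# Synthetic data; the p-value threshold is illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(0.0, 1.0, 5000)  # reference distribution
live_feature = rng.normal(0.3, 1.0, 5000)      # shifted production data

stat, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:
    print(f"drift suspected (KS={stat:.3f}, p={p_value:.2e}) -> flag for retraining")
```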
Posted 2 months ago
4 - 6 years
18 - 20 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
POSITION: MLOps Engineer
LOCATION: Bangalore (Hybrid)
Work timings: 12 pm - 9 pm
Budget: Maximum 20 LPA

ROLE OBJECTIVE
The MLOps Engineer position will support various segments by enhancing and optimizing the deployment and operationalization of machine learning models. The primary objective is to collaborate with data scientists, data engineers, and business stakeholders to ensure efficient, scalable, and reliable ML model deployment and monitoring. The role involves integrating ML models into production systems, automating workflows, and maintaining robust CI/CD pipelines.

RESPONSIBILITIES
- Model Deployment and Operationalization: Implement, manage, and optimize the deployment of machine learning models into production environments.
- CI/CD Pipelines: Develop and maintain continuous integration and continuous deployment pipelines to streamline the deployment process of ML models.
- Infrastructure Management: Design and manage scalable, reliable, and secure cloud infrastructure for ML workloads using platforms like AWS and Azure.
- Monitoring and Logging: Implement monitoring, logging, and alerting mechanisms to ensure the performance and reliability of deployed models.
- Automation: Automate ML workflows, including data preprocessing, model training, validation, and deployment, using tools like Kubeflow, MLflow, and Airflow (see the sketch below).
- Collaboration: Work closely with data scientists, data engineers, and business stakeholders to understand requirements and deliver solutions.
- Security and Compliance: Ensure that ML models and data workflows comply with security, privacy, and regulatory requirements.
- Performance Optimization: Optimize the performance of ML models and the underlying infrastructure for speed and cost-efficiency.

EXPERIENCE
- 4-6 years of experience in ML model deployment and operationalization.
- Technical Expertise: Proficiency in Python, Azure ML, AWS SageMaker, and other ML tools and frameworks.
- Cloud Platforms: Extensive experience with cloud platforms such as AWS and Azure.
- Containerization and Orchestration: Hands-on experience with Docker and Kubernetes for containerization and orchestration of ML workloads.

EDUCATION/KNOWLEDGE
- Educational Qualification: Master's degree (preferably in Computer Science) or B.Tech/B.E.
- Domain Knowledge: Familiarity with EMEA business operations is a plus.

OTHER IMPORTANT NOTES
- Flexible Shifts: Must be willing to work flexible shifts.
- Team Collaboration: Experience with team collaboration and cloud tools.
- Algorithm Building and Deployment: Proficiency in building and deploying algorithms on Azure/AWS platforms.

If you are interested in the opportunity, please share the following details along with your most recent resume to geeta.negi@compunnel.com:
- Total experience
- Relevant experience
- Current CTC
- Expected CTC
- Notice period (last working day if you are serving the notice period)
- Current location
- Skill 1: rating out of 5
- Skill 2: rating out of 5
- Skill 3: rating out of 5 (mention the skill)
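A minimal sketch of the workflow-automation responsibility above, written as an Airflow DAG (assuming Airflow 2.4+; the task bodies are placeholders standing in for real preprocessing, training, and deployment logic):

```python
# Hedged sketch of a retraining workflow as an Airflow DAG.
# Assumes Airflow 2.4+; task callables are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def preprocess():
    print("pull and clean training data")

def train():
    print("fit model and log metrics")

def deploy():
    print("push approved model to the serving environment")

with DAG(
    dag_id="model_retraining",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@weekly",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="preprocess", python_callable=preprocess)
    t2 = PythonOperator(task_id="train", python_callable=train)
    t3 = PythonOperator(task_id="deploy", python_callable=deploy)
    t1 >> t2 >> t3
```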
Posted 2 months ago
5 - 10 years
25 - 30 Lacs
Mumbai, Navi Mumbai, Chennai
Work from Office
We are looking for an AI Engineer (Senior Software Engineer). Interested candidates, please email resumes to mayura.joshi@lionbridge.com or WhatsApp 9987538863.

Responsibilities:
- Design, develop, and optimize AI solutions using LLMs (e.g., GPT-4, LLaMA, Falcon) and RAG frameworks.
- Implement and fine-tune models to improve response relevance and contextual accuracy.
- Develop pipelines for data retrieval, indexing, and augmentation to improve knowledge grounding.
- Work with vector databases (e.g., Pinecone, FAISS, Weaviate) to enhance retrieval capabilities (see the sketch below).
- Integrate AI models with enterprise applications and APIs.
- Optimize model inference for performance and scalability.
- Collaborate with data scientists, ML engineers, and software developers to align AI models with business objectives.
- Ensure ethical AI implementation, addressing bias, explainability, and data security.
- Stay updated with the latest advancements in generative AI, deep learning, and RAG techniques.

Requirements:
- 8+ years of experience in software development according to development standards.
- Strong experience in training and deploying LLMs using frameworks like Hugging Face Transformers, the OpenAI API, or LangChain.
- Proficiency in Retrieval-Augmented Generation (RAG) techniques and vector search methodologies.
- Hands-on experience with vector databases such as FAISS, Pinecone, ChromaDB, or Weaviate.
- Solid understanding of NLP, deep learning, and transformer architectures.
- Proficiency in Python and ML libraries (TensorFlow, PyTorch, LangChain, etc.).
- Experience with cloud platforms (AWS, GCP, Azure) and MLOps workflows.
- Familiarity with containerization (Docker, Kubernetes) for scalable AI deployments.
- Strong problem-solving and debugging skills.
- Excellent communication and teamwork abilities.
- Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field.
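A minimal sketch of the retrieval step in a RAG pipeline using FAISS. Random vectors stand in for real embeddings; an actual system would embed documents and queries with a sentence-embedding model before indexing.

```python
# Retrieval step of a RAG pipeline with FAISS.
# Random vectors are placeholders for real embeddings.
import faiss
import numpy as np

dim = 384
docs = ["refund policy", "shipping times", "warranty terms"]
doc_vectors = np.random.rand(len(docs), dim).astype("float32")  # placeholder embeddings

index = faiss.IndexFlatL2(dim)  # exact L2 search over the corpus
index.add(doc_vectors)

query_vector = np.random.rand(1, dim).astype("float32")
distances, ids = index.search(query_vector, k=2)
retrieved = [docs[i] for i in ids[0]]  # passages to ground the LLM prompt
print(retrieved)
```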
Posted 2 months ago
12 - 22 years
50 - 55 Lacs
Hyderabad, Gurugram
Work from Office
Job Summary - Director, Collection Platforms and AI

As a director, you will be essential to driving customer satisfaction by delivering tangible business results to customers. You will work in the Enterprise Data Organization and be an advocate and problem solver for the customers in your portfolio as part of the Collection Platforms and AI team. You will use communication and problem-solving skills to support customers on their automation journey, applying emerging automation tools to build and deliver end-to-end automation solutions for them.

Team: Collection Platforms and AI
The Enterprise Data Organization's objective is to drive growth across S&P divisions, enhance speed and productivity in our operations, and prepare our data estate for the future, benefiting our customers. Automation therefore represents a massive opportunity to improve quality and efficiency, to expand into new markets and products, and to create customer and shareholder value. Agentic automation is the next frontier in intelligent process evolution, combining AI agents, orchestration layers, and cloud-native infrastructure to enable autonomous decision-making and task execution. To leverage the advancements in automation tools, it is imperative not only to invest in the technologies but also to democratize them, build literacy, and empower the workforce. The Collection Platforms and AI team's mission is to drive this automation strategy across S&P Global and help create a truly digital workplace. We are responsible for creating, planning, and delivering transformational projects for the company using state-of-the-art technologies and data science methods, developed either in house or in partnership with vendors. We are transforming the way we collect the essential intelligence our customers need to make decisions with conviction, delivering it faster and at scale while maintaining the highest quality standards.

What we're looking for:
You will lead the design, development, and scaling of AI-driven agentic pipelines to transform workflows across S&P Global. This role requires a strategic leader who can architect end-to-end automation solutions using agentic frameworks, cloud infrastructure, and orchestration tools while managing senior stakeholders and driving adoption at scale.
- A visionary technical leader with knowledge of designing agentic pipelines and deploying AI applications in production environments.
- Understanding of cloud infrastructure (AWS/Azure/GCP), orchestration tools (e.g., Airflow, Kubeflow), and agentic frameworks (e.g., LangChain, AutoGen).
- Proven ability to translate business workflows into automation solutions, with emphasis on financial/data services use cases.
- An independent, proactive person who is innovative, adaptable, creative, and detail-oriented, with high energy and a positive attitude.
- Exceptional skills in listening to clients and articulating ideas and complex information in a clear and concise manner.
- Proven record of creating and maintaining strong relationships with senior members of client organizations, addressing their needs, and maintaining a high level of client satisfaction.
- Ability to identify the right solution for each type of problem, understanding the ultimate value of each project.
- Ability to operationalize this technology across S&P Global, delivering scalable solutions that enhance efficiency, reduce latency, and unlock new capabilities for internal and external clients.
- Exceptional communication skills, with experience presenting to C-level executives.

Responsibilities:
- Engage with multiple client areas (external and internal), truly understand their problems, and then deliver and support solutions that fit their needs.
- Understand the existing S&P Global product suite and leverage existing products as necessary to deliver a seamless end-to-end solution to the client.
- Evangelize agentic capabilities through workshops, demos, and executive briefings.
- Educate and spread awareness within the external client base about automation capabilities to increase usage and idea generation.
- Increase automation adoption by focusing on distinct users and distinct processes.
- Deliver exceptional communication to multiple layers of management for the client.
- Provide automation training, coaching, and assistance specific to a user's role.
- Demonstrate strong working knowledge of automation features to meet evolving client needs.
- Maintain extensive knowledge and literacy of the suite of products and services offered, including ongoing enhancements and new offerings, and how they fulfill customer needs.
- Establish monitoring frameworks for agent performance, drift detection, and self-healing mechanisms.
- Develop governance models for ethical AI agent deployment and compliance.

Preferred Qualifications:
- 12+ years of work experience with 5+ years in the Automation/AI space.
- Knowledge of: cloud platforms (AWS SageMaker, Azure ML, etc.); orchestration tools (Prefect, Airflow, etc.); agentic toolkits (LangChain, LlamaIndex, AutoGen); a framework-agnostic sketch of the agent loop follows this listing.
- Experience in productionizing AI applications.
- Strong programming skills in Python and common AI frameworks.
- Experience with multi-modal LLMs and integrating vision and text for autonomous agents.
- Excellent written and oral communication in English.
- Excellent presentation skills with a high degree of comfort speaking with senior executives, IT management, and developers.
- Hands-on ability to build quick prototypes/visuals to assist with high-level product concepts and capabilities.
- Experience in deployment and management of applications utilizing cloud-based infrastructure.
- A desire to work in a fast-paced and challenging work environment.
- Ability to work in cross-functional, multi-geographic teams.
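For orientation, here is a framework-agnostic sketch of the agent loop pattern (observe, decide, act) that agentic pipelines build on. It is plain Python with a placeholder policy; real deployments would delegate the decide step to an LLM via a toolkit such as LangChain or AutoGen, and the tool here is hypothetical.

```python
# Framework-agnostic agent loop sketch: observe -> decide -> act.
# The tool and decision policy are placeholders, not a real toolkit API.
def lookup_data(task: str) -> str:
    return f"fetched data for: {task}"

TOOLS = {"lookup": lookup_data}

def decide(task: str, history: list[str]) -> tuple[str, str]:
    # Placeholder policy; an agentic framework would query an LLM here.
    return ("lookup", task) if not history else ("finish", history[-1])

def run_agent(task: str, max_steps: int = 5) -> str:
    history: list[str] = []
    for _ in range(max_steps):
        action, arg = decide(task, history)
        if action == "finish":
            return arg
        history.append(TOOLS[action](arg))  # act, then observe the result
    return "max steps reached"

print(run_agent("collect index constituents"))
```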
Posted 2 months ago
3 - 7 years
4 - 7 Lacs
Hyderabad
Work from Office
What you will do
Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and implementing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing.
- Be a key team member assisting in the design and development of the data pipeline.
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems (a PySpark sketch follows this listing).
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks.
- Collaborate with multi-functional teams to understand data requirements and design solutions that meet business needs.
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
- Implement data security and privacy measures to protect sensitive data.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Identify and resolve complex data-related challenges.
- Adhere to standard processes for coding, testing, and designing reusable code/components.
- Explore new tools and technologies that will help improve ETL platform performance.
- Participate in sprint planning meetings and provide estimations on technical implementation.
- Collaborate and communicate effectively with product teams.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Master's degree with 4-6 years of experience in Computer Science, IT, or a related field; OR
- Bachelor's degree with 6-8 years of experience in Computer Science, IT, or a related field; OR
- Diploma with 10-12 years of experience in Computer Science, IT, or a related field.

Functional Skills:

Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, Spark SQL), workflow orchestration, and performance tuning for big data processing.
- Hands-on experience with various Python/R packages for EDA, feature engineering, and machine learning model training.
- Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools.
- Excellent problem-solving skills and the ability to work with large, complex datasets.
- Strong understanding of data governance frameworks, tools, and standard methodologies.
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA).

Good-to-Have Skills:
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development.
- Strong understanding of data modeling, data warehousing, and data integration concepts.
- Knowledge of Python/R, Databricks, SageMaker, OMOP.
Professional Certifications:
- Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments).
- Certified Data Scientist (preferred on Databricks or cloud environments).
- Machine Learning certification (preferred on Databricks or cloud environments).
- SAFe for Teams certification (preferred).

Soft Skills:
- Excellent critical-thinking and problem-solving skills.
- Strong communication and collaboration skills.
- Demonstrated awareness of how to function in a team setting.
- Demonstrated presentation skills.

Shift Information: This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
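An illustrative PySpark ETL step of the kind this role describes: read raw records, enforce a simple quality rule, and write a curated table. The paths and column names are hypothetical.

```python
# Illustrative PySpark ETL step; paths and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curate_lab_results").getOrCreate()

raw = spark.read.parquet("s3://example-bucket/raw/lab_results/")  # hypothetical path
curated = (
    raw.dropDuplicates(["sample_id"])
       .filter(F.col("result_value").isNotNull())       # simple quality gate
       .withColumn("ingested_at", F.current_timestamp())
)
curated.write.mode("overwrite").parquet("s3://example-bucket/curated/lab_results/")
```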
Posted 2 months ago
3 - 6 years
6 - 10 Lacs
Chennai
Work from Office
Role Summary
As part of our AI-first strategy at Creatrix Campus, you'll play a critical role in deploying, optimizing, and maintaining Large Language Models (LLMs) like LLaMA, Mistral, and CodeS across our SaaS platform. This role is not limited to experimentation; it is about operationalizing AI at scale. You'll ensure our AI services are reliable, secure, cost-effective, and product-ready for higher education institutions in 25+ countries. You'll work across infrastructure (cloud and on-prem), MLOps, and performance optimization while collaborating with software engineers, AI developers, and product teams to embed LLMs into real-world applications like accreditation automation, intelligent student forms, and predictive academic advising.

Key Responsibilities

LLM Deployment & Optimization
- Deploy, fine-tune, and optimize open-source LLMs (e.g., LLaMA, Mistral, CodeS, DeepSeek).
- Implement quantization (e.g., 4-bit, 8-bit) and pruning for efficient inference on commodity hardware (see the sketch after this listing).
- Build and manage inference APIs (REST/gRPC) for production use.

Infrastructure Management
- Set up and manage on-premise GPU servers and VM-based deployments.
- Build scalable cloud-based LLM infrastructure using AWS (SageMaker, EC2), Azure ML, or GCP Vertex AI.
- Ensure cost efficiency by choosing appropriate hardware and job scheduling strategies.

MLOps & Reliability Engineering
- Develop CI/CD pipelines for model training, testing, evaluation, and deployment.
- Integrate version control for models, data, and hyperparameters.
- Set up logging, tracing, and monitoring tools (e.g., MLflow, Prometheus, Grafana) for model performance and failure detection.

Security, Compliance & Performance
- Ensure data privacy (FERPA/GDPR) and enforce security best practices across deployments.
- Apply secure coding standards and implement RBAC, encryption, and network hardening for cloud/on-prem.

Cross-functional Integration
- Work closely with AI solution engineers, backend developers, and product owners to integrate LLM services into the platform.
- Support performance benchmarking and A/B testing of AI features across modules.

Documentation & Internal Enablement
- Document LLM pipelines, configuration steps, and infrastructure setup in internal playbooks.
- Create guides and reusable templates for future deployments and models.

Required Qualifications

Education: Bachelor's or Master's in Computer Science, AI/ML, Data Engineering, or a related field.

Technical Skills:
- Strong Python experience with ML libraries (e.g., PyTorch, Hugging Face Transformers).
- Familiarity with LangChain, LlamaIndex, or other RAG frameworks.
- Experience with Docker, Kubernetes, and API gateways (e.g., Kong, NGINX).
- Working knowledge of vector databases (FAISS, Pinecone, Qdrant).
- Familiarity with GPU deployment tools (CUDA, Triton Inference Server, Hugging Face Accelerate).

Experience:
- 3+ years in an AI/MLOps role, including experience in LLM fine-tuning and deployment.
- Hands-on work with model inference in production environments (both cloud and on-prem).
- Exposure to SaaS and modular product environments is a plus.
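A hedged sketch of 4-bit quantized loading with Hugging Face Transformers and bitsandbytes, the kind of inference optimization this role covers. The model name is an example; this assumes a CUDA GPU with the bitsandbytes package installed.

```python
# 4-bit quantized model loading with Transformers + bitsandbytes.
# Assumes a CUDA GPU; the model name is an example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # normal-float 4-bit weights
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16 for speed
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", quantization_config=bnb, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```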
Posted 2 months ago
10 - 14 years
12 - 16 Lacs
Mumbai
Work from Office
Skill required: Delivery - Advanced Analytics
Designation: I&F Decision Sci Practitioner Assoc Mgr
Qualifications: Master of Engineering / Masters in Business Economics
Years of Experience: 10 to 14 years

What would you do?
Data & AI: You will be a core member of Accenture Operations' global Data & AI group, an energetic, strategic, high-visibility and high-impact team, innovating and transforming the Accenture Operations business using machine learning and advanced analytics to support data-driven decisioning.

What are we looking for?
- Extensive experience in leading Data Science and Advanced Analytics delivery teams.
- Strong statistical programming experience in Python; working knowledge of cloud-native platforms like AWS SageMaker (preferred), Azure, or GCP.
- Experience working with large data sets and big data tools like AWS, SQL, PySpark, etc.
- Solid knowledge of at least two of the following: supervised and unsupervised learning, classification, regression, clustering, neural networks, ensemble modelling (random forest, boosted trees, etc.); a small ensemble sketch follows this listing.
- Experience working with pricing models is a plus.
- Experience in at least one of these business domains: Energy, CPG, Retail, Marketing Analytics, Customer Analytics, Digital Marketing, eCommerce, Health, Supply Chain.
- Extensive experience in client engagement and business development.
- Ability to work in a global, collaborative team environment.
- Quick learner, able to deliver results independently.

Qualifications: Master's / Ph.D. in Computer Science, Engineering, Statistics, Mathematics, Economics, or related disciplines.

Roles and Responsibilities:
- Lead a team of data scientists to build and deploy data science models that uncover deeper insights, predict future outcomes, and optimize business processes for clients.
- Refine and improve data science models based on feedback, new data, and evolving business needs.
- Analyze available data to identify opportunities for enhancing brand equity, improving retail margins, achieving profitable growth, and expanding market share for clients.
- Follow multiple approaches for project execution, from adapting existing assets to Operations use cases, to exploring third-party and open-source solutions for speed of execution and specific use cases, to engaging in fundamental research to develop novel solutions.
- Collaborate with other data scientists, subject matter experts, sales, and delivery teams from Accenture locations around the globe to deliver strategic advanced machine learning / data-AI solutions from design to deployment.
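A small sketch of the supervised-learning and ensemble-modelling work the posting lists, using scikit-learn's random forest on synthetic data:

```python
# Ensemble classification sketch with scikit-learn on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

clf = RandomForestClassifier(n_estimators=300, random_state=42).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```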
Posted 2 months ago
7 - 9 years
19 - 25 Lacs
Bengaluru
Work from Office
About The Role
Job Title: Industry & Function AI Decision Science Manager, S&C GN
Management Level: 07 - Manager
Location: Primary - Bengaluru; Secondary - Gurugram

Must-Have Skills: Consumer Goods & Services domain expertise; AI & ML; proficiency in Python, R, PySpark, SQL; experience with cloud platforms (Azure, AWS, GCP); expertise in Revenue Growth Management, Pricing Analytics, Promotion Analytics, PPA/Portfolio Optimization, Trade Investment Optimization.

Good-to-Have Skills: Experience with Large Language Models (LLMs) like ChatGPT, Llama 2, or Claude 2; familiarity with optimization methods, advanced visualization tools (Power BI, Tableau), and time series forecasting.

Job Summary: As a Decision Science Manager, you will lead the design and delivery of AI solutions in the Consumer Goods & Services domain. This role involves working closely with clients to provide advanced analytics and AI-driven strategies that deliver measurable business outcomes. Your expertise in analytics, problem-solving, and team leadership will help drive innovation and value for the organization.

Roles & Responsibilities:
- Analyze extensive datasets and derive actionable insights from Consumer Goods data sources (e.g., Nielsen, IRI, EPOS, TPM).
- Evaluate AI and analytics maturity in the Consumer Goods sector and develop data-driven solutions.
- Design and implement AI-based strategies to deliver significant client benefits.
- Employ structured problem-solving methodologies to address complex business challenges.
- Lead data science initiatives, mentor team members, and contribute to thought leadership.
- Foster strong client relationships and act as a key liaison for project delivery.
- Build and deploy advanced analytics solutions using Accenture's platforms and tools.
- Apply technical proficiency in Python, PySpark, R, SQL, and cloud technologies for solution deployment.
- Develop compelling data-driven narratives for stakeholder engagement.
- Collaborate with internal teams to innovate, drive sales, and build new capabilities.
- Drive insights in critical Consumer Goods domains such as: Revenue Growth Management; pricing analytics and pricing optimization; promotion analytics and promotion optimization; SKU rationalization / portfolio optimization; price pack architecture; decomposition models; time series forecasting.

Professional & Technical Skills:
- Proficiency in AI and analytics solutions (descriptive, diagnostic, predictive, prescriptive, generative).
- Expertise in delivering large-scale projects/programs for Consumer Goods clients on Revenue Growth Management: pricing analytics, promotion analytics, portfolio optimization, etc.
- Deep and clear understanding of typical data sources used in RGM programs: POS, syndicated, shipment, finance, promotion calendar, etc.
- Strong programming skills in Python, R, PySpark, and SQL; experience with cloud platforms (Azure, AWS, GCP) and proficiency with services like Databricks and SageMaker.
- Deep knowledge of traditional and advanced machine learning techniques, including deep learning.
- Experience with optimization techniques (linear, nonlinear, evolutionary methods).
- Familiarity with visualization tools like Power BI and Tableau.
- Experience with Large Language Models (LLMs) like ChatGPT and Llama 2.
- Certifications in Data Science or related fields.

Additional Information: The ideal candidate has a strong educational background in data science and a proven track record of delivering impactful AI solutions in the Consumer Goods sector.
This position offers opportunities to lead innovative projects and collaborate with global teams. Join Accenture to leverage cutting-edge technologies and deliver transformative business outcomes.

Qualifications
Experience: Minimum 7-9 years of experience in data science, particularly in the Consumer Goods sector.
Educational Qualification: Bachelor's or Master's degree in Statistics, Economics, Mathematics, Computer Science, or an MBA (Data Science specialization preferred).
Posted 2 months ago
3 - 8 years
5 - 10 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: AWS Architecture
Good-to-have skills: Amazon Web Services (AWS)
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full term education

Job Title: AWS Data Engineer

About The Role:
We are seeking a skilled AWS Data Engineer with expertise in AWS services such as Glue, Lambda, SageMaker, CloudWatch, and S3, coupled with strong Python/PySpark development skills. The ideal candidate will have a solid grasp of ETL concepts, be proficient in writing complex SQL queries, and be capable of handling client interactions independently. They should demonstrate a track record of efficiently resolving tickets, tasks, bugs, and enhancements within stipulated timelines. Good communication skills are essential, and basic knowledge of databases is preferred.

Must-Have Skills:
- AWS Glue and AWS architecture expertise
- Python/PySpark development
- SQL mastery (advanced knowledge of SQL)
- Client handling
- Problem-solving skills
- Communication skills

Responsibilities:
- Develop and maintain AWS-based data solutions utilizing services like Glue, Lambda, SageMaker, CloudWatch, DynamoDB, and S3 (a Glue job skeleton follows this listing).
- Implement ETL processes effectively within Glue jobs and PySpark scripts, ensuring optimal performance and reliability.
- Proficiently write and optimize complex SQL queries to extract, transform, and load data from various sources.
- Independently handle client interactions: understand requirements, provide technical guidance, and ensure client satisfaction.
- Resolve tickets, tasks, bugs, and enhancements promptly, meeting defined resolution timeframes.
- Communicate effectively with team members, stakeholders, and clients, providing updates, reports, and insights as required.
- Maintain a basic understanding of databases, supporting data-related activities and troubleshooting when necessary.
- Stay updated with industry trends, AWS advancements, and best practices, contributing to continuous improvement initiatives within the team.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience working with AWS services, particularly Glue, Lambda, SageMaker, CloudWatch, and S3.
- Strong proficiency in Python and/or PySpark development for data processing and analysis.
- Solid understanding of ETL concepts, databases, and data warehousing principles.
- Excellent problem-solving skills and ability to work independently or within a team.
- Outstanding communication skills, both verbal and written, with the ability to interact professionally with clients and colleagues.
- Ability to manage multiple tasks concurrently and prioritize effectively in a dynamic work environment.

Good to have: Basic knowledge of relational databases such as MySQL, PostgreSQL, or SQL Server.
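A skeleton of an AWS Glue PySpark job of the type this role maintains. The catalog database, table, and output path are placeholders.

```python
# AWS Glue PySpark job skeleton; catalog entries and paths are placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog, deduplicate, and write a clean copy.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="orders"  # hypothetical catalog entries
)
df = dyf.toDF().dropDuplicates(["order_id"])
df.write.mode("overwrite").parquet("s3://example-bucket/clean/orders/")

job.commit()
```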
Posted 2 months ago
5 - 10 years
27 - 30 Lacs
Kochi, Thiruvananthapuram
Work from Office
We are seeking a highly skilled and independent Senior Machine Learning Engineer (Contractor) to design, develop, and deploy advanced ML pipelines in an AWS environment.

Key Responsibilities:
- Design, develop, and deploy robust and scalable machine learning models.
- Build and maintain ML pipelines for data preprocessing, model training, evaluation, and deployment.
- Collaborate with data scientists, data engineers, and product teams to identify ML use cases and develop prototypes.
- Optimize models for performance, accuracy, and scalability in real-time or batch systems.
- Monitor and troubleshoot deployed models to ensure ongoing performance.

Location: Kochi, Trivandrum, Remote.
Posted 2 months ago
3 - 5 years
0 - 0 Lacs
Kochi
Work from Office
Job Summary:
We are seeking a highly skilled Senior Python Developer with expertise in Machine Learning (ML), Large Language Models (LLMs), and cloud technologies. The ideal candidate will be responsible for end-to-end execution, from requirement analysis and discovery to the design, development, and implementation of ML-driven solutions. The role demands both technical excellence and strong communication skills to work directly with clients, delivering POCs, MVPs, and scalable production systems.

Key Responsibilities:
- Collaborate with clients to understand business needs and identify ML-driven opportunities.
- Independently design and develop robust ML models, time series models, deep learning solutions, and LLM-based systems (a time-series sketch follows this listing).
- Deliver Proof of Concepts (POCs) and Minimum Viable Products (MVPs) with agility and innovation.
- Architect and optimize Python-based ML applications focusing on performance and scalability.
- Utilize GitHub for version control, collaboration, and CI/CD automation.
- Deploy ML models on cloud platforms such as AWS, Azure, or GCP.
- Follow best practices in software development, including clean code, automated testing, and thorough documentation.
- Stay updated with evolving trends in ML, LLMs, and the cloud ecosystem.
- Work collaboratively with Data Scientists, DevOps engineers, and Business Analysts.

Must-Have Skills:
- Strong programming experience in Python and frameworks such as FastAPI, Flask, or Django.
- Solid hands-on expertise in ML using Scikit-learn, TensorFlow, PyTorch, Prophet, etc.
- Experience with LLMs (e.g., OpenAI, LangChain, Hugging Face, vector search).
- Proficiency in cloud services like AWS (S3, Lambda, SageMaker), Azure ML, or GCP Vertex AI.
- Strong grasp of software engineering concepts: OOP, design patterns, data structures.
- Experience with version control systems (Git/GitHub/GitLab) and setting up CI/CD pipelines.
- Ability to work independently and solve complex problems with minimal supervision.
- Excellent communication and client interaction skills.

Required Skills: Python, Machine Learning, Machine Learning Models
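A hedged sketch of a Prophet time-series forecast, one of the toolkits the posting names. The demand series here is synthetic; real work would start from client data.

```python
# Prophet forecasting sketch on synthetic daily data.
import pandas as pd
from prophet import Prophet

history = pd.DataFrame({
    "ds": pd.date_range("2024-01-01", periods=120, freq="D"),
    "y": [100 + i * 0.5 + (i % 7) * 3 for i in range(120)],  # trend + weekly bump
})

model = Prophet(weekly_seasonality=True)
model.fit(history)
future = model.make_future_dataframe(periods=30)   # 30 days ahead
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```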
Posted 2 months ago
5 - 10 years
10 - 20 Lacs
Pune
Hybrid
Experienced AI Ops Engineer. The role focuses on deploying, monitoring, and scaling AI/GenAI models using MLOps, CI/CD, cloud (AWS/Azure/GCP), Python, Kubernetes, MLflow, security, and automation.
Posted 2 months ago
5 - 10 years
8 - 18 Lacs
Pune
Hybrid
Experienced AI Engineer with 4+ years in deploying scalable ML solutions on cloud platforms like AWS, Azure, and GCP. Skilled in Python, SQL, Kubernetes, and MLOps practices including CI/CD and model monitoring.
Posted 2 months ago
6 - 10 years
15 - 19 Lacs
Bengaluru
Work from Office
As a Principal Data Engineer on the Marketplace team, you will be responsible for analysing and interpreting complex datasets to generate insights that directly influence business strategy and decision-making. You will apply advanced statistical analysis and predictive modelling techniques to identify trends, predict future outcomes, and assess data quality. These insights will drive data-driven decisions and strategic initiatives across the organization.

The Marketplace team is responsible for building the services where our customers go to purchase pre-configured software installations on the platform of their choice. The challenges here span the entire stack, from back-end distributed services operating at cloud scale, to e-commerce transactions, to the actual web apps that users interact with. This is the perfect role for someone experienced in designing distributed systems, writing and debugging code across an entire stack (UI, APIs, databases, cloud infrastructure services), championing operational excellence, mentoring junior engineers, and driving development process improvements and excellence in a start-up style environment.

Career Level: IC4

Responsibilities
As a Principal Data Engineer, you will be at the forefront of Oracle's data initiatives, playing a pivotal role in transforming raw data into actionable insights. Collaborating with data scientists and business stakeholders, you will design scalable data pipelines, optimize data infrastructure, and ensure the availability of high-quality datasets for strategic analysis. This role goes beyond data engineering, requiring hands-on involvement in statistical analysis and predictive modeling. You will use techniques such as regression analysis, trend forecasting, and time-series modeling to extract meaningful insights from data, directly supporting business decision-making (a regression sketch follows this listing).

Basic Qualifications:
- 7+ years of experience in data engineering and analytics, with a strong background in designing scalable database architectures, building and optimizing data pipelines, and applying statistical analysis to deliver strategic insights across complex, high-volume data environments.
- Deep knowledge of big data frameworks such as Apache Spark, Apache Flink, Apache Airflow, Presto, and Kafka, and of data warehouse solutions.
- Experience working with other cloud platform teams and accommodating requirements from those teams (compute, networking, search, store).
- Excellent written and verbal communication skills, with the ability to present complex information in a clear, concise manner to all audiences.
- Design and optimize database structures to ensure scalability, performance, and reliability within Oracle ADW and OCI environments, including maintaining schema integrity, managing database objects, and implementing efficient table structures that support seamless reporting and analytical needs.
- Build and manage data pipelines that automate the flow of data from diverse sources into Oracle databases, using ETL processes to transform data for analysis and reporting.
- Conduct data quality assessments, identify anomalies, and validate the accuracy of data ingested into our systems. Working alongside data governance teams, you will establish metrics to measure data quality and implement controls to uphold data integrity, ensuring reliable data for stakeholders.
- Mentor junior team members and share best practices in data analysis, modeling, and domain expertise.
Preferred Qualifications:
- Solid understanding of statistical methods, hypothesis testing, data distributions, regression analysis, and probability.
- Proficiency in Python for data analysis and statistical modeling; experience with libraries like pandas, NumPy, and SciPy.
- Knowledge of methods and techniques for data quality assessment, anomaly detection, and validation processes; skills in defining data quality metrics, creating data validation rules, and implementing controls to monitor and uphold data integrity.
- Familiarity with visualization tools (e.g., Tableau, Power BI, Oracle Analytics Cloud) and libraries (e.g., Matplotlib, Seaborn) to convey insights effectively.
- Strong communication skills for collaborating with stakeholders and translating business goals into technical data requirements; ability to contextualize data insights in business terms and present findings to non-technical stakeholders in a meaningful way.
- Ability to cleanse, transform, and aggregate data from various sources, ensuring it's ready for analysis.
- Experience with relational database management and design, specifically in Oracle environments (e.g., Oracle Autonomous Data Warehouse, Oracle Database); skills in designing, maintaining, and optimizing database schemas to ensure efficiency, scalability, and reliability.
- Advanced SQL skills for complex queries, indexing, stored procedures, and performance tuning.
- Experience with ETL tools such as Oracle Data Integrator (ODI) or other data integration frameworks.
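A sketch of the kind of regression and trend analysis this role applies, using statsmodels OLS on synthetic weekly revenue data:

```python
# Trend regression sketch with statsmodels OLS on synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
weeks = np.arange(52)
revenue = 10_000 + 150 * weeks + rng.normal(0, 500, 52)  # upward trend + noise

X = sm.add_constant(weeks)        # intercept + time regressor
fit = sm.OLS(revenue, X).fit()
print(fit.params)                 # estimated baseline and weekly growth
print(fit.conf_int(alpha=0.05))   # 95% confidence intervals
```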
Posted 2 months ago
3 - 6 years
11 - 19 Lacs
Hyderabad
Work from Office
Job Description:
The Data Engineer at Mirabel Technologies should be an avid programmer of Java, Python, R, or Scala with expertise in implementing complex algorithms. The Data Engineer will work on collecting, storing, processing, and analyzing huge sets of data. The primary focus will be on choosing optimal solutions for these purposes, then maintaining, implementing, and monitoring them. You will also be responsible for integrating them with the architecture used across the company in various products.

Skillset:
1. Proficient understanding of distributed computing principles
2. Ability to build, run, and manage large clusters
3. Hadoop v2, MapReduce, HDFS
4. Java, Python
5. Large-scale crawling: Scrapy, Nutch, and custom crawling solutions
6. Experience with Apache Solr/Lucene
7. NoSQL databases, such as MongoDB, HBase, Cassandra
8. Knowledge of various ETL techniques and frameworks, such as Flume
9. Experience with NLP tools and systems for POS tagging, NER, and information extraction
10. Experience with machine learning: regression, classification, decision trees
11. Experience with Linux / AWS

Experience: 4-5 years
Key skills: NLP, LLM, AWS SageMaker, Deep Learning, and NoSQL databases
Posted 2 months ago
3 - 6 years
5 - 8 Lacs
Pune
Work from Office
Technical Infrastructure - here's just some of what we use:
- AWS (EC2, IAM, EKS, etc.), Terraform Enterprise, Docker, Kubernetes, Aurora, Mesos, HashiCorp Vault and Consul
- Datadog and PagerDuty
- Microservices architecture, Spring, Java & NodeJS, React, Koa, Express.js
- Amazon RDS, DynamoDB, Postgres, Oracle, MySQL, GitHub, Jenkins, Concourse CI, JFrog Artifactory

About the role:
You will constantly be asking: what are the most important infrastructure problems we need to solve today that will increase our applications' and infrastructure's reliability and performance? You will apply your deep technical knowledge, taking a broad look at our technology infrastructure. You'll help us identify common and systematic issues and validate these, prioritizing which to strategically address first.

We value collaboration. So, you will partner with our SRE/DevOps team, discussing and refining your ideas and preparing proofs of concept. You will present and validate these across technology teams, figuring out the best solution, and you'll be given ownership to engineer and implement your solutions.

There are lots of interesting technology problems for you to solve, so you are constantly applying the latest thinking. These include implementing canary deployments, designing a new automated pipeline solution, extending Kubernetes capabilities, implementing machine learning to build load testing, ensuring immutability of containerization, and more. You will get to evaluate existing technologies and design the future state without being afraid to challenge the status quo. And you'll regularly review existing infrastructure, looking for opportunities to improve (e.g. service improvements, cost reduction, security, performance). You'll also get to automate everything necessary (a boto3 sketch follows this listing), combining reliability with a pragmatic approach, doing it right the first time.

We're continuing our journey of making our code and configuration deployments self-serve for our development teams. You'll help us build and maintain the right tooling, and you'll have ownership to design and implement the infrastructure needed. You'll also be involved in the daily management of our AWS infrastructure. This means working with our Agile development teams to troubleshoot server, application, and performance issues.

Skills & Experience:
- Relevant 3 to 6 years of hands-on SRE/DevOps experience in an Agile environment.
- Substantial experience with AWS services in a production environment.
- Demonstrated expertise in managing and modernizing legacy systems and infrastructure.
- Ability to collaborate effectively with both engineers and operations, and comfort recommending best practices.
- The expertise and skills to navigate the AWS ecosystem, knowing when and where to recommend the most appropriate service and/or usage pattern.
- Experience resolving outages: able to quickly diagnose issues and be instrumental in restoring normal service levels.
- Intellectual curiosity and an appetite to learn more.
- Strong hands-on experience working with Linux environments; Windows experience is a plus.
- Strong proficiency in scripting languages (e.g., Bash, Python) for automation and process optimization.
- Experience with CI/CD tools such as Jenkins, GitHub Actions, and preferably Concourse CI.
- Expertise in containerization technologies like Docker and orchestration tools such as Kubernetes.
- Practical experience managing event-driven systems, messaging queues, and load balancers.
- Strong understanding of monitoring, logging, and observability tools to ensure system reliability; Datadog and PagerDuty exposure is good to have.
- Proven ability to troubleshoot critical outages, identify root causes, and restore service quickly.
- Proficiency in HashiCorp technologies, including Terraform (IaC), Vault (secret management), and Consul (service discovery and config management).

You'll also have significant experience and/or an interest in the following:
- Managing cloud infrastructure as code, preferably using Terraform.
- Application container management and orchestration, primarily in Kubernetes environments, preferably AWS EKS.
- Maintaining managed databases, including AWS RDS: how to tune and scale them, and how performance and reliability are achieved.
- Good understanding of PKI infrastructure and CDN technologies, including AWS CloudFront.
- Expertise in AWS security, including the AWS IAM service.
- Experience with AWS Lambda and AWS SageMaker.
- Experience and a strong understanding of firewalls, and network and application load balancing.
- A strong and informed point of view with respect to monitoring tools and how best to use them.
- Ability to work in cloud-based environments spanning multiple AWS accounts, including their management and integration.
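An illustrative automation script of the sort this SRE role writes: flagging EC2 instances that fail status checks via boto3. The region is an assumption; alerting would normally route through Datadog or PagerDuty rather than print.

```python
# Flag EC2 instances failing status checks; region is an assumption.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
resp = ec2.describe_instance_status(IncludeAllInstances=True)

for status in resp["InstanceStatuses"]:
    checks = status["InstanceStatus"]["Status"]
    if checks not in ("ok", "initializing"):
        # In production this would page via PagerDuty/Datadog, not print.
        print(f"{status['InstanceId']} failing status checks: {checks}")
```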
Posted 2 months ago