
661 SageMaker Jobs - Page 9

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka

On-site


Job Information
- Number of Positions: 1
- Industry: Engineering
- Date Opened: 06/09/2025
- Job Type: Permanent
- Work Experience: 2-3 years
- Location: Bangalore, Karnataka, India (560037)

About Us

CloudifyOps is a company with DevOps and Cloud in our DNA. CloudifyOps enables businesses to become more agile and innovative through a comprehensive portfolio of services that addresses hybrid IT transformation, cloud transformation, and end-to-end DevOps workflows. We are a proud Advanced Partner of Amazon Web Services and have deep expertise in Microsoft Azure and Google Cloud Platform solutions. We are passionate about what we do: the novelty and the excitement of helping our customers accomplish their goals drive us to become excellent at what we do.

Culture at CloudifyOps

Working at CloudifyOps is a rewarding experience! Great people, a work environment that thrives on creativity, and the opportunity to take on roles beyond a defined job description are just some of the reasons you should work with us.

About the Role

We are seeking a proactive and technically skilled AI/ML Engineer with 2-3 years of experience to join our growing technology team. The ideal candidate will have hands-on expertise in AWS-based machine learning, Agentic AI, and Generative AI tools, especially within the Amazon AI ecosystem. You will play a key role in building intelligent, scalable solutions that address complex business challenges.

Key Responsibilities

1. AWS-Based Machine Learning
- Develop, train, and fine-tune ML models on AWS SageMaker, Bedrock, and EC2.
- Implement serverless ML workflows using Lambda, Step Functions, and EventBridge.
- Optimize models for cost/performance using AWS Inferentia/Trainium.

2. MLOps & Productionization
- Build CI/CD pipelines for ML using AWS SageMaker Pipelines, MLflow, or Kubeflow.
- Containerize models with Docker and deploy via AWS EKS/ECS/Fargate.
- Monitor models in production using AWS CloudWatch and SageMaker Model Monitor.

3. Agentic AI Development
- Design autonomous agent systems (e.g., AutoGPT, BabyAGI) for task automation.
- Integrate multi-agent frameworks (LangChain, AutoGen) with AWS services.
- Implement RAG (Retrieval-Augmented Generation) for agent knowledge enhancement.

4. Generative AI & LLMs
- Fine-tune and deploy LLMs (GPT-4, Claude, Llama 2/3) using LoRA/QLoRA.
- Build Generative AI apps (chatbots, content generators) with LangChain and LlamaIndex.
- Optimize prompts and evaluate LLM performance using AWS Bedrock/Amazon Titan.

5. Collaboration & Innovation
- Work with cross-functional teams to translate business needs into AI solutions.
- Collaborate with DevOps and Cloud Engineering teams to develop scalable, production-ready AI systems.
- Stay updated with cutting-edge AI research (arXiv, NeurIPS, ICML).

6. Governance & Documentation
- Implement model governance frameworks to ensure ethical AI/ML deployments.
- Design reproducible ML pipelines following MLOps best practices (versioning, testing, monitoring).
- Maintain detailed documentation for models, APIs, and workflows (Markdown, Sphinx, ReadTheDocs).
- Create runbooks for model deployment, troubleshooting, and scaling.

Technical Skills
- Programming: Python (PyTorch, TensorFlow, Hugging Face Transformers).
- AWS: SageMaker, Lambda, ECS/EKS, Bedrock, S3, IAM.
- MLOps: MLflow, Kubeflow, Docker, GitHub Actions/GitLab CI.
- Generative AI: prompt engineering, LLM fine-tuning, RAG, LangChain.
- Agentic AI: AutoGPT, BabyAGI, multi-agent orchestration.
- Data Engineering: SQL, PySpark, AWS Glue/EMR.

Soft Skills
- Strong problem-solving and analytical thinking.
- Ability to explain complex AI concepts to non-technical stakeholders.

What We're Looking For
- Bachelor's/Master's in CS, AI, Data Science, or a related field.
- 2-3 years of industry experience in AI/ML engineering.
- Portfolio of deployed ML/AI projects (GitHub, blog, case studies).
- Good to have: AWS Certified Machine Learning - Specialty certification.

Why Join Us?
- Innovative Projects: work on cutting-edge AI applications that push the boundaries of technology.
- Collaborative Environment: join a team of passionate engineers and researchers committed to excellence.
- Career Growth: opportunities for professional development and advancement in the rapidly evolving field of AI.

Equal Opportunity Employer

CloudifyOps is proud to be an equal opportunity employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, color, sex, religion, national origin, disability, pregnancy, marital status, sexual orientation, gender reassignment, veteran status, or other protected category.
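The RAG responsibility above can be sketched in miniature. The retrieval step below uses naive keyword overlap purely for illustration; a production pipeline would use an embedding model and a vector store (e.g., via Amazon Bedrock), and all names here are hypothetical:

```python
# Minimal sketch of the retrieval step in a RAG pipeline (illustrative only).

def tokenize(text):
    return set(text.lower().split())

def retrieve(query, documents, k=2):
    """Rank documents by naive token overlap with the query."""
    q = tokenize(query)
    scored = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Augment the user query with retrieved context before calling an LLM."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "SageMaker trains and deploys machine learning models on AWS.",
    "Lambda runs serverless functions in response to events.",
    "EventBridge routes events between AWS services.",
]
print(build_prompt("How do I deploy machine learning models?", docs))
```

The augmented prompt, rather than the raw query, is what gets sent to the model, which is how the agent's answers stay grounded in the retrieved knowledge.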

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka

On-site


Bangalore, Karnataka, India | Job ID 764972

Join our Team

Ericsson’s R&D Data team is seeking a visionary and technically exceptional Principal Machine Learning Engineer to lead the design, development, and deployment of advanced ML systems at scale. This role sits at the strategic intersection of machine learning, data engineering, and cloud-native architecture—shaping the next generation of AI-driven services at Ericsson.

As a senior technical leader, you will architect and guide the end-to-end ML lifecycle, from data strategy and engineering to large-scale model deployment and continuous optimization. You’ll partner closely with data engineers, software developers, and product stakeholders, while mentoring a high-performing team of ML engineers. Your work will help scale intelligent systems that power mission-critical R&D and network solutions.

Key Responsibilities
- Architect and implement scalable ML solutions, deeply integrated with robust and reliable data pipelines.
- Own the complete ML lifecycle: data ingestion, preprocessing, feature engineering, model design, training, evaluation, deployment, and monitoring.
- Design and optimize data architectures supporting both batch and streaming ML use cases.
- Collaborate with data engineering teams to build real-time and batch pipelines using managed streaming platforms such as Kafka or equivalent technologies.
- Guide the development and automation of ML workflows using modern MLOps and CI/CD practices.
- Mentor and lead ML engineers, establishing engineering best practices and fostering a high-performance, collaborative culture.
- Align ML efforts with business objectives by working cross-functionally with data scientists, engineers, and product managers.
- Stay current with the latest ML and data engineering advancements, integrating emerging tools and frameworks into scalable, production-ready systems.
- Champion responsible AI practices, including model governance, explainability, fairness, and compliance.

Required Qualifications
- 8+ years of experience in machine learning, applied AI, or data science, with a proven record of delivering ML systems at scale.
- 3+ years of experience in data engineering or building ML-supportive data infrastructure and pipelines.
- Advanced degree (MS or PhD preferred) in Computer Science, Data Engineering, Machine Learning, or a related technical field.
- Proficiency in Python (preferred), with experience in Java or C++ for backend or performance-critical tasks.
- Deep expertise with ML frameworks (TensorFlow, PyTorch, JAX) and cloud platforms, especially AWS (SageMaker, Lambda, Step Functions, S3, etc.).
- Experience with managed streaming data platforms such as Amazon MSK or similar technologies for real-time ML data pipelines.
- Experience with distributed systems and data processing tools such as Spark, Airflow, and AWS Glue.
- Fluency in MLOps best practices, including CI/CD, model versioning, observability, and automated retraining pipelines.
- Strong leadership skills, with experience mentoring engineers and influencing technical direction across teams.
- Excellent collaboration and communication skills, with the ability to align ML strategy with product and business needs.
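The batch-versus-streaming distinction this role centers on can be illustrated with the same aggregation computed both ways. This is a toy sketch in plain Python (a real deployment would run on Kafka/Spark, not in-process lists), and the event data is invented:

```python
# Illustrative only: one aggregation, computed over a full batch and as a
# tumbling window over a (simulated) event stream.

from collections import defaultdict

def batch_mean(events):
    """Batch: aggregate over the entire dataset at once."""
    return sum(v for _, v in events) / len(events)

def tumbling_window_means(events, window_s=60):
    """Streaming: aggregate per fixed, non-overlapping time window."""
    windows = defaultdict(list)
    for ts, value in events:           # (timestamp_seconds, metric_value)
        windows[ts // window_s].append(value)
    return {w: sum(vs) / len(vs) for w, vs in sorted(windows.items())}

events = [(5, 10.0), (30, 20.0), (65, 30.0), (90, 50.0)]
print(batch_mean(events))              # one number for the whole batch
print(tumbling_window_means(events))   # one mean per 60-second window
```

The batch path gives a single global statistic, while the windowed path yields per-interval values suitable for real-time features.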

Posted 1 week ago

Apply

7.0 years

0 Lacs

Greater Chennai Area

Remote


Do you want to make a global impact on patient health? Join Pfizer Digital’s Artificial Intelligence, Data, and Advanced Analytics organization (AIDA) to leverage cutting-edge technology for critical business decisions and enhance customer experiences for colleagues, patients, and physicians. Our team is at the forefront of Pfizer’s transformation into a digitally driven organization, using data science and AI to change patients’ lives. The Data Science Industrialization team leads engineering efforts to advance AI and data science applications from POCs and prototypes to full production.

As a Senior Manager, AI and Analytics Data Engineer, you will be part of a global team responsible for designing, developing, and implementing robust data layers that support data scientists and key advanced analytics/AI/ML business solutions. You will partner with cross-functional data scientists and Digital leaders to ensure efficient and reliable data flow across the organization, and lead development of data solutions to support our data science community and drive data-centric decision-making. Join our diverse team in making an impact on patient health through the application of cutting-edge technology and collaboration.

Role Responsibilities
- Lead development of data engineering processes to support data scientists and analytics/AI solutions, ensuring data quality, reliability, and efficiency.
- As a data engineering tech lead, enforce best practices, standards, and documentation to ensure consistency and scalability, and facilitate related trainings.
- Provide strategic and technical input on the AI ecosystem, including platform evolution, vendor scans, and new capability development.
- Act as a subject matter expert for data engineering on cross-functional teams in bespoke organizational initiatives, providing thought leadership and execution support for data engineering needs.
- Train and guide junior developers on concepts such as data modeling, database architecture, data pipeline management, DataOps and automation, tools, and best practices.
- Stay updated with the latest advancements in data engineering technologies and tools, and evaluate their applicability for improving our data engineering capabilities.
- Direct data engineering research to advance design and development capabilities.
- Collaborate with stakeholders to understand data requirements and address them with data solutions.
- Partner with the AIDA Data and Platforms teams to enforce best practices for data engineering and data solutions.
- Demonstrate a proactive approach to identifying and resolving potential system issues.
- Communicate the value of reusable data components to end-user functions (e.g., Commercial, Research and Development, and Global Supply) and promote innovative, scalable data engineering approaches to accelerate data science and AI work.

Basic Qualifications
- Bachelor's degree in computer science, information technology, software engineering, or a related field (Data Science, Computer Engineering, Information Systems, or a related discipline).
- 7+ years of hands-on experience with SQL, Python, and object-oriented scripting languages (e.g., Java, C++) in building data pipelines and processes.
- Proficiency in SQL programming, including the ability to create and debug stored procedures, functions, and views.
- Recognized by peers as an expert in data engineering, with deep expertise in data modeling, data governance, and data pipeline management principles.
- In-depth knowledge of modern data engineering frameworks and tools such as Snowflake, Redshift, Spark, Airflow, Hadoop, Kafka, and related technologies.
- Experience working in a cloud-based analytics ecosystem (AWS, Snowflake, etc.).
- Familiarity with machine learning and AI technologies and their integration with data engineering pipelines.
- Demonstrated experience interfacing with internal and external teams to develop innovative data solutions.
- Strong understanding of the Software Development Life Cycle (SDLC) and the data science development lifecycle (CRISP-DM).
- Highly self-motivated to deliver both independently and with strong team collaboration.
- Ability to creatively take on new challenges and work outside your comfort zone.
- Strong English communication skills (written and verbal).

Preferred Qualifications
- Advanced degree in Data Science, Computer Engineering, Computer Science, Information Systems, or a related discipline (preferred, but not required).
- Experience in software/product engineering.
- Experience with data-science-enabling technology, such as Dataiku Data Science Studio, AWS SageMaker, or other data science platforms.
- Familiarity with containerization technologies like Docker and orchestration platforms like Kubernetes.
- Experience working effectively in a distributed remote team environment.
- Hands-on experience working in Agile teams, processes, and practices.
- Expertise in cloud platforms such as AWS, Azure, or GCP.
- Proficiency in using version control systems like Git.
- Pharma & Life Science commercial functional knowledge and commercial data literacy.
- Ability to work non-traditional hours interacting with global teams spanning different regions (e.g., North America, Europe, Asia).

Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates.

Information & Business Tech
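The data-pipeline work described above follows the classic extract-transform-load pattern. A minimal sketch using only the standard library (sqlite3 standing in for a warehouse such as Snowflake or Redshift; the records and table are invented, not Pfizer's stack):

```python
# Minimal ETL sketch: extract raw records, validate/type-cast, load to a store.

import sqlite3

def extract():
    """Extract: raw records, e.g. pulled from an API or a landing zone."""
    return [("aspirin", "100"), ("ibuprofen", "bad"), ("aspirin", "250")]

def transform(rows):
    """Transform: validate and type-cast, dropping bad records."""
    clean = []
    for name, dose in rows:
        if dose.isdigit():                 # basic data-quality gate
            clean.append((name, int(dose)))
    return clean

def load(rows, conn):
    """Load: write the cleaned rows to the analytics store."""
    conn.execute("CREATE TABLE IF NOT EXISTS doses (drug TEXT, mg INTEGER)")
    conn.executemany("INSERT INTO doses VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
print(conn.execute("SELECT drug, SUM(mg) FROM doses GROUP BY drug").fetchall())
```

The data-quality gate in `transform` is where the "ensuring data quality and reliability" responsibility lives in even the smallest pipeline.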

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


MLOps Engineer
Location: PAN India

Job Responsibilities
- Research and implement MLOps tools, frameworks, and platforms for our Data Science projects.
- Work on a backlog of activities to raise MLOps maturity in the organization.
- Proactively introduce a modern, agile, and automated approach to Data Science.
- Conduct internal training and presentations about MLOps tools’ benefits and usage.

Required Experience and Qualifications
- Wide experience with Kubernetes.
- Experience in operationalization of Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g., Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube).
- Good understanding of ML and AI concepts.
- Hands-on experience in ML model development.
- Proficiency in Python for both ML and automation tasks.
- Good knowledge of Bash and the Unix command-line toolkit.
- Experience implementing CI/CD/CT pipelines.
- Experience with cloud platforms (preferably AWS) would be an advantage.
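The "CT" (continuous training) part of the CI/CD/CT pipelines mentioned above usually hinges on a drift check: compare live feature statistics against the training baseline and trigger retraining past a threshold. The sketch below is purely illustrative (real setups commonly use PSI or Kolmogorov-Smirnov tests; the metric, threshold, and data here are invented):

```python
# Toy drift check that could gate an automated retraining job.

def mean(xs):
    return sum(xs) / len(xs)

def drift_score(baseline, live):
    """Relative shift of the live mean versus the training-time mean."""
    b = mean(baseline)
    return abs(mean(live) - b) / (abs(b) or 1.0)

def should_retrain(baseline, live, threshold=0.25):
    return drift_score(baseline, live) > threshold

train_latency = [100, 110, 95, 105]     # feature values seen at training time
live_latency = [150, 160, 155, 170]     # values seen in production

if should_retrain(train_latency, live_latency):
    print("drift detected -> trigger retraining pipeline")  # e.g. a CI job
```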

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Role: Deep Learning Engineer & Data Scientist
Location: PAN India

Job Description
- Be a hands-on problem solver with a consultative approach who can apply Machine Learning and Deep Learning algorithms to solve business challenges.
- Use knowledge of a wide variety of AI/ML techniques and algorithms to find which combinations best solve the problem.
- Improve model accuracy to deliver greater business impact, and estimate the business impact of deploying a model.
- Work with domain/customer teams to understand the business context and data dictionaries, and apply the relevant Deep Learning solution to the given business challenge.
- Work with tools and scripts for pre-processing data and feature engineering for model development (Python / R / SQL / cloud data pipelines).
- Design, develop, and deploy Deep Learning models using TensorFlow / PyTorch.
- Experience using Deep Learning models with text, speech, image, and video data.
- Design and develop NLP models for text classification, custom entity recognition, relationship extraction, text summarization, topic modeling, reasoning over knowledge graphs, and semantic search, using NLP tools like spaCy and open-source TensorFlow, PyTorch, etc.
- Design and develop image recognition and video analysis models using Deep Learning algorithms and open-source tools like OpenCV.
- Knowledge of state-of-the-art Deep Learning algorithms; optimize and tune models for the best possible accuracy.
- Use visualization tools/modules (e.g., Power BI / Tableau) to explore and analyze outcomes and for model validation.
- Work with application teams to deploy models on the cloud as a service or on-premises.
- Deploy models in a test/control framework for tracking.
- Build CI/CD pipelines for ML model deployment.
- Integrate AI & ML models with other applications using REST APIs and other connector technologies.
- Constantly upskill and stay updated with the latest techniques and best practices.
- Write white papers and create demonstrable assets to summarize the AI/ML work and its impact.

Technology / Subject Matter Expertise
- Sufficient expertise in machine learning and the mathematical and statistical sciences.
- Use of versioning and collaboration tools like Git / GitHub.
- Good understanding of the landscape of AI solutions: cloud, GPU-based compute, data security and privacy, API gateways, microservices-based architecture, big data ingestion, storage and processing, CUDA programming.
- Develop prototype-level ideas into solutions that can scale to industrial-grade strength.
- Ability to quantify and estimate the impact of ML models.

Soft Skills Profile
- Curiosity to think in fresh and unique ways with the intent of breaking new ground.
- Ability to share, explain, and “sell” thoughts, processes, ideas, and opinions, even outside one’s own span of control.
- Ability to think ahead and anticipate what solving the problem will require.
- Ability to communicate key messages effectively and articulate strong opinions in large forums.

Desirable Experience
- Keen contributor to open-source communities and communities like Kaggle.
- Ability to process huge amounts of data using PySpark/Hadoop.
- Development and application of Reinforcement Learning.
- Knowledge of optimization/genetic algorithms.
- Operationalizing Deep Learning models for a customer and understanding the nuances of scaling such models in real scenarios.
- Understanding of stream data processing, RPA, edge computing, AR/VR, etc.
- Appreciation of digital ethics and data privacy.
- Experience with AI & cognitive services platforms like Azure ML, IBM Watson, AWS SageMaker, and Google Cloud is a big plus.
- Experience with platforms like DataRobot, CognitiveScale, H2O.ai, etc. is a big plus.
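To make the text-classification task above concrete, here is a deliberately tiny bag-of-words classifier. It is illustrative only; the role itself calls for TensorFlow / PyTorch models and NLP libraries such as spaCy, and the labels and examples below are invented:

```python
# Toy nearest-class text classifier based on word overlap.

from collections import Counter

def bow(text):
    return Counter(text.lower().split())

def score(text, examples):
    """Total word overlap between the input and a class's examples."""
    words = bow(text)
    return sum(sum((words & bow(ex)).values()) for ex in examples)

def classify(text, labelled):
    """Pick the label whose examples share the most words with the input."""
    return max(labelled, key=lambda label: score(text, labelled[label]))

training = {
    "billing": ["invoice payment overdue", "refund my payment"],
    "technical": ["app crashes on login", "error when loading page"],
}
print(classify("why does the app show an error on login", training))
```

A real model replaces word counts with learned representations, but the shape of the task (text in, label out) is the same.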

Posted 1 week ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


What You’ll Do
- Handle data: pull, clean, and shape structured & unstructured data.
- Manage pipelines: Airflow / Step Functions / ADF… your call.
- Deploy models: build, tune, and push to production on SageMaker, Azure ML, or Vertex AI.
- Scale: Spark / Databricks for the heavy lifting.
- Automate processes: Docker, Kubernetes, CI/CD, MLflow, Seldon, Kubeflow.
- Collaborate effectively: work with engineers, architects, and business professionals to solve real problems promptly.

What You Bring
- 3+ years of hands-on MLOps (4-5 years total software experience).
- Proven experience with one hyperscaler (AWS, Azure, or GCP).
- Confidence with Databricks / Spark, Python, SQL, TensorFlow / PyTorch / Scikit-learn.
- Extensive experience handling and troubleshooting Kubernetes, and proficiency in Dockerfile management.
- Prototyping with open-source tools, selecting the appropriate solution, and ensuring scalability.
- Analytical thinker and team player with a proactive attitude.

Nice-to-Haves
- SageMaker, Azure ML, or Vertex AI in production.
- Dedication to clean code, thorough documentation, and precise pull requests.

Skills: MLflow, MLOps, scikit-learn, Airflow, SQL, PyTorch, ADF, Step Functions, Kubernetes, GCP, Kubeflow, Python, Databricks, TensorFlow, AWS, Azure, Docker, Seldon, Spark
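The core idea behind the pipeline managers named above (Airflow, Step Functions, ADF) is a DAG of tasks executed in dependency order. The standard library's `graphlib` can sketch that ordering; the pipeline stages below are hypothetical, and real orchestrators add scheduling, retries, and state on top:

```python
# Sketch of DAG-based task ordering, the concept underlying workflow
# orchestrators. graphlib ships with Python 3.9+.

from graphlib import TopologicalSorter

# task -> set of tasks it depends on (hypothetical ML pipeline)
dag = {
    "extract": set(),
    "clean": {"extract"},
    "train": {"clean"},
    "evaluate": {"train"},
    "deploy": {"evaluate"},
}

order = list(TopologicalSorter(dag).static_order())
print(order)  # a dependency-respecting execution order
```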

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Our Company

Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

Position Summary

A data scientist cum data engineer with 5+ years of experience and a deep understanding of data analysis, Big Data and Cloud technologies, AI and machine learning, GenAI, and NLP techniques. You will be responsible for developing and building low-level designs per approved tech specs, extracting valuable insights from large datasets, building advanced AI/ML models, and driving data-driven decision-making within the organization or for enterprise customers. This role involves a combination of data analysis, model development, and strategic thinking to solve complex business problems. You will interact with key stakeholders and apply your technical proficiency in AI/ML, Python, R, and algorithms. You will work across different stages of development projects, using data science and technology to provide solutions, and interface directly with enterprise customers for the Adobe Experience Platform.

What You’ll Do
- Interface with Adobe customers to gather requirements, map solutions, and make recommendations.
- Document projects with clear business objectives, data gathering and preparation steps, the final algorithm, and a detailed set of results.
- Apply experience in Natural Language Processing, GenAI, and image processing.
- Support the execution of Data Science solutions to business problems.
- Innovate on new ideas to solve customer needs and assist in creating GTM strategies for new solutions.
- Apply experience in AI/ML modeling, such as propensity models, clustering, regression, etc.
- Develop ETL pipelines involving big data.
- Develop data processing/analytics applications, primarily using PySpark.
- Develop applications on the cloud (AWS/Azure/GCP), mostly using services related to storage, compute, ETL, DWH, analytics, and streaming.
- Clearly understand, and be able to implement, distributed storage, processing, and scalable applications.
- Work with SQL and NoSQL databases; write and analyze SQL, HQL, and other query languages for NoSQL databases.
- Write distributed and scalable data processing code using PySpark, Python, and related libraries.
- Develop applications that consume services exposed as REST APIs.
- Understand and clean datasets, interpret outputs, and make them comprehensible for other teams.
- Collaborate closely with consultants onshore and offshore.
- Create reusable statistical models and modify existing algorithms.
- Report on customer trends and deployment performance, and identify areas we can target using ML/Data Science solutions.

Requirements
- 2+ years of experience and knowledge of Web Analytics or Digital Marketing.
- 5+ years of experience in an AI/ML role, with a focus on building data pipelines for data-intensive analysis.
- 5+ years of experience with Machine Learning and algorithms for classification, clustering, prediction, recommendations, and NLP.
- 5+ years of experience with common Data Science toolkits (e.g., R, Jupyter Notebooks, PySpark).
- 5+ years of enterprise development using Python.
- Strong understanding of GenAI and LLMs.
- Ability to enhance standard algorithms is required.
- Knowledge of, and experience using, Amazon SageMaker, Microsoft Azure ML, or Google AI technologies.
- 5+ years of complex SQL experience.
- 5+ years of data modeling experience.
- Exceptional organizational skills and the ability to multi-task across simultaneous customer projects.
- Strong verbal and written communication skills to lead customers to a successful outcome and explain complex technical concepts to non-technical stakeholders.
- Excellent problem-solving and critical-thinking skills.
- Experience with big data technologies and cloud platforms (e.g., AWS, Azure, Google Cloud).
- Must be self-managed, proactive, and customer focused.
- Degree in Computer Science, Information Systems, or a related field.
- Able to work in teams.

Special consideration given for:
- Experience and knowledge with Adobe Experience Cloud solutions.
- Experience and knowledge with Web Analytics or Digital Marketing.

Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Description Summary

As a Staff Machine Learning Engineer, you will play a crucial role in bridging the gap between data science and production, ensuring the seamless integration and deployment of machine learning models into operational systems. You will be responsible for designing, implementing, and managing the infrastructure and workflows necessary to deploy, monitor, and maintain machine learning models at scale.

GE HealthCare is a leading global medical technology and digital solutions innovator. Our purpose is to create a world where healthcare has no limits. Unlock your ambition, turn ideas into world-changing realities, and join an organization where every voice makes a difference, and every difference builds a healthier world.

Responsibilities
- Model Deployment and Integration: collaborate with data scientists to optimize, package, and deploy machine learning models into production environments efficiently and reliably.
- Infrastructure Design and Maintenance: design, build, and maintain scalable and robust infrastructure for model deployment, monitoring, and management, including containerization, orchestration, and automation of deployment pipelines.
- Continuous Integration/Continuous Deployment (CI/CD): implement and manage CI/CD pipelines for automated model training, testing, and deployment.
- Model Monitoring and Performance Optimization: develop monitoring and alerting systems to track the performance of deployed models and identify anomalies or degradation in real time; implement strategies for model retraining and optimization.
- Data Management and Version Control: establish processes and tools for managing data pipelines, versioning datasets, and tracking changes in model configurations and dependencies.
- Security and Compliance: ensure the security and compliance of deployed models and associated data; implement best practices for data privacy, access control, and regulatory compliance.
- Documentation and Knowledge Sharing: document deployment processes, infrastructure configurations, and best practices; provide guidance and support to other team members on MLOps practices and tools.
- Collaboration and Communication: collaborate effectively with cross-functional teams, including data scientists and business stakeholders; communicate technical concepts and solutions to non-technical audiences.

Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field.
- Excellent programming skills in languages such as Python.
- Experience with machine learning frameworks and libraries (e.g., TensorFlow, PyTorch, scikit-learn).
- Proficiency in cloud platforms such as AWS and Azure, and related services (e.g., AWS SageMaker, Azure ML).
- Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).
- Familiarity with DevOps practices and tools (e.g., Git, Jenkins, Terraform).
- Experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack).
- Familiarity with software engineering principles and best practices (e.g., version control, testing, debugging).
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.
- Ability to work effectively in a fast-paced, dynamic environment.

Preferred Qualifications
- Experience with big data technologies (e.g., Hadoop, Spark).
- Knowledge of microservices architecture and distributed systems.
- Certification in relevant technologies or methodologies (e.g., AWS Certified Machine Learning - Specialty, Certified Kubernetes Administrator).
- Experience with data engineering and ETL processes.
- Understanding of machine learning concepts and algorithms.
- Understanding of Large Language Models (LLMs) and Foundation Models (FMs).
- Certification in machine learning or related fields.

Joining our team as a Staff ML Engineer offers an exciting opportunity to work at the intersection of data science, software engineering, and operations, contributing to the development and deployment of cutting-edge machine learning solutions. If you are passionate about leveraging technology to drive business value and thrive in a collaborative and innovative environment, we encourage you to apply.

GE HealthCare is an Equal Opportunity Employer where inclusion matters. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law. We expect all employees to live and breathe our behaviors: to act with humility and build trust; lead with transparency; deliver with focus; and drive ownership, always with unyielding integrity. Our total rewards are designed to unlock your ambition by giving you the boost and flexibility you need to turn your ideas into world-changing realities. Our salary and benefits are everything you’d expect from an organization with global strength and scale, and you’ll be surrounded by career opportunities in a culture that fosters care, collaboration, and support.

Additional Information
Relocation Assistance Provided: No
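The containerization responsibility above typically starts with packaging the model and its serving code into an image. The Dockerfile below is a hypothetical sketch: the file names, directory layout, and serving script are invented for illustration and are not GE HealthCare's actual setup.

```dockerfile
# Hypothetical image for a model-serving API (illustrative only).
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY model/ ./model/
COPY serve.py .
EXPOSE 8080
CMD ["python", "serve.py"]
```

An image like this is what a Kubernetes deployment would then run and scale, with CI/CD rebuilding it whenever the model or serving code changes.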

Posted 1 week ago

Apply

3.0 - 5.0 years

2 - 10 Lacs

Bengaluru

On-site

OPENTEXT OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation. Your Impact: We are seeking a skilled and experienced Software Engineer with expertise in Large Language Models (LLM), Java,Python, Kubernetes, Helm and cloud technologies like AWS. The ideal candidate will contribute to designing, developing, and maintaining scalable software solutions using microservices architecture. This role offers an exciting opportunity to work with cutting-edge technologies in a collaborative environment. What the role offers: Design, develop, troubleshoot and debug software programs for software enhancements and new products. Integrate Large Language Models (LLMs) into business applications to enhance functionality and user experience. Develop and maintain transformer-based models. Develop RESTful APIs and ensure seamless integration across services. Collaborate with cross-functional teams to gather requirements and translate them into technical solutions. Implement best practices for cloud-native development using AWS services like EC2, Lambda, SageMaker, S3 etc. Deploy, manage, and scale containerized applications using Kubernetes (K8S) and Helm. Designs enhancements, updates, and programming changes for portions and subsystems of application software, utilities, databases, and Internet-related tools. Analyses design and determines coding, programming, and integration activities required based on general objectives and knowledge of overall architecture of product or solution. Collaborates and communicates with management, internal, and outsourced development partners regarding software systems design status, project progress, and issue resolution. 
Represents the software systems engineering team for all phases of larger and more-complex development projects. Ensure system reliability, security, and performance through effective monitoring and troubleshooting. Write clean, efficient, and maintainable code following industry standards. Participate in code reviews, mentorship, and knowledge-sharing within the team. What you need to succeed: Bachelor's or Master's degree in Computer Science, Information Systems, or equivalent. Typically, 3-5 years of experience. Strong understanding of Large Language Models (LLM) and experience applying them in real-world applications. Expertise in Elasticsearch or similar search and indexing technologies. Expertise in designing and implementing microservices architecture. Solid experience with AWS services like EC2, VPC, ECR, EKS, SageMaker etc. for cloud deployment and management. Proficiency in container orchestration tools such as Kubernetes (K8S) and packaging/deployment tools like Helm. Strong problem-solving skills and the ability to troubleshoot complex issues. Strong experience in Java and Python development, with proficiency in frameworks like Spring Boot or Java EE. Should have good hands-on experience in designing and writing modular object-oriented code. Good knowledge of REST APIs, Spring, Spring Boot, Hibernate. Excellent analytical, troubleshooting and problem-solving skills. Ability to demonstrate effective teamwork both within the immediate team and across teams. Experience in working with version control and build tools like Git, GitLab, Maven, Jenkins, and GitLab CI. Excellent communication and collaboration skills. Familiarity with Python for LLM-related tasks. Working knowledge of RAG. Experience working with NLP frameworks such as Hugging Face, OpenAI, or similar. Knowledge of database systems like PostgreSQL, MongoDB, or DynamoDB. Experience with observability tools like Prometheus, Grafana, or ELK Stack. 
Experience in working with event-driven architectures and messaging systems (e.g., Kafka, RabbitMQ). Experience with CI/CD pipelines, DevOps practices, and infrastructure as code (e.g., Terraform, CloudFormation). Familiar with Agile framework/SCRUM development methodologies. One last thing: OpenText is more than just a corporation, it's a global community where trust is foundational, the bar is raised, and outcomes are owned. Join us on our mission to drive positive change through privacy, technology, and collaboration. At OpenText, we don't just have a culture; we have character. Choose us because you want to be part of a company that embraces innovation and empowers its employees to make a difference. OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please contact us at hr@opentext.com. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace.

Posted 1 week ago

Apply

3.0 - 5.0 years

5 - 12 Lacs

Coimbatore

On-site

Job Type: Full-time, Permanent Job mode: On-Site/Hybrid Joining: Open to immediate joiners and candidates available within 1-2 weeks. Sense7AI Data Solutions is seeking a highly skilled and forward-thinking AI/ML Engineer to join our dynamic team. You will play a critical role in designing, developing, and deploying state-of-the-art AI solutions using both classical machine learning and cutting-edge generative AI technologies. The ideal candidate is not only technically proficient but also deeply familiar with modern AI tools, frameworks, and prompt engineering strategies. Key Responsibilities Design, build, and deploy end-to-end AI/ML solutions tailored to real-world business challenges. Leverage the latest advancements in Generative AI, LLMs (e.g., GPT, Claude, LLaMA), and multimodal models for intelligent applications. Develop, fine-tune, and evaluate custom language models using transfer learning and prompt engineering. Work with traditional ML models and deep learning architectures (CNNs, RNNs, Transformers) for diverse applications such as NLP, computer vision, and time-series forecasting. Create and maintain scalable ML pipelines using MLOps best practices. Collaborate with cross-functional teams (data engineers, product managers, business analysts) to understand domain needs and translate them into AI solutions. Stay current on the evolving AI landscape, including open-source tools, academic research, cloud-native AI services, and responsible AI practices. Ensure AI model transparency, fairness, bias mitigation, and compliance with data governance standards. Required Skills & Qualifications Education: Any degree in Computer Science, Artificial Intelligence, Data Science, or a related field. Experience: 3 – 5 years of hands-on experience in AI/ML solution development, model deployment, and experimentation. Technical Proficiency: Programming: Python (strong), familiarity with Bash/CLI, and Git . 
Frameworks: TensorFlow, PyTorch, Hugging Face Transformers, Scikit-learn. GenAI & LLM Tools: LangChain, OpenAI APIs, Anthropic, Vertex AI, PromptLayer, Weights & Biases. Prompt Engineering: Experience crafting, testing, and optimizing prompts for LLMs across multiple platforms. Cloud & MLOps: AWS/GCP/Azure (SageMaker, Vertex AI, Azure ML), Docker, Kubernetes, MLflow. Data: SQL, NoSQL, BigQuery, Spark, Hadoop; data wrangling, cleansing, and feature engineering. Strong grasp of model evaluation techniques, fine-tuning strategies, and A/B testing. Preferred Qualifications Experience with AutoML, reinforcement learning, vector databases (e.g., Milvus, FAISS), or RAG (Retrieval-Augmented Generation). Familiarity with deploying LLMs and GenAI systems in production environments. Hands-on experience with open-source LLMs and fine-tuning (e.g., LLaMA, Mistral, Falcon, OpenLLaMA). Understanding of AI compliance, data privacy, ethical AI, and explainability (XAI). Strong problem-solving skills and the ability to work in fast-paced, evolving tech landscapes. Excellent written and verbal communication in English. Job Types: Full-time, Permanent Pay: ₹500,000.00 - ₹1,200,000.00 per year Benefits: Flexible schedule Health insurance Paid time off Provident Fund Experience: Machine learning: 3 years (Preferred) AI: 3 years (Preferred) Work Location: In person
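The RAG systems this posting asks about retrieve context by embedding similarity before calling the LLM. As a toy illustration with hand-made vectors (a real system would use an embedding model and a vector store such as FAISS or Milvus, both named above; the documents and numbers here are invented):

```python
import numpy as np

# Toy, hand-made document "embeddings" -- in a real RAG system these come
# from an embedding model, and lookup runs against a vector database.
docs = {
    "refund policy": np.array([0.9, 0.1, 0.0]),
    "shipping times": np.array([0.1, 0.8, 0.2]),
    "account setup": np.array([0.0, 0.2, 0.9]),
}

def cosine(a, b):
    # Cosine similarity: dot product over the product of magnitudes.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec, k=1):
    # Rank documents by similarity to the query embedding; the top-k
    # passages would then be placed into the LLM prompt as context.
    ranked = sorted(docs, key=lambda name: cosine(query_vec, docs[name]), reverse=True)
    return ranked[:k]

# A query embedding that points near "refund policy".
print(retrieve(np.array([0.85, 0.15, 0.05])))  # ['refund policy']
```

The same ranking step underlies vector-database queries; production systems add chunking, approximate nearest-neighbor indexes, and prompt assembly on top.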

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

On-site

Agnito Technologies is Hiring: Machine Learning Ops Engineer (MLOps) Location: Bhopal (Work From Office) Vacancy: 1 Experience: 5+ years managing ML pipelines in production Package: No bar for the right candidate Key Responsibilities: Design, implement, and manage end-to-end machine learning pipelines in production environments Automate model deployment workflows using CI/CD tools Containerize ML models and services with Docker and orchestrate them using Kubernetes Work with platforms like MLflow, Kubeflow, and Seldon for experiment tracking and model management Deploy and monitor models in AWS SageMaker or similar cloud environments Collaborate with data scientists, DevOps, and software engineers to ensure smooth production rollouts Key Skills: MLflow, Kubeflow, Seldon Docker, Kubernetes CI/CD for ML models AWS SageMaker or equivalent cloud ML platforms Strong understanding of MLOps principles and real-time production deployment Eligibility: 5+ years of hands-on experience in managing ML pipelines and deployments Proven experience in MLOps tools and practices in production environments Job Type: Full-time Schedule: Day shift Work Location: In person
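The "automate model deployment workflows using CI/CD tools" responsibility above usually includes a promotion gate before a model reaches production. A minimal, framework-agnostic sketch (the metric names and thresholds are hypothetical, not Agnito's actual criteria):

```python
def should_promote(candidate: dict, production: dict,
                   min_accuracy: float = 0.80,
                   max_regression: float = 0.01) -> bool:
    """CI/CD promotion gate: deploy the candidate model only if it clears
    an absolute accuracy floor and does not regress materially against
    the model currently serving in production."""
    cand_acc = candidate["accuracy"]
    prod_acc = production.get("accuracy", 0.0)
    return cand_acc >= min_accuracy and (prod_acc - cand_acc) <= max_regression

# Candidate beats production -> the pipeline proceeds to deployment.
print(should_promote({"accuracy": 0.86}, {"accuracy": 0.85}))  # True
# Candidate misses the accuracy floor -> the pipeline stops and alerts.
print(should_promote({"accuracy": 0.70}, {"accuracy": 0.85}))  # False
```

In practice the metrics would be read from an experiment tracker such as MLflow, and a passing gate would trigger the Docker build and Kubernetes or SageMaker rollout described above.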

Posted 1 week ago

Apply

4.0 years

0 Lacs

India

Remote


Job Title: Data Scientist Location: Remote Job Type: Full-Time | Permanent Experience Required: 4+ Years About the Role: We are looking for a highly motivated and analytical Data Scientist with 4 years of industry experience to join our data team. The ideal candidate will have a strong background in Python, SQL, and experience deploying machine learning models using AWS SageMaker. You will be responsible for solving complex business problems with data-driven solutions, developing models, and helping scale machine learning systems into production environments. Key Responsibilities: Model Development: Design, develop, and validate machine learning models for classification, regression, and clustering tasks. Work with structured and unstructured data to extract actionable insights and drive business outcomes. Deployment & MLOps: Deploy machine learning models using AWS SageMaker, including model training, tuning, hosting, and monitoring. Build reusable pipelines for model deployment, automation, and performance tracking. Data Exploration & Feature Engineering: Perform data wrangling, preprocessing, and feature engineering using Python and SQL. Conduct EDA (exploratory data analysis) to identify patterns and anomalies. Collaboration: Work closely with data engineers, product managers, and business stakeholders to define data problems and deliver scalable solutions. Present model results and insights to both technical and non-technical audiences. Continuous Improvement: Stay updated on the latest advancements in machine learning, AI, and cloud technologies. Suggest and implement best practices for experimentation, model governance, and documentation. Required Skills & Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, or related field. 4+ years of hands-on experience in data science, machine learning, or applied AI roles. Proficiency in Python for data analysis, model development, and scripting. 
Strong SQL skills for querying and manipulating large datasets. Hands-on experience with AWS SageMaker, including model training, deployment, and monitoring. Solid understanding of machine learning algorithms and techniques (supervised/unsupervised). Familiarity with libraries such as Pandas, NumPy, Scikit-learn, Matplotlib, and Seaborn.
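The model-development loop this role describes (train on a split, evaluate on held-out data) can be sketched in a few lines of scikit-learn on synthetic data; this is an illustrative example, not the team's actual pipeline:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real business data.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train a classifier and evaluate it on the held-out split.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"hold-out accuracy: {acc:.2f}")
# In a SageMaker workflow, a model like this would next be packaged,
# registered, and deployed to a hosted endpoint for monitoring.
```

The same fit/evaluate pattern carries over when the training step runs as a SageMaker training job instead of a local script.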

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


What is Blend Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients. For more information, visit www.blend360.com. What is the Role We are seeking a highly skilled MLOps Engineer with 5 years of overall experience, including 3 years as an ML Engineer, particularly in building and managing ML pipelines in AWS. The ideal candidate has successfully built and deployed at least two MLOps projects using Amazon SageMaker or similar services, with a strong foundation in infrastructure as code and a keen understanding of MLOps best practices. What you’ll be doing? Maintain and enhance existing ML pipelines in AWS with a focus on Infrastructure as Code using CloudFormation. Implement minimal but essential pipeline extensions to support ongoing data science workstreams. Document infrastructure usage, architecture, and design using tools like Confluence, GitHub Wikis, and system diagrams. Act as the internal infrastructure expert, collaborating with data scientists to guide and support model deployments. Research and implement optimization strategies for ML workflows and infrastructure. Work independently and collaboratively with cross-functional teams to support ML products. What do we need from you? 5+ years of hands-on DevOps experience with AWS Cloud. Proven experience with at least two MLOps projects deployed using SageMaker or similar AWS services. Strong proficiency in AWS services: SageMaker, ECR, S3, Lambda, Step Functions. 
Expertise in Infrastructure as Code using CloudFormation for dev/test/prod environments. Solid understanding of MLOps best practices and Data Science principles. Proficient in Python for scripting and automation. Experience building and managing Docker images. Hands-on experience with Git-based version control systems such as AWS CodeCommit or GitHub, including GitHub Actions for CI/CD pipelines. What do you get in return? Competitive Salary: Your skills and contributions are highly valued here, and we make sure your salary reflects that, rewarding you fairly for the knowledge and experience you bring to the table. Dynamic Career Growth: Our vibrant environment offers you the opportunity to grow rapidly, providing the right tools, mentorship, and experiences to fast-track your career. Idea Tanks: Innovation lives here. Our "Idea Tanks" are your playground to pitch, experiment, and collaborate on ideas that can shape the future. Growth Chats: Dive into our casual "Growth Chats" where you can learn from the best whether it's over lunch or during a laid-back session with peers, it's the perfect space to grow your skills. Snack Zone: Stay fueled and inspired! In our Snack Zone, you'll find a variety of snacks to keep your energy high and ideas flowing. Recognition & Rewards: We believe great work deserves to be recognized. Expect regular Hive-Fives, shoutouts and the chance to see your ideas come to life as part of our reward program. Fuel Your Growth Journey with Certifications: We’re all about your growth groove! Level up your skills with our support as we cover the cost of your certifications.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About Us All people need connectivity. The Rakuten Group is reinventing telecom by greatly reducing cost, rewarding big users not penalizing them, empowering more people and leading the human-centric AI future. The mission is to connect everybody and enable all to be. Rakuten. Telecom Invented. Job Description Job Title: Senior AI/ML Engineer (Generative AI & ML Ops) Minimum 8+ years of experience in AI/ML development. Location: Bangalore Required Skills And Expertise AI/ML Strategy and Leadership: Define the AI/ML strategy and roadmap aligned with the product vision. Identify and prioritize AI/ML use cases, including classical ML, Generative AI, and Agentic AI, relevant to our product offerings. Build and lead a high-performing AI/ML team by mentoring and upskilling existing non-AI/ML team members. Stay updated with the latest advancements in AI/ML, generative AI, Agentic AI and ML Ops, and apply them to solve business problems. AI/ML Development: Proficient in Python programming and libraries like NumPy, Pandas, Scikit-learn, and Matplotlib. Strong understanding of machine learning algorithms, deep learning architectures, and generative AI models. Design, develop, and deploy classical machine learning models, including supervised, unsupervised, and reinforcement learning techniques. Hands-on experience with AI/ML frameworks like Scikit-learn, XGBoost, TensorFlow, PyTorch, Keras, Hugging Face, OpenAI APIs (e.g., GPT models). Experience with feature engineering, model evaluation, and hyperparameter tuning. Experience with LangChain modules, including Chains, Memory, Tools, and Agents. Build and fine-tune generative AI models (e.g., GPT, DALL-E, Stable Diffusion) for specific use cases. Must have experience in any Agentic AI framework. Leverage LLMs (e.g., GPT, Claude, LLaMA) and multi-modal models to build intelligent agents that can interact with users and systems. 
Design and develop autonomous AI agents capable of reasoning, planning, and executing tasks in dynamic environments. Implement prompt engineering and fine-tuning of LLMs to optimize agent behaviour for specific tasks. Optimize models for performance, scalability, and cost-efficiency. Job Requirement MLOps And DevOps For AI/ML Establish and maintain an end-to-end MLOps pipeline for model development, deployment, monitoring, and retraining. Automate model training, testing, and deployment workflows using CI/CD pipelines. Implement robust version control for datasets, models, and code. Monitor model performance in production and implement feedback loops for continuous improvement. Proficiency in MLOps tools and platforms such as MLflow, Kubeflow, TFX, SageMaker. Familiarity with CI/CD tools like Jenkins, GitHub Actions, or GitLab CI for AI/ML workflows. Expertise in deploying models on cloud platforms (AWS, Azure, GCP).

Posted 1 week ago

Apply

5.0 years

0 - 0 Lacs

India

On-site

Role Overview We’re looking for a hands-on Gen AI Architect with deep expertise in AWS Bedrock Agents and RAG chatbots to lead our AI development team in India. This is an opportunity to be among the first technical leaders at ncyclo.ai, contributing to our growth into a billion-dollar company. Key Responsibilities Solution Architecture Architect and implement AWS Bedrock Agent-based solutions, including agent orchestration, multi-turn interactions, and secure enterprise integration. Design and build RAG chatbots leveraging AWS Bedrock, LangChain, and vector stores for high-quality, contextual responses. Hands-on Development Develop production-ready Gen AI solutions, including prompt engineering, agent workflows, retrieval pipelines, and LLM-based generative components. Implement AWS-native solutions integrating Bedrock, Lambda, API Gateway, DynamoDB, S3, and related services. Classic ML Expertise Design and deploy classic ML models using AWS SageMaker. Build and manage ML pipelines with proper data preprocessing, model training, evaluation, and deployment. Technical Leadership Lead a team of AI engineers and data scientists, driving technical excellence in Gen AI and ML delivery. Define and enforce MLOps, CI/CD, and model governance frameworks for secure and scalable deployments. Client Engagement Collaborate with US and global clients to understand business requirements and translate them into technical architectures and actionable plans. Support pre-sales activities, including solution demos, technical proposals, and effort estimation. Required Qualifications Education: Bachelor’s degree in Computer Science, Data Science, Engineering, or a related field. Experience: 5+ years of AI/ML experience, with at least 2+ years in Generative AI. Proven expertise in AWS Bedrock Agents—designing and deploying production-grade agent-based solutions. Hands-on experience with RAG architectures, vector databases, embeddings, LangChain, and retrieval pipelines. 
Strong background in classic ML. Skills: Proficiency in Python (PyTorch/TensorFlow, scikit-learn). Expertise in prompt engineering, vector databases (e.g., Pinecone, FAISS, AWS OpenSearch), and embeddings. Deep knowledge of AWS services (SageMaker, Lambda, API Gateway, DynamoDB, S3, IAM, CloudFormation). Soft Skills: Strong communication skills for client engagement. Leadership and mentoring experience. Self-starter with a collaborative, problem-solving mindset. Preferred Qualifications AWS Certifications (e.g., AWS Certified Machine Learning – Specialty, AWS Solutions Architect). Experience with MLOps frameworks and CI/CD pipelines on AWS. Knowledge of multi-tenant AI architectures and secure AWS deployments. Job Types: Full-time, Permanent Pay: ₹15,000.00 - ₹20,000.00 per month Benefits: Health insurance Leave encashment Provident Fund Schedule: Day shift Work Location: On the road

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


About the Role We are seeking a highly skilled and versatile Senior AI Engineer with over 5 years of hands-on experience to join our client’s team in Pune. This role focuses on designing, developing, and deploying cutting-edge AI and machine learning solutions for high-scale, high-concurrency applications where security, scalability, and performance are paramount. You will work closely with cross-functional teams, including data scientists, DevOps engineers, security specialists, and business stakeholders, to deliver robust AI solutions that drive measurable business impact in dynamic, large-scale environments. Key Responsibilities Architect, develop, and deploy advanced machine learning and deep learning models across domains like NLP, computer vision, predictive analytics, or reinforcement learning, ensuring scalability and performance under high-traffic conditions. Preprocess, clean, and analyze large-scale structured and unstructured datasets using advanced statistical, ML, and big data techniques. Collaborate with data engineering and DevOps teams to integrate AI/ML models into production-grade pipelines, ensuring seamless operation under high concurrency. Optimize models for latency, throughput, accuracy, and resource efficiency, leveraging distributed computing and parallel processing where necessary. Implement robust security measures, including data encryption, secure model deployment, and adherence to compliance standards (e.g., GDPR, CCPA). Partner with client-side technical teams to translate complex business requirements into scalable, secure AI-driven solutions. Stay at the forefront of AI/ML advancements, experimenting with emerging tools, frameworks, and techniques (e.g., generative AI, federated learning, or AutoML). Write clean, modular, and maintainable code, along with comprehensive documentation and reports for model explainability, reproducibility, and auditability. 
Proactively monitor and maintain deployed models, ensuring reliability and performance in production environments with millions of concurrent users. Required Qualifications Bachelor’s or Master’s degree in Computer Science, Machine Learning, Data Science, or a related technical field. 3 to 5 years of experience building and deploying AI/ML models in production environments with high-scale traffic and concurrency. Advanced proficiency in Python and modern AI/ML frameworks, including TensorFlow, PyTorch, Scikit-learn, and JAX. Hands-on expertise in at least two of the following domains: NLP, computer vision, time-series forecasting, or generative AI. Deep understanding of the end-to-end ML lifecycle, including data preprocessing, feature engineering, hyperparameter tuning, model evaluation, and deployment. Proven experience with cloud platforms (AWS, GCP, or Azure) and their AI/ML services (e.g., SageMaker, Vertex AI, or Azure ML). Strong knowledge of containerization (Docker, Kubernetes) and RESTful API development for secure and scalable model deployment. Familiarity with secure coding practices, data privacy regulations, and techniques for safeguarding AI systems against adversarial attacks. Preferred Skills Expertise in MLOps frameworks and tools such as MLflow, Kubeflow, or SageMaker for streamlined model lifecycle management. Hands-on experience with large language models (LLMs) or generative AI frameworks (e.g., Hugging Face Transformers, LangChain, or Llama). Proficiency in big data technologies and orchestration tools (e.g., Apache Spark, Airflow, or Kafka) for handling massive datasets and real-time pipelines. Experience with distributed training techniques (e.g., Horovod, Ray, or TensorFlow Distributed) for large-scale model development. Knowledge of CI/CD pipelines and infrastructure-as-code tools (e.g., Terraform, Ansible) for scalable and automated deployments. 
Familiarity with security frameworks and tools for AI systems, such as model hardening, differential privacy, or encrypted computation. Proven ability to work in global, client-facing roles, with strong communication skills to bridge technical and business teams.

Posted 1 week ago

Apply

0.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Company Description BRAINWONDERS About Company: At Brainwonders, we are proud to be India’s largest career counselling and guidance company, recognized for our commitment to transforming students' futures. With 1223+ educational institutes using our services, 93+ corporate connections, and 108+ franchisees, we have built an expansive network of support for students, educators, and professionals. Brainwonders has earned numerous national and regional awards for excellence in career counselling and guidance, and is consistently rated as one of the highest-paying employers in the counselling industry by various job portals. Job Description AI Engineer Intern – LLMs, Agents, AWS Position: AI Engineer Intern Experience: 0-1 Year (Final-year students or freshers welcome) Location: Borivali East, Mumbai Type: Internship (3–6 months) | Possibility of Full-time Offer Focus Areas: Large Language Models (LLMs) AI Agents / Autonomous Systems ML Fundamentals & Transformers Cloud Deployment & Scaling (AWS) Responsibilities: Build and test LLM-based applications and workflows. Experiment with agent frameworks (e.g., LangGraph, CrewAI). Use and adapt transformer models (e.g., LLaMA, Mistral) for domain tasks. Deploy models and services on AWS (Lambda, EC2, SageMaker). Work with vector stores and embeddings for search or RAG-based systems. Collaborate on prompt engineering, fine-tuning, and pipeline optimization. ✅ Requirements: Solid understanding of LLMs, tokenization, transformers. Python programming (Pandas, NumPy, Hugging Face, LangChain, etc.). Exposure to AWS services like EC2, Lambda, S3, or SageMaker. Eagerness to explore real-world AI use cases (agents, copilots, RAG). Basic ML knowledge and hands-on project experience. Good to know: Experience with serverless deployments on AWS. Familiarity with monitoring and logging (e.g., CloudWatch, OpenTelemetry). Contributions to open-source AI tools or models.

Posted 1 week ago

Apply

2.0 - 7.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Description In this Role, Your Responsibilities Will Be: Determine coding design requirements from functional and detailed specifications. Analyze software bugs and effect code repairs. Design, develop, and deliver specified software features. Produce usable documentation and test procedures. Deal directly with the end clients to assist in software validation and deployment. Explore and evaluate opportunities to integrate AI/ML capabilities into the LMS suite, particularly for predictive analytics, optimization, and automation. Who You Are: You quickly and decisively act in constantly evolving, unexpected situations. You adjust communication content and style to meet the needs of diverse partners. You always keep the end in sight; put in extra effort to meet deadlines. You analyze multiple and diverse sources of information to define problems accurately before moving to solutions. You observe situational and group dynamics and select the best-fit approach. For This Role, You Will Need: BS in Computer Science, Engineering, Mathematics, or technical equivalent. 2 to 7 years of experience required. Strong problem-solving skills. Strong programming skills (.NET stack, C#, ASP.NET, web development technologies, HTML5, JavaScript, WCF, MS SQL Server Transact-SQL). Strong communication skills (client facing). Flexibility to work harmoniously with a small development team. Familiarity with AI/ML concepts and techniques, including traditional machine learning algorithms (e.g., regression, classification, clustering) and modern Large Language Models (LLMs). Experience with machine learning libraries and frameworks (e.g., TensorFlow, PyTorch, scikit-learn). Experience in developing and deploying machine learning models. Understanding of data preprocessing, feature engineering, and model evaluation techniques. Preferred Qualifications that Set You Apart: Experience with liquid pipeline operations or volumetric accounting is a plus. Knowledge of the oil and gas pipeline industry is also a plus. 
Experience with cloud-based AI/ML services (e.g., Azure Machine Learning, AWS SageMaker, Google Cloud AI Platform) is a plus. Our Culture & Commitment to You At Emerson, we prioritize a workplace where every employee is valued, respected, and empowered to grow. We foster an environment that encourages innovation, collaboration, and diverse perspectives—because we know that great ideas come from great teams. Our commitment to ongoing career development and growing an inclusive culture ensures you have the support to thrive. Whether through mentorship, training, or leadership opportunities, we invest in your success so you can make a lasting impact. We believe diverse teams, working together are key to driving growth and delivering business results. We recognize the importance of employee wellbeing. We prioritize providing competitive benefits plans, a variety of medical insurance plans, Employee Assistance Program, employee resource groups, recognition, and much more. Our culture offers flexible time off plans, including paid parental leave (maternal and paternal), vacation and holiday leave. About Us WHY EMERSON Our Commitment to Our People At Emerson, we are motivated by a spirit of collaboration that helps our diverse, multicultural teams across the world drive innovation that makes the world healthier, safer, smarter, and more sustainable. And we want you to join us in our bold aspiration. We have built an engaged community of inquisitive, dedicated people who thrive knowing they are welcomed, trusted, celebrated, and empowered to solve the world’s most complex problems — for our customers, our communities, and the planet. You’ll contribute to this vital work while further developing your skills through our award-winning employee development programs. We are a proud corporate citizen in every city where we operate and are committed to our people, our communities, and the world at large. 
We take this responsibility seriously and strive to make a positive impact through every endeavor. At Emerson, you’ll see firsthand that our people are at the center of everything we do. So, let’s go. Let’s think differently. Learn, collaborate, and grow. Seek opportunity. Push boundaries. Be empowered to make things better. Speed up to break through. Let’s go, together. Accessibility Assistance or Accommodation If you have a disability and are having difficulty accessing or using this website to apply for a position, please contact: idisability.administrator@emerson.com . About Emerson Emerson is a global leader in automation technology and software. Through our deep domain expertise and legacy of flawless execution, Emerson helps customers in critical industries like life sciences, energy, power and renewables, chemical and advanced factory automation operate more sustainably while improving productivity, energy security and reliability. With global operations and a comprehensive portfolio of software and technology, we are helping companies implement digital transformation to measurably improve their operations, conserve valuable resources and enhance their safety. We offer equitable opportunities, celebrate diversity, and embrace challenges with confidence that, together, we can make an impact across a broad spectrum of countries and industries. Whether you’re an established professional looking for a career change, an undergraduate student exploring possibilities, or a recent graduate with an advanced degree, you’ll find your chance to make a difference with Emerson. Join our team – let’s go! No calls or agencies please. Show more Show less
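The traditional techniques this posting names (regression, model evaluation) can be shown in miniature. The sketch below is an illustration only, using made-up data and no external libraries: it fits an ordinary least-squares line and scores it with R²; production work would of course use scikit-learn or a similar framework, as the listing suggests.

```python
# Minimal ordinary least squares fit for y = a*x + b, pure Python.
def fit_linear(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var            # slope
    b = mean_y - a * mean_x  # intercept
    return a, b

def r_squared(xs, ys, a, b):
    # Fraction of variance in ys explained by the fitted line.
    mean_y = sum(ys) / len(ys)
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]  # roughly y = 2x
a, b = fit_linear(xs, ys)
print(round(a, 2))  # slope close to 2 for this data
```

The same fit-then-score loop is what "model evaluation techniques" boils down to at any scale: hold data aside, fit, measure.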

Posted 1 week ago


12.0 years

0 Lacs

Pune, Maharashtra, India

On-site


We are seeking an experienced DevOps/AIOps Architect to design, architect, and implement an AI-driven operations solution that integrates various cloud-native services across AWS, Azure, and cloud-agnostic environments. The AIOps platform will be used for end-to-end machine learning lifecycle management, automated incident detection, and root cause analysis (RCA). The architect will lead efforts in developing a scalable solution utilizing data lakes, event streaming pipelines, ChatOps integration, and model deployment services. This platform will enable real-time intelligent operations in hybrid cloud and multi-cloud setups.

Responsibilities
- Assist in the implementation and maintenance of cloud infrastructure and services
- Contribute to the development and deployment of automation tools for cloud operations
- Participate in monitoring and optimizing cloud resources using AIOps and MLOps techniques
- Collaborate with cross-functional teams to troubleshoot and resolve cloud infrastructure issues
- Support the design and implementation of scalable and reliable cloud architectures
- Conduct research and evaluation of new cloud technologies and tools
- Work on continuous improvement initiatives to enhance cloud operations efficiency and performance
- Document cloud infrastructure configurations, processes, and procedures
- Adhere to security best practices and compliance requirements in cloud operations

Requirements
- Bachelor's Degree in Computer Science, Engineering, or a related field
- 12+ years of experience in DevOps, AIOps, or Cloud Architecture roles
- Hands-on experience with AWS services such as SageMaker, S3, Glue, Kinesis, ECS, EKS
- Strong experience with Azure services such as Azure Machine Learning, Blob Storage, Azure Event Hubs, Azure AKS
- Strong experience with Infrastructure as Code (IaC): Terraform, CloudFormation
- Proficiency in container orchestration (e.g., Kubernetes) and experience with multi-cloud environments
- Experience with machine learning model training, deployment, and data management across cloud-native and cloud-agnostic environments
- Expertise in implementing ChatOps solutions using platforms like Microsoft Teams and Slack, and integrating them with AIOps automation
- Familiarity with data lake architectures, data pipelines, and inference pipelines using event-driven architectures
- Strong programming skills in Python for rule management, automation, and integration with cloud services

Nice to have
- Any certifications in the AI/ML/Gen AI space
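The "rule management" and automated incident detection this role describes can be sketched in miniature. The example below is a toy, with a hypothetical event schema (`service`, `ts`, `error_rate` fields are invented for illustration): it raises an incident when an error-rate metric breaches a threshold for several consecutive events, the kind of rule an event-driven AIOps pipeline might run over a Kinesis or Event Hubs stream.

```python
# Toy AIOps-style rule: raise an incident when error_rate exceeds a
# threshold for N consecutive metric events. Hypothetical event schema.
def detect_incidents(events, threshold=0.05, consecutive=3):
    incidents, streak = [], 0
    for event in events:
        if event["error_rate"] > threshold:
            streak += 1
            if streak == consecutive:  # fire once per sustained breach
                incidents.append({"service": event["service"],
                                  "ts": event["ts"]})
        else:
            streak = 0  # a healthy reading resets the window
    return incidents

stream = [
    {"service": "api", "ts": 1, "error_rate": 0.01},
    {"service": "api", "ts": 2, "error_rate": 0.08},
    {"service": "api", "ts": 3, "error_rate": 0.09},
    {"service": "api", "ts": 4, "error_rate": 0.12},  # third breach in a row
    {"service": "api", "ts": 5, "error_rate": 0.02},
]
print(detect_incidents(stream))  # → [{'service': 'api', 'ts': 4}]
```

Requiring consecutive breaches rather than a single spike is a common design choice to avoid paging on transient noise; a production system would attach RCA context and route the incident into ChatOps.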

Posted 1 week ago


8.0 - 12.0 years

0 Lacs

Pune, Maharashtra, India

Remote


Staff AI Engineer - MLOps

About The Team
The AI Center of Excellence team includes Data Scientists and AI Engineers who work together to conduct research, build prototypes, design features, and build production AI components and systems. Our mission is to leverage the best available technology to protect our customers' attack surfaces. We partner closely with Detection and Response teams, including our MDR service, to leverage AI/ML for enhanced customer security and threat detection. We operate with a creative, iterative approach, building on 20+ years of threat analysis and a growing patent portfolio. We foster a collaborative environment, sharing knowledge, developing internal learning, and encouraging research publication. If you're passionate about AI and want to make a major impact in a fast-paced, innovative environment, this is your opportunity.

The Technologies We Use Include
- AWS for hosting our research environments, data, and features (e.g., SageMaker, Bedrock)
- EKS to deploy applications
- Terraform to manage infrastructure
- Python for analysis and modeling, taking advantage of NumPy and pandas for data wrangling
- Jupyter notebooks (locally and remotely hosted) as a computational environment
- scikit-learn for building machine learning models
- Anomaly detection methods to make sense of unlabeled data

About The Role
Rapid7 is seeking a Staff AI Engineer to join our team as we expand and evolve our growing AI and MLOps efforts. You should have a strong foundation in software engineering and in MLOps and DevOps systems and tools. Further, you'll have a demonstrated track record of taking models created in the AI R&D process to production with repeatable deployment, monitoring, and observability patterns. In this intersectional role, you will combine your expertise in AI/ML deployments, cloud systems, and software engineering to enhance our product offerings and streamline our platform's functionalities.

In This Role, You Will
- Design and build ML production systems, including project scoping, data requirements, modeling strategies, and deployment
- Develop and maintain data pipelines, manage the data lifecycle, and ensure data quality and consistency throughout
- Assure robust implementation of ML guardrails and manage all aspects of service monitoring
- Develop and deploy accessible endpoints, including web applications and REST APIs, while maintaining steadfast data privacy and adherence to security best practices and regulations
- Share expertise and knowledge consistently with internal and external stakeholders, nurturing a collaborative environment and fostering the development of junior engineers
- Embrace agile development practices, valuing constant iteration, improvement, and effective problem-solving in complex and ambiguous scenarios

The Skills You'll Bring Include
- 8-12 years of experience as a Software Engineer, with at least 3 years focused on gaining expertise in ML deployment (especially in AWS)
- Solid technical experience in the following is required:
  - Software engineering: developing APIs with Flask or FastAPI, paired with strong Python knowledge
  - DevOps and MLOps: designing and integrating scalable AI/ML systems into production environments, CI/CD tooling, Docker, Kubernetes, cloud AI resource utilization and management
  - Pipelines, monitoring, and observability: data pre-processing and feature engineering, model monitoring and evaluation
- A growth mindset, welcoming the challenge of tackling complex problems with a bias for action
- Strong written and verbal communication skills: able to effectively communicate technical concepts to diverse audiences and create clear documentation of system architectures and implementation details
- Proven ability to collaborate effectively across engineering, data science, product, and other teams to drive successful MLOps initiatives and ensure alignment on goals and deliverables

Experience With The Following Would Be Advantageous
- Java programming
- The security industry
- AI and ML models, understanding their operational frameworks and limitations
- Familiarity with resources that enable data scientists to fine-tune and experiment with LLMs
- Knowledge of or experience with model risk management strategies, including model registries, concept/covariate drift monitoring, and hyperparameter tuning

We know that the best ideas and solutions come from multi-dimensional teams. That's because these teams reflect a variety of backgrounds and professional experiences. If you are excited about this role and feel your experience can make an impact, please don't be shy - apply today.

About Rapid7
At Rapid7, we are on a mission to create a secure digital world for our customers, our industry, and our communities. We do this by embracing tenacity, passion, and collaboration to challenge what's possible and drive extraordinary impact. Here, we're building a dynamic workplace where everyone can have the career experience of a lifetime. We challenge ourselves to grow to our full potential. We learn from our missteps and celebrate our victories. We come to work every day to push boundaries in cybersecurity and keep our 10,000 global customers ahead of whatever's next. Join us and bring your unique experiences and perspectives to tackle some of the world's biggest security challenges.
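The listing mentions anomaly detection over unlabeled data. A minimal sketch of the idea, using invented numbers and a plain z-score rule (real detection work would use more robust statistics or scikit-learn estimators such as isolation forests):

```python
# Flag values whose z-score (distance from the mean in standard
# deviations) exceeds a threshold. No labels needed.
import math

def zscore_anomalies(values, z_threshold=2.0):
    n = len(values)
    mean = sum(values) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [i for i, v in enumerate(values)
            if std > 0 and abs(v - mean) / std > z_threshold]

data = [10.1, 9.8, 10.0, 10.2, 9.9, 45.0, 10.05]  # one obvious outlier
print(zscore_anomalies(data))  # → [5]
```

One caveat worth noting: a large outlier inflates both the mean and the standard deviation, which is why the threshold here is 2.0 rather than the textbook 3.0; median-based variants (MAD) are less sensitive to this.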

Posted 1 week ago


3.0 - 4.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote


Job Title: Data Scientist – Computer Vision & Generative AI
Location: Mumbai
Experience Level: 3 to 4 years
Employment Type: Full-time
Industry: Renewable Energy / Solar Services

Job Overview:
We are seeking a talented and motivated Data Scientist with a strong focus on computer vision, generative AI, and machine learning to join our growing team in the solar services sector. You will play a pivotal role in building AI-driven solutions that transform how solar infrastructure is analyzed, monitored, and optimized using image-based intelligence. From drone and satellite imagery to on-ground inspection photos, your work will enable intelligent automation, predictive analytics, and visual understanding in critical areas like fault detection, panel degradation, site monitoring, and more. If you're passionate about working at the cutting edge of AI for real-world sustainability impact, we'd love to hear from you.

Key Responsibilities:
- Design, develop, and deploy computer vision models for tasks such as object detection, classification, segmentation, and anomaly detection
- Work with generative AI techniques (e.g., GANs, diffusion models) to simulate environmental conditions, enhance datasets, or create synthetic training data
- Build ML pipelines for end-to-end model training, validation, and deployment using Python and modern ML frameworks
- Analyze drone, satellite, and on-site images to extract meaningful insights for solar panel performance, wear-and-tear detection, and layout optimization
- Collaborate with cross-functional teams (engineering, field ops, product) to understand business needs and translate them into scalable AI solutions
- Continuously experiment with the latest models, frameworks, and techniques to improve model performance and robustness
- Optimize image pipelines for performance, scalability, and edge/cloud deployment

Key Requirements:
- 3-4 years of hands-on experience in data science, with a strong portfolio of computer vision and ML projects
- Proven expertise in Python and common data science libraries: NumPy, Pandas, scikit-learn, etc.
- Proficiency with image-based AI frameworks: OpenCV, PyTorch or TensorFlow, Detectron2, YOLOv5/v8, MMDetection, etc.
- Experience with generative AI models like GANs, Stable Diffusion, or ControlNet for image generation or augmentation
- Experience building and deploying ML models using MLflow, TorchServe, or TensorFlow Serving
- Familiarity with image annotation tools (e.g., CVAT, Labelbox) and data versioning tools (e.g., DVC)
- Experience with cloud platforms (AWS, GCP, or Azure) for storage, training, or model deployment
- Experience with Docker, Git, and CI/CD pipelines for reproducible ML workflows
- Ability to write clean, modular code and a solid understanding of software engineering best practices in AI/ML projects
- Strong problem-solving skills, curiosity, and ability to work independently in a fast-paced environment

Bonus / Preferred Skills:
- Experience with remote sensing and working with satellite or drone imagery
- Exposure to MLOps practices and tools like Kubeflow, Airflow, or SageMaker Pipelines
- Knowledge of solar technologies, photovoltaic systems, or renewable energy is a plus
- Familiarity with edge computing for vision applications on IoT devices or drones

Application Instructions:
Please submit your resume, portfolio (GitHub, blog, or project links), and a short cover letter explaining why you're interested in this role to khushboo.b@solarsquare.in or sidhant.c@solarsquare.in
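Object-detection work of the kind described above leans constantly on intersection-over-union (IoU) to match predicted boxes against ground truth. A minimal pure-Python sketch, with boxes written as (x1, y1, x2, y2) and the coordinate values invented for illustration:

```python
# IoU of two axis-aligned boxes given as (x1, y1, x2, y2).
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero area if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two 10x10 boxes overlapping in a 5x5 corner: 25 / (100 + 100 - 25).
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```

The same function underpins both evaluation (mAP at an IoU cutoff, typically 0.5) and non-maximum suppression inside detectors like the YOLO family named in the requirements.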

Posted 1 week ago


12.0 years

0 Lacs

Mysuru, Karnataka, India

On-site


About iSOCRATES
Since 2015, iSOCRATES advises on, builds, and manages mission-critical Marketing, Advertising, and Data technologies, platforms, and processes as the Global Leader in MADTECH Resource Planning and Execution(TM). iSOCRATES delivers globally proven, reliable, and affordable Strategy and Operations Consulting and Managed Services for marketers, agencies, publishers, and the data/tech providers that enable them. iSOCRATES is staffed 24/7/365 with its proven specialists who save partners money and time and achieve transparent, accountable performance while delivering extraordinary value. Savings stem from a low-cost, focused global delivery model at scale that benefits from continuous re-investment in technology and specialized training.

About MADTECH.AI
MADTECH.AI is the Unified Marketing, Advertising, and Data Decision Intelligence Platform purpose-built to deliver speed to value for marketers. At MADTECH.AI, we make real-time AI-driven insights accessible to everyone. Whether you're a global or emerging brand, agency, publisher, or data/tech provider, we give you a single source of truth - so you can capture sharper insights that drive better marketing decisions faster and more affordably than ever before. MADTECH.AI unifies and transforms MADTECH data and centralizes decision intelligence in a single, affordable platform. Leave data wrangling, data model building, proactive problem solving, and data visualization to MADTECH.AI.

Job Description
We are seeking a highly skilled, results-oriented Product Manager - AI & BI to lead the growth and development of iSOCRATES' MADTECH.AI™ platform. As a core member of the product team, you will play an instrumental role in shaping the future of our AI-powered Marketing, Advertising, and Data Decision Intelligence solutions. Your focus will be on driving innovation in AI and BI capabilities, ensuring that our product meets the evolving needs of our B2B customers and enhances their marketing and data analytics capabilities.

Key Responsibilities
- Product Strategy & Roadmap Development: Lead the creation and execution of the MADTECH.AI™ product roadmap, with a focus on incorporating AI and BI technologies to deliver value for B2B customers. Collaborate with internal stakeholders to define product features, prioritize enhancements, and ensure alignment with iSOCRATES' long-term business objectives.
- AI & BI Product Development: Spearhead the design and development of innovative AI and BI features to enhance the MADTECH.AI™ platform's scalability, functionality, and user experience. Leverage cutting-edge technologies such as machine learning, predictive analytics, natural language processing (NLP), data visualization, reinforcement learning, and other advanced AI techniques to deliver powerful marketing, advertising, and data decision intelligence solutions.
- Cross-Functional Collaboration: Collaborate with cross-functional teams, including engineering, design, marketing, sales, and customer success, to ensure seamless product development and delivery. Facilitate communication between technical and business teams to ensure product features align with customer needs and market trends.
- Customer & Market Insights: Engage with customers and other stakeholders to gather feedback, identify pain points, and stay on top of market trends. Use this data to shape product development and enhance MADTECH.AI™ capabilities, ensuring they are well-positioned in the evolving market landscape.
- Product Lifecycle Management: Oversee the complete product lifecycle from ideation through launch and beyond. Manage ongoing iterations of the product based on customer feedback and performance metrics to ensure that MADTECH.AI™ remains competitive and meets user expectations.
- Data-Driven Decision Making: Use customer analytics, usage patterns, and performance data to inform key product decisions. Define success metrics, monitor product performance, and make adjustments as needed to drive product success.
- AI/BI Thought Leadership: Stay current on the latest trends in AI, BI, MarTech, and AdTech. Act as a thought leader both internally and externally to position iSOCRATES as an innovator in the MADTECH.AI space. Promote best practices and contribute to the company's overall strategy for AI and BI product development.

Qualifications & Skills
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, Business, or a related field
- At least 12 years of experience in product management, with a minimum of 7 years focused on B2B SaaS solutions and strong expertise in AI and BI technologies; prior experience in marketing, advertising, or data analytics platforms is highly preferred
- AI & BI Expertise: Deep understanding of Artificial Intelligence, Machine Learning, Natural Language Processing (NLP), Predictive Analytics, Data Visualization, Business Intelligence tools (e.g., Tableau, Power BI, Qlik), and their application in SaaS products, especially within the context of MarTech, AdTech, or DataTech
- AI Tools and Technologies: Hands-on experience with AI and BI tools such as:
  - Data Science Libraries: TensorFlow, PyTorch, Scikit-learn, Keras, XGBoost, LightGBM, Hugging Face Transformers, CatBoost, H2O.ai
  - BI Platforms: Tableau, Power BI, Qlik, Looker, Domo, Sisense, MicroStrategy
  - Machine Learning Tools: Azure ML, Google AI Platform, AWS SageMaker, Databricks, H2O.ai, Vertex AI
  - Data Analytics Tools: Apache Hadoop, Apache Spark, Apache Flink, SQL-based tools, dbt, Snowflake
  - Data Visualization Tools: D3.js, Plotly, Matplotlib, Seaborn, Chart.js, Superset
  - Cloud-Based AI Services: Google AI, AWS AI/ML services, IBM Watson, Microsoft Azure Cognitive Services, Oracle Cloud AI
  - Emerging Tools: AutoML platforms, MLOps tools, Explainable AI (XAI) tools
- Product Development: Proven experience in leading AI/BI-driven product development within SaaS platforms, including managing the full product lifecycle, from ideation to launch and post-launch iterations
- Agile Methodology: Experience working in Agile product development environments, with the ability to prioritize and manage multiple initiatives and product features simultaneously
- Analytical & Data-Driven: Strong analytical skills with a focus on leveraging data, performance metrics, and customer feedback to inform product decisions; ability to translate complex data into actionable insights
- Customer-Centric: Experience working directly with customers to understand their needs, pain points, and feedback; a customer-first mindset with a focus on building products that provide measurable value
- Excellent Communication Skills: Exceptional communication, presentation, and interpersonal skills, with the ability to engage and influence both technical teams and business stakeholders across different geographies
- Industry Knowledge: Familiarity with MADTECH.AI platforms and technologies; understanding of customer journey analytics, predictive analytics, and decision intelligence platforms like MADTECH.AI™ is a plus
- Cloud & SaaS Architecture: Familiarity with cloud-based solutions and large-scale SaaS architecture; understanding of how AI and BI features integrate with cloud infrastructure is beneficial
- Experience with AI-powered decision intelligence platforms like MADTECH.AI™ or similar MarTech, AdTech, or DataTech tools
- In-depth knowledge of cloud technologies, including AWS, Azure, or Google Cloud, and their integration with SaaS platforms
- Exposure to customer journey analytics, predictive analytics, and other advanced AI/BI tools
- Willingness to work from Mysore/Bangalore or travel to Mysore as per business requirements

Posted 1 week ago


10.0 years

0 Lacs

Gurgaon, Haryana, India

On-site


You are as unique as your background, experience, and point of view. Here, you'll be encouraged, empowered, and challenged to be your best self. You'll work with dynamic colleagues - experts in their fields - who are eager to share their knowledge with you. Your leaders will inspire and help you reach your potential and soar to new heights. Every day, you'll have new and exciting opportunities to make life brighter for our Clients - who are at the heart of everything we do. Discover how you can make a difference in the lives of individuals, families, and communities around the world.

Job Description: Principal Consultant - DevOps
Are you ready to shine? At Sun Life, we empower you to be your most brilliant self.

Who we are
Sun Life is a leading financial services company with a history of 150+ years that helps our clients achieve lifetime financial security and live healthier lives. We serve millions in Canada, the U.S., Asia, the U.K., and other parts of the world. We have a network of Sun Life advisors, third-party partners, and other distributors. Through them, we're helping set our clients free to live their lives their way, from now through retirement. We're working hard to support their wellness and health management goals, too. That way, they can enjoy what matters most to them. And that's anything from running a marathon to helping their grandchildren learn to ride a bike.
To do this, we offer a broad range of protection and wealth products and services to individuals, businesses, and institutions, including:
- Insurance: life, health, wellness, disability, critical illness, stop-loss, and long-term care insurance
- Investments: mutual funds, segregated funds, annuities, and guaranteed investment products
- Advice: financial planning and retirement planning services
- Asset management: pooled funds, institutional portfolios, and pension funds
With innovative technology, a strong distribution network, and long-standing relationships with some of the world's largest employers, we are today providing financial security to millions of people globally. At Sun Life, our asset management business draws on the talent and experience of professionals from around the globe.

Sun Life Global Solutions (SLGS)
Established in the Philippines in 1991 and in India in 2006, Sun Life Global Solutions (formerly Asia Service Centres), a microcosm of Sun Life, is poised to harness the regions' potential in a significant way - from India and the Philippines to the world. We are architecting and executing a BOLDER vision: being a Digital and Innovation Hub, shaping the Business, driving Transformation and superior Client experience by providing expert Technology, Business and Knowledge Services and advanced Solutions. We help our clients achieve lifetime financial security and live healthier lives - our core purpose and mission. Drawing on our collaborative and inclusive culture, we are reckoned as a 'Great Place to Work', 'Top 100 Best Places to Work for Women', and stand among the 'Top 11 Global Business Services Companies' across India and the Philippines.
The technology function at Sun Life Global Solutions is geared towards growing our existing business, deepening our client understanding, managing new-age technology systems, and demonstrating thought leadership. We are committed to building greater domain expertise and engineering ability, delivering end-to-end solutions for our clients, and taking a lead in intelligent automation. Tech services at Sun Life Global Solutions have evolved in areas such as application development and management, support, testing, digital, data engineering and analytics, infrastructure services, and project management. We are constantly expanding our strength in information technology and are looking for fresh talent who can bring ideas and values aligned with our digital strategy.
Our Client Impact strategy is motivated by the need to create an inclusive culture, empowered by highly engaged people. We are entering a new world that focuses on doing purpose-driven work. The kind that fills your day with excitement and determination, because when you love what you do, it never feels like work. We want to create an environment where you feel empowered to act and are surrounded by people who challenge you, support you, and inspire you to become the best version of yourself. As an employer, we not only want to attract top talent, but we want you to have the best Sun Life experience. We strive to Shine Together, Make Life Brighter & Shape the Future!

What will you do?
You will help implement automation, security, and speed-of-delivery solutions across Sun Life and act as a change agent for the adoption of a DevOps mindset. You will coach and mentor teams, IT leaders, and business leaders and create and maintain ongoing learning journeys. You will play a critical role in supporting and guiding DevOps Engineers and technical leaders to ensure that DevOps practices are employed globally. You will act as a role model by demonstrating the right mindset, including a test-and-learn attitude, a bias for action, a passion to innovate, and a willingness to learn. You will lead a team of highly skilled and collaborative individuals and will lead new-hire onboarding, talent development, retention, and succession planning. Our engineering career framework helps our engineers understand the scope, collaborative reach, and levers for impact at every job role; it defines the key behaviors and deliverables specific to one's role and team and helps them plan their career with Sun Life.

Your scope of work / key responsibilities:
- Analyze, investigate, and recommend solutions for continuous improvement, process enhancements, identified pain points, and more efficient workflows
- Create templates, standards, and models to facilitate future implementations, and adjust priorities when necessary
- Collaborate and communicate with architects, designers, business system analysts, application analysts, operations teams, and testing specialists to deliver fully automated ALM systems
- Confidently speak up, bring people together, facilitate meetings, record minutes and actions, and rally the team towards a common goal
- Deploy, configure, manage, and perform ongoing maintenance of technical infrastructure, including all DevOps tooling used by our Canadian IT squads
- Set up and maintain fully automated CI/CD pipelines for multiple Java / .NET environments using tools like Bitbucket, Jenkins, Ansible, Docker, etc.
- Guide development teams with the preparation of releases for production; this may include assisting in the automation of performance tests, validation of infrastructure requirements, and guiding the team with respect to system decisions
- Create or improve automated deployment processes, techniques, and tools
- Troubleshoot and resolve technical operational issues related to IT infrastructure
- Review and analyze organizational needs and goals to determine future impacts to applications and systems
- Ensure information security standards and requirements are incorporated into all solutions
- Stay current with trends in emerging technologies and how they could apply to Sun Life

Key Qualifications and experience:
- 10+ years of continuous integration and delivery (CI/CD) experience in a systems development life cycle environment using Bitbucket, Jenkins, CDD, etc.
- Self-sufficient and experienced with either modern programming languages (e.g., Java or C#) or scripting languages such as Python (e.g., for SageMaker), YAML, or similar
- Working knowledge of SQL, Tableau, Grafana
- Advanced knowledge of DevOps with a security and automation mindset
- Knowledge of using and configuring build and orchestration tools such as Jenkins, SonarQube, Checkmarx, Snyk, Artifactory, Azure DevOps, Docker, Kubernetes, OpenShift, Ansible, Continuous Delivery Director (CDD)
- Advanced knowledge of deployment (e.g., Ansible, Chef) and containerization (Docker/Kubernetes) tooling
- IaaS/PaaS/SaaS deployment and operations experience
- Experience with native mobile development on iOS and/or Android is an asset
- Experience with source code management tools such as Bitbucket, Git, TFS

Technical Credentials: Java/Python, Jenkins, Ansible, Kubernetes, etc.
Primary Location: Gurugram/Bengaluru
Schedule: 12:00 PM to 8:30 PM
Job Category: IT - Application Development
Posting End Date: 29/06/2025
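At its core, the CI/CD work this role describes is running ordered stages and failing fast. A toy Python sketch of that control flow, with hypothetical stage names and a deliberately failing test stage (a real pipeline would be expressed in Jenkins or CDD, not hand-rolled like this):

```python
# Toy CI/CD runner: execute named stages in order, stop on first failure.
def run_pipeline(stages):
    results = []
    for name, step in stages:
        try:
            step()
            results.append((name, "ok"))
        except Exception as exc:
            results.append((name, f"failed: {exc}"))
            break  # fail fast, as a real pipeline would
    return results

def failing_tests():
    raise RuntimeError("2 tests failed")

pipeline = [
    ("checkout", lambda: None),
    ("build", lambda: None),
    ("test", failing_tests),
    ("deploy", lambda: None),  # never reached: the run stops at "test"
]
print(run_pipeline(pipeline))
# → [('checkout', 'ok'), ('build', 'ok'), ('test', 'failed: 2 tests failed')]
```

The fail-fast break is the important design choice: a broken build must never reach the deploy stage, which is exactly what quality gates like SonarQube or Checkmarx enforce in the toolchain listed above.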

Posted 1 week ago


6.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Job Summary
We are seeking an experienced AI Solution Architect to lead the design and implementation of scalable AI/ML systems and solutions. The ideal candidate will bridge the gap between business needs and technical execution, translating complex problems into AI-powered solutions that drive strategic value.

Key Responsibilities
- Solution Design & Architecture: Design end-to-end AI/ML architectures, integrating with existing enterprise systems and cloud infrastructure. Lead technical planning, proof-of-concepts (POCs), and solution prototyping for AI use cases.
- AI/ML System Development: Collaborate with data scientists, engineers, and stakeholders to define solution requirements. Guide the selection and use of machine learning frameworks (e.g., TensorFlow, PyTorch, Hugging Face). Evaluate and incorporate LLMs, computer vision, NLP, and generative AI models as appropriate.
- Technical Leadership: Provide architectural oversight and code reviews to ensure high-quality delivery. Establish best practices for AI model lifecycle management (training, deployment, monitoring). Advocate for ethical and responsible AI practices, including bias mitigation and model transparency.
- Stakeholder Engagement: Partner with business units to understand goals and align AI solutions accordingly. Communicate complex AI concepts to non-technical audiences and influence executive stakeholders.
- Innovation & Strategy: Stay updated on emerging AI technologies, trends, and industry standards. Drive innovation initiatives to create competitive advantage through AI.

Required Qualifications
- Bachelor's or Master's in Computer Science, Engineering, Data Science, or a related field
- 6 years of experience in software development or data engineering roles
- 3+ years of experience designing and delivering AI/ML solutions
- Proficiency in cloud platforms (AWS, Azure, GCP) and container orchestration (Kubernetes, Docker)
- Hands-on experience with MLOps tools (e.g., MLflow, Kubeflow, SageMaker, Vertex AI)
- Strong programming skills in Python and familiarity with CI/CD pipelines

Preferred Qualifications
- Experience with generative AI, foundation models, or large language models (LLMs)
- Knowledge of data privacy regulations (e.g., GDPR, HIPAA) and AI compliance frameworks
- Certifications in cloud architecture or AI (e.g., AWS Certified Machine Learning, Google Professional ML Engineer)
- Strong analytical, communication, and project management skills
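The model lifecycle management mentioned under Technical Leadership can be pictured as a tiny registry with staged promotion. This is a hedged, purely illustrative sketch: the stage names and `ModelRegistry` class are invented here, while real systems would use MLflow's model registry or an equivalent managed service.

```python
# Minimal model registry: register versions and promote them through
# stages, archiving whatever was previously in production.
VALID_STAGES = ("staging", "production", "archived")

class ModelRegistry:
    def __init__(self):
        self._models = {}  # name -> list of {"version", "stage"} records

    def register(self, name):
        versions = self._models.setdefault(name, [])
        version = len(versions) + 1  # monotonically increasing versions
        versions.append({"version": version, "stage": "staging"})
        return version

    def promote(self, name, version):
        # Archive the current production model, then promote `version`.
        for record in self._models[name]:
            if record["stage"] == "production":
                record["stage"] = "archived"
        self._models[name][version - 1]["stage"] = "production"

    def production_version(self, name):
        for record in self._models[name]:
            if record["stage"] == "production":
                return record["version"]
        return None

registry = ModelRegistry()
registry.register("churn")     # v1 lands in staging
registry.register("churn")     # v2 lands in staging
registry.promote("churn", 1)
registry.promote("churn", 2)   # v1 archived, v2 live
print(registry.production_version("churn"))  # → 2
```

Keeping archived versions around (rather than deleting them) is what makes rollback a one-line promote instead of a redeployment, which is the operational point of the "training, deployment, monitoring" lifecycle the posting names.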

Posted 1 week ago


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies