
661 SageMaker Jobs - Page 17

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Kochi, Kerala, India

On-site


We’re hiring a Senior ML Engineer (MLOps) — 3-5 yrs
Location: Kochi or Chennai

What you’ll do
- Tame data → pull, clean, and shape structured & unstructured data.
- Orchestrate pipelines → Airflow / Step Functions / ADF… your call.
- Ship models → build, tune, and push to prod on SageMaker, Azure ML, or Vertex AI.
- Scale → Spark / Databricks for the heavy lifting.
- Automate everything → Docker, Kubernetes, CI/CD, MLflow, Seldon, Kubeflow (a tracking sketch follows below).
- Pair up → work with engineers, architects, and business folks to solve real problems, fast.

What you bring
- 3+ yrs hands-on MLOps (4-5 yrs total software experience).
- Proven chops on one hyperscaler (AWS, Azure, or GCP).
- Confidence with Databricks / Spark, Python, SQL, TensorFlow / PyTorch / Scikit-learn.
- You debug Kubernetes in your sleep and treat Dockerfiles like breathing.
- You prototype with open source first, choose the right tool, then make it scale.
- Sharp mind, low ego, bias for action.

Nice-to-haves
- SageMaker, Azure ML, or Vertex AI in production.
- Love for clean code, clear docs, and crisp PRs.

Why Datadivr?
- Domain focus: we live and breathe F&B — your work ships to plants, not just slides.
- Small team, big autonomy: no endless layers; you own what you build.

📬 How to apply
Shoot your CV + a short note on a project you shipped to careers@datadivr.com or DM me here. We reply to every serious applicant. Know someone perfect? Please share — good people know good people.
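Purely illustrative and not part of the listing: a minimal MLflow experiment-tracking sketch of the kind this role would automate. The experiment name, model choice, and logged metric are assumptions.

```python
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demo-experiment")  # hypothetical experiment name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    # Stores the fitted model as a run artifact for later registration/serving.
    mlflow.sklearn.log_model(model, artifact_path="model")
```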

Posted 2 weeks ago

Apply

12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Principal Software Engineer - Backend, Accounting Domain
Location: Chennai
Work Mode: Hybrid, requiring a minimum of 2 days per week in the office.
Minimum educational qualifications: Any UG

Toast is driven by building the restaurant platform that helps restaurants adapt, take control, and get back to what they do best: building the businesses they love.

Are you bready* for a change? Toast is looking for a Principal Software Engineer to take responsibility for bringing our architecture to the next level and leveling up the team. As well as providing a cutting-edge point-of-sale system for restaurants, Toast also processes billions of dollars of payments and offers best-in-class financial service solutions to our customers. As we grow our solutions to meet the needs of our customers, we are also focused on optimizing for extensibility, resilience, and scalability, using continuous-delivery tools and methodology. Join us to improve our platform and add the next generation of products.

About this roll* (Responsibilities)
As a Principal Software Engineer on our team, you will:
- Design and deliver the next generation of Toast products using the Toast set of technologies (Kotlin, DynamoDB, React, Pulsar, Camel, GraphQL, Big Data technologies, etc.)
- Work with our Data Platform teams to develop a best-in-class reporting and analytics product.
- Document solution design, write and review code, test and roll out solutions to production, capturing and actioning customer feedback to iteratively enhance customer experience.
- Collaborate with peers to optimize solution design for performance, flexibility, and scalability, including enablement of multi-product engineering teams on a common framework and platform.
- Collaborate with UX, Product Management, QA, and partner engineering teams to build best-in-class solutions in a complex and fast-moving environment.
- Directly coach and mentor engineers on industry-standard development best practices.

Do you have the right ingredients*? (Requirements)
- 12+ years of software development experience.
- Experience with continuous delivery of high-quality, reliable, and scalable services to production.
- Experience in AI, cloud, image processing, and full-stack development.
- Proficient in database technologies such as SQL Server, Postgres, or DynamoDB.
- Proficient in cloud technologies such as AWS, Azure, or GCP.
- Proficient in Java, Kotlin, C#, or other object-oriented language(s).
- Experience working in a team with Agile/Scrum methodology.
- Experience leading the build and scale of mission-critical platform components.
- Experience tackling complex and ambiguous problems, communicating clearly with others to solve them, and sharing knowledge to help the whole team succeed.
- Proficient in balancing getting things done with platform stability.
- Passionate about writing awesome code and delivering impactful, scalable solutions.
- Hands-on mentoring of other engineers.

Our Spread* of Total Rewards
We strive to provide competitive compensation and benefits programs that help to attract, retain, and motivate the best and brightest people in our industry. Our total rewards package goes beyond great earnings potential and provides the means to a healthy lifestyle with the flexibility to meet Toasters’ changing needs. Learn more about our benefits at https://careers.toasttab.com/toast-benefits.
Our Tech Stack
Toast’s products run on a stack that ranges from guest- and restaurant-facing Android tablets to backend services in Java to internal, guest-facing, and restaurant-facing web apps. Our backend services follow a microservice architecture written using Java 8 and Dropwizard; we use AWS extensively, ranging from S3 to RDS to Lambda. We have our own platform for user management, service elevations, and robust load balancing. Toast stores data in a set of sharded Postgres databases and utilizes Apache Spark for large-scale data workloads, including query and batch processing. The front end is built primarily using React and ES6. The main Toast POS application is an Android application written in Java and Kotlin. For data between tablets and our cloud platform we operate RabbitMQ clusters as well as direct tablet communication to the back end. Toast also uses .NET/C# and Java on the backend, with a front end written primarily in MVC, React, and Angular, and SQL Server/Aurora Postgres for our databases. Other technologies include SQS, SNS, DynamoDB, SageMaker, CloudWatch, Redshift, etc.

Diversity, Equity, and Inclusion is Baked into our Recipe for Success
At Toast, our employees are our secret ingredient: when they thrive, we thrive. The restaurant industry is one of the most diverse, and we embrace that diversity with authenticity, inclusivity, respect, and humility. By embedding these principles into our culture and design, we create equitable opportunities for all and raise the bar in delivering exceptional experiences.

We Thrive Together
We embrace a hybrid work model that fosters in-person collaboration while valuing individual needs. Our goal is to build a strong culture of connection as we work together to empower the restaurant community. To learn more about how we work globally and regionally, check out: https://careers.toasttab.com/locations-toast. Apply today!

Toast is committed to creating an accessible and inclusive hiring process. As part of this commitment, we strive to provide reasonable accommodations for persons with disabilities to enable them to access the hiring process. If you need an accommodation to access the job application or interview process, please contact candidateaccommodations@toasttab.com.

For roles in the United States: it is unlawful in Massachusetts to require or administer a lie detector test as a condition of employment or continued employment. An employer who violates this law shall be subject to criminal penalties and civil liability.

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote


Make an impact with NTT DATA
Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion – it’s a place where you can grow, belong and thrive.

Your day at NTT DATA
Cloud AI/GenAI Engineer (ServiceNow)
We are seeking a talented AI/GenAI Engineer to join our team in delivering cutting-edge AI solutions to clients. The successful candidate will be responsible for implementing, developing, and deploying AI/GenAI models and solutions on cloud platforms. This role requires knowledge of ServiceNow (SN) modules such as CSM and virtual agent development. The candidate should have strong technical aptitude, problem-solving skills, and the ability to work effectively with clients and internal teams.

What You'll Be Doing
Key Responsibilities:
- Cloud AI Implementation: Implement and deploy AI/GenAI models and solutions using various cloud platforms (e.g., AWS SageMaker, Azure ML, Google Vertex AI) and frameworks (e.g., TensorFlow, PyTorch, LangChain, Vellum).
- Build Virtual Agents in SN: Design, develop, and deploy virtual agents using the SN agent builder.
- Integrate SN: Design and develop seamless integration of SN with other external AI systems.
- Agentic AI: Assist in developing agentic AI systems on cloud platforms, enabling autonomous decision-making and action-taking capabilities in AI solutions.
- Cloud-Based Vector Databases: Implement cloud-native vector databases (e.g., Pinecone, Weaviate, Milvus) or cloud-managed services for efficient similarity search and retrieval in AI applications (see the retrieval sketch after this posting).
- Model Evaluation and Fine-tuning: Evaluate and optimize cloud-deployed generative models using metrics like perplexity, BLEU score, and ROUGE score, and fine-tune models using techniques like prompt engineering, instruction tuning, and transfer learning.
- Security for Cloud LLMs: Apply security practices for cloud-based LLMs, including data encryption, IAM policies, and network security configurations.
- Client Support: Support client engagements by implementing AI requirements and contributing to solution delivery.
- Cloud Solution Implementation: Build scalable and efficient cloud-based AI/GenAI solutions according to architectural guidelines.
- Cloud Model Development: Develop and fine-tune AI/GenAI models using cloud services for specific use cases, such as natural language processing, computer vision, or predictive analytics.
- Testing and Validation: Conduct testing and validation of cloud-deployed AI/GenAI models, including performance evaluation and bias detection.
- Deployment and Maintenance: Deploy AI/GenAI models in cloud production environments, ensuring seamless integration with existing systems and infrastructure.

Requirements:
- Education: Bachelor's or Master's in Computer Science, AI, ML, or related fields.
- Experience: 3-5 years of experience in engineering solutions, with a track record of delivering cloud AI solutions.
- At least 2 years’ experience with SN and the SN agent builder.

Technical Skills:
- Proficiency in cloud AI/GenAI services and technologies across major cloud providers (AWS, Azure, GCP).
- Experience with cloud-native vector databases and managed similarity-search services.
- Experience with SN modules like CSM and the virtual agent builder.
- Experience with security measures for cloud-based LLMs, including data encryption, access controls, and compliance requirements.
- Programming Skills: Strong programming skills in languages like Python or R.
- Cloud Platform Knowledge: Strong understanding of cloud platforms, their AI services, and best practices for deploying ML models in the cloud.
- Communication: Excellent communication and interpersonal skills, with the ability to work effectively with clients and internal teams.
- Problem-Solving: Strong problem-solving skills, with the ability to analyse complex problems and develop creative solutions.
- Nice to have: Experience with serverless architectures for AI workloads.
- Nice to have: Experience with ReactJS for rapid prototyping of cloud AI solution frontends.

Location: Delhi or Bangalore (with remote work options)
Workplace type: Hybrid working

About NTT DATA
NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo.

Equal Opportunity Employer
NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.
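Not part of the listing: a minimal sketch of the similarity-search pattern behind the vector-database bullet above, using plain NumPy cosine similarity in place of a managed service. The embedding function is a stand-in assumption, so the retrieved match here is illustrative only.

```python
import numpy as np

def embed(texts):
    """Stand-in embedding function; a real system would call a model
    (e.g., a SageMaker or Bedrock endpoint) to produce these vectors."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 384))  # 384-dim vectors, assumed

docs = ["refund policy", "shipping times", "password reset"]
doc_vecs = embed(docs)
query_vec = embed(["how do I reset my password?"])[0]

# Cosine similarity between the query and every document vector.
sims = doc_vecs @ query_vec / (
    np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec)
)
best = int(np.argmax(sims))
print(docs[best], float(sims[best]))
```

A vector database performs the same nearest-neighbour lookup, but over millions of vectors with an index instead of a brute-force dot product.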

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

India

Remote


Job Title: Senior Data Scientist (Remote – India) – Predictive Modeling & Machine Learning
Location: Remote (India)
Job Type: Full-time
Experience: 5+ years

Job Summary:
We are looking for a highly skilled Senior Data Scientist to join our India-based team in a remote capacity. This role focuses on building and deploying advanced predictive models to influence key business decisions. The ideal candidate should have strong experience in machine learning, data engineering, and working in cloud environments, particularly with AWS. You'll be collaborating closely with cross-functional teams to design, develop, and deploy cutting-edge ML models using tools like SageMaker, Bedrock, PyTorch, TensorFlow, Jupyter Notebooks, and AWS Glue. This is a fantastic opportunity to work on impactful AI/ML solutions within a dynamic and innovative team.

Key Responsibilities:

Predictive Modeling & Machine Learning
- Develop and deploy machine learning models for forecasting, optimization, and predictive analytics.
- Use tools such as AWS SageMaker, Bedrock, LLMs, TensorFlow, and PyTorch for model training and deployment (see the SageMaker sketch after this posting).
- Perform model validation, tuning, and performance monitoring.
- Deliver actionable insights from complex datasets to support strategic decision-making.

Data Engineering & Cloud Computing
- Design scalable and secure ETL pipelines using AWS Glue.
- Manage and optimize data infrastructure in the AWS environment.
- Ensure high data integrity and availability across the pipeline.
- Integrate AWS services to support the end-to-end machine learning lifecycle.

Python Programming
- Write efficient, reusable Python code for data processing and model development.
- Work with libraries like pandas, scikit-learn, TensorFlow, and PyTorch.
- Maintain documentation and ensure best coding practices.

Collaboration & Communication
- Work with engineering, analytics, and business teams to understand and solve business challenges.
- Present complex models and insights to both technical and non-technical stakeholders.
- Participate in sprint planning, stand-ups, and reviews in an Agile setup.

Preferred Experience (Nice to Have):
- Experience with applications in the utility industry (e.g., demand forecasting, asset optimization).
- Exposure to Generative AI technologies.
- Familiarity with geospatial data and GIS tools for predictive analytics.

Qualifications:
- Master’s in Computer Science, Statistics, Mathematics, or a related field.
- 5+ years of relevant experience in data science, predictive modeling, and machine learning.
- Experience working in cloud-based data science environments (AWS preferred).
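Illustrative only: a minimal sketch of launching a SageMaker training job with the Python SDK, roughly the train-and-deploy workflow this role describes. The role ARN, S3 path, and train.py entry point are assumptions.

```python
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # assumed role ARN

# train.py is an assumed script that reads the 'train' channel and fits a model.
estimator = SKLearn(
    entry_point="train.py",
    framework_version="1.2-1",
    instance_type="ml.m5.large",
    role=role,
    sagemaker_session=session,
)

# Launches a managed training job against data staged in S3 (assumed path).
estimator.fit({"train": "s3://example-bucket/churn/train/"})

# Deploys the fitted model behind a real-time HTTPS endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```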

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Title: MLOps Engineer
Company: Aaizel International Technologies Pvt. Ltd.
Location: On-site
Experience Required: 6+ years
Employment Type: Full-time

About Aaizeltech
Aaizeltech is a deep-tech company building AI/ML-powered platforms, scalable SaaS applications, and intelligent embedded systems. We are seeking a Senior MLOps Engineer to lead the architecture, deployment, automation, and scaling of infrastructure and ML systems across multiple product lines.

Role Overview
This role requires strong expertise and hands-on MLOps experience. You will architect and manage cloud infrastructure, CI/CD systems, Kubernetes clusters, and full ML pipelines, from data ingestion to deployment and drift monitoring.

Key Responsibilities

MLOps Responsibilities:
- Collaborate with data scientists to operationalize ML workflows.
- Build complete ML pipelines with Airflow, Kubeflow Pipelines, or Metaflow.
- Deploy models using KServe, Seldon Core, BentoML, TorchServe, or TF Serving.
- Package models into Docker containers using Flask, FastAPI, or Django for APIs.
- Automate dataset versioning and model tracking via DVC and MLflow.
- Set up model registries and ensure reproducibility and audit trails.
- Implement model monitoring for: (i) data drift and schema validation (using tools like Evidently AI, Alibi Detect; a drift-check sketch follows after this posting); (ii) performance metrics (accuracy, precision, recall); (iii) infrastructure metrics (latency, throughput, memory usage).
- Implement event-driven retraining workflows triggered by drift alerts or data freshness.
- Schedule GPU workloads on Kubernetes and manage resource utilization for ML jobs.

Infrastructure Responsibilities:
- Design and manage secure, scalable infrastructure using AWS, GCP, or Azure.
- Build and maintain CI/CD pipelines using Jenkins, GitLab CI, GitHub Actions, or AWS DevOps.
- Write and manage Infrastructure as Code using Terraform, Pulumi, or CloudFormation.
- Automate configuration management with Ansible, Chef, or SaltStack.
- Manage Docker containers and advanced Kubernetes resources (Helm, StatefulSets, CRDs, DaemonSets).
- Implement robust monitoring and alerting stacks: Prometheus, Grafana, CloudWatch, Datadog, ELK, or Loki.

Must-Have Skills
- Advanced expertise in Linux administration, networking, and shell scripting.
- Strong knowledge of Docker, Kubernetes, and container security.
- Hands-on experience with IaC tools like Terraform and configuration management like Ansible.
- Proficiency in cloud-native services: IAM, EC2, EKS/GKE/AKS, S3, VPCs, load balancing, Secrets Manager.
- Mastery of CI/CD tools (e.g., Jenkins, GitLab, GitHub Actions).
- Familiarity with SaaS architecture, distributed systems, and multi-environment deployments.
- Proficiency in Python for scripting and ML-related deployments.
- Experience integrating monitoring, alerting, and incident management workflows.
- Strong understanding of DevSecOps, security scans (e.g., Trivy, SonarQube, Snyk), and secrets management tools (Vault, SOPS).
- Experience with GPU orchestration and hybrid on-prem + cloud environments.

Nice-to-Have Skills
- Knowledge of GitOps workflows (e.g., ArgoCD, FluxCD).
- Experience with Vertex AI, SageMaker Pipelines, or Triton Inference Server.
- Familiarity with Knative, Cloud Run, or serverless ML deployments.
- Exposure to cost estimation, rightsizing, and usage-based autoscaling.
- Understanding of ISO 27001, SOC 2, or GDPR-compliant ML deployments.
- Knowledge of RBAC for Kubernetes and ML pipelines.
Who You'll Work With
- AI/ML Engineers, Backend Developers, Frontend Developers, QA Team
- Product Owners, Project Managers, and external Government or Enterprise Clients

How to Apply
If you are passionate about embedded systems and excited to work on next-generation technologies, we would love to hear from you. Please send your resume and a cover letter outlining your relevant experience to hr@aaizeltech.com, bhavik@aaizeltech.com, or anju@aaizeltech.com (Contact No: 7302201247).
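Outside the listing itself: a minimal sketch of the data-drift check named in the monitoring bullet above, using a population stability index (PSI) computed with NumPy rather than a full tool like Evidently AI. The bin count and alert threshold are conventional assumptions, not values from the posting.

```python
import numpy as np

def psi(reference, current, bins=10):
    """Population Stability Index between a reference and a current
    feature distribution; higher values mean more drift."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Clip to avoid division by zero / log(0) in sparse bins.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(42)
train_feature = rng.normal(0.0, 1.0, 10_000)  # distribution at training time
live_feature = rng.normal(0.4, 1.2, 10_000)   # shifted live traffic

score = psi(train_feature, live_feature)
# Rule of thumb (an assumption, not a standard): PSI > 0.2 signals drift,
# which could trigger the event-driven retraining workflow described above.
print(f"PSI = {score:.3f}, drift alert: {score > 0.2}")
```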

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site


Job Description:
We are seeking a skilled and experienced DevOps Engineer with 3+ years of hands-on expertise to join our dynamic team. The ideal candidate will be proficient in AWS services, container orchestration, CI/CD pipelines, and Infrastructure as Code (IaC). You will be responsible for designing, implementing, and maintaining scalable, secure, and efficient cloud infrastructure solutions to support development and production environments.

Key Responsibilities:
- Design, deploy, and manage AWS infrastructure using ECS, EKS, Lambda, and other AWS services.
- Work with services like Bedrock and SageMaker for ML/AI infrastructure support.
- Implement and manage IaC using Terraform, AWS CDK, or CloudFormation (a CDK sketch follows after this posting).
- Build and maintain CI/CD pipelines using tools like Jenkins, GitHub Actions, or AWS CodePipeline.
- Monitor and optimize cloud infrastructure for performance, cost-efficiency, and security.
- Collaborate with developers to ensure seamless deployment and integration.
- Automate infrastructure tasks using scripting languages like Python or Bash.
- Troubleshoot infrastructure issues and deployment failures.
- Apply best practices for cloud security, governance, and observability.

Requirements:
- Minimum 3 years of DevOps engineering experience.
- Strong experience with core AWS services: ECS, EKS, Lambda, EC2, VPC, API Gateway, Load Balancer, S3, CloudWatch.
- Practical experience with Terraform, AWS CDK, or CloudFormation.
- Proficient in Docker and Kubernetes for containerization and orchestration.
- Hands-on experience with CI/CD pipelines using Jenkins, GitHub Actions, or AWS-native tools.
- Familiarity with scripting for automation (Python, Bash).
- Good understanding of cloud security principles, monitoring, and logging.
- Strong troubleshooting and communication skills.

Preferred Certifications (a plus):
- AWS Certified DevOps Engineer
- AWS Certified Solutions Architect
- AWS Certified AI Practitioner
- Certified Kubernetes Administrator (CKA)
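For illustration, not from the posting: a minimal AWS CDK (v2, Python) sketch of the IaC responsibility above, defining a single versioned, encrypted S3 bucket. The stack and construct names are hypothetical.

```python
from aws_cdk import App, RemovalPolicy, Stack, aws_s3 as s3
from constructs import Construct

class ArtifactStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Versioned, encrypted, non-public bucket for build/model artifacts.
        s3.Bucket(
            self,
            "ModelArtifacts",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            removal_policy=RemovalPolicy.RETAIN,
        )

app = App()
ArtifactStack(app, "artifact-stack")
app.synth()  # `cdk deploy` turns the synthesized template into real resources
```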

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Chandigarh, Chandigarh

On-site


Job Title: Experienced AI Developer
Location: Chandigarh
Job Type: Full-Time
Experience Level: Mid to Senior-Level

Job Summary:
As an AI Developer, you will be responsible for designing, developing, and deploying machine learning and deep learning models. You will work closely with our data science, product, and engineering teams to integrate AI capabilities into our software applications.

Key Responsibilities:
- Design and implement AI/ML models tailored to business requirements.
- Train, fine-tune, and evaluate models using datasets from various domains (an evaluation sketch follows after this posting).
- Integrate AI solutions into web or mobile applications.
- Collaborate with cross-functional teams to define AI strategies.
- Optimize models for performance, scalability, and reliability.
- Stay updated with the latest advancements in AI/ML technologies and frameworks.
- Deploy models to production environments using tools like Docker, Kubernetes, or cloud services (AWS/GCP/Azure).
- Write clean, maintainable, and well-documented code.

Required Skills and Qualifications:
- 3+ years of experience in AI/ML development.
- Strong proficiency in Python and popular ML libraries (TensorFlow, PyTorch, Scikit-learn).
- Hands-on experience with NLP, computer vision, or recommendation systems.
- Experience with data preprocessing, feature engineering, and model evaluation.
- Familiarity with REST APIs and microservices architecture.
- Solid understanding of AI ethics, bias mitigation, and responsible AI development.

Preferred Qualifications:
- Experience with large language models (e.g., GPT, LLaMA, Claude).
- Knowledge of AI deployment tools like MLflow, SageMaker, or Vertex AI.
- Experience with prompt engineering or fine-tuning foundation models.

Job Type: Full-time
Pay: ₹35,000.00 - ₹70,000.00 per month
Benefits: Health insurance
Schedule: Day shift
Supplemental Pay: Performance bonus
Work Location: In person
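Illustrative sketch, not from the posting: a minimal train-and-evaluate loop of the kind the second responsibility describes, using a public scikit-learn dataset as a stand-in for domain data.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Public dataset stands in for the "datasets from various domains" above.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Preprocessing and model in one pipeline so the scaler is fit on train data only.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Held-out evaluation: precision, recall, and F1 per class.
print(classification_report(y_test, model.predict(X_test)))
```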

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Jaipur, Rajasthan, India

Remote


Role: Model Developer
Work Location: Jaipur

Tiger Analytics is a global AI and analytics consulting firm. With data and technology at the core of our solutions, our 4000+ tribe is solving problems that eventually impact the lives of millions globally. Our culture is modeled around expertise and respect with a team-first mindset. Headquartered in Silicon Valley, you’ll find our delivery centers across the globe and offices in multiple cities across India, the US, UK, Canada, and Singapore, including a substantial remote global workforce. We’re Great Place to Work-Certified™. Working at Tiger Analytics, you’ll be at the heart of an AI revolution. You’ll work with teams that push the boundaries of what is possible and build solutions that energize and inspire.

About BFS, and your work:
Tiger’s growing Banking & Financial Services vertical seeks self-motivated AI/ML/Data Science professionals with domain expertise in the banking and financial services space and strong technical skills for a challenging role in the Banking & Financial Services Analytics area. Responsibilities include working with various clients to design, develop, and implement Analytics and Data Science use cases. The skill set needed is a mix of hands-on analytics and predictive model development/validation experience, domain know-how in similar areas, and team-leading and stakeholder-management experience. The role might also come with managerial responsibility: setting and reviewing tasks, performing QC, and supervising, mentoring, and coaching analysts. You will monitor and validate aggregate model risk in alignment with the bank’s risk strategy and lead a team of model validators, who use their predictive and AI modeling knowledge to review and validate a wide variety of models. Specifically, you will:
- Manage a growing Model Validation team responsible for independent first-line validation of predictive and generative AI models.
- Perform independent validations of financial, statistical, and behavioral models commensurate with their criticality ratings, and assist with the validation and review of models regarding their theoretical soundness, testing design, and points of weakness.
- Interpret data to recognize any potential risk exposure.
- Develop challenger models that help validate existing models, assist with outcome analysis, and ensure compliance with the model risk monitoring framework.
- Evaluate governance for Model Risk Management by reviewing policies, controls, risk assessments, documentation standards, and validation standards.

About the role:
This pivotal role focuses on the end-to-end development, implementation, and ongoing monitoring of both application and behavioral scorecards within our dynamic retail banking division. While application scorecard development will be the primary area of focus and expertise required, you will have scope to contribute to behavioral scorecard initiatives. The primary emphasis will be on our unsecured lending portfolio, including personal loans, overdrafts, and particularly credit cards. You will be instrumental in enhancing credit risk management capabilities, optimizing lending decisions, and driving profitable growth by leveraging advanced analytical techniques and robust statistical models. This role requires a deep understanding of the credit lifecycle, regulatory requirements, and the ability to translate complex data insights into actionable business strategies within the Indian banking context.
Key Responsibilities:
- End-to-End Scorecard Development (Application & Behavioral): Lead the design, development, and validation of new application and behavioral scorecards from scratch, specifically tailored for the Indian retail banking landscape and unsecured portfolios (personal loans, credit cards) across ETB and NTB segments; prior experience in this area is required. Utilize advanced statistical methodologies and machine learning techniques, leveraging Python for data manipulation, model building, and validation. Ensure robust model validation, back-testing, stress testing, and scenario analysis to ascertain model robustness, stability, and predictive power, adhering to RBI guidelines and internal governance.
- Cloud-Native Model Deployment & MLOps: Drive the deployment of developed scorecards into production environments on AWS, collaborating with engineering teams to integrate models into credit origination and decisioning systems. Implement and manage MLOps practices for continuous model monitoring, re-training, and version control within the AWS ecosystem.
- Data Strategy & Feature Engineering: Proactively identify, source, and analyze diverse datasets (e.g., internal bank data; credit bureau data from CIBIL, Experian, Equifax) to derive highly predictive features for scorecard development; prior experience in this area is required. Address data quality challenges, ensuring data integrity and suitability for model inputs in an Indian banking context.
- Performance Monitoring & Optimization: Establish and maintain comprehensive model performance monitoring frameworks, including monthly/quarterly tracking of key performance indicators (KPIs) like the Gini coefficient, KS statistic, and portfolio vintage analysis (a Gini/KS sketch follows after this posting). Identify triggers for model recalibration or redevelopment based on performance degradation, regulatory changes, or evolving market dynamics.

Required Qualifications, Capabilities, and Skills:
- Education: Bachelor's or Master's degree in a quantitative discipline such as Mathematics, Statistics, Physics, Computer Science, Financial Engineering, or a related field.
- Experience: 3-10 years of hands-on experience in credit risk model development, with a strong focus on application scorecard development and significant exposure to behavioral scorecards, preferably within the Indian banking sector, applying concepts including roll-rate analysis, swap-set analysis, and reject inferencing. Demonstrated prior experience in model development and deployment in AWS environments, with an understanding of cloud-native MLOps principles. Proven track record in building and validating statistical models (e.g., logistic regression, GBDT, random forests) for credit risk.
- Technical Skills: Exceptional hands-on expertise in Python (pandas, NumPy, scikit-learn, SciPy) for data manipulation, statistical modeling, and machine learning. Proficiency in SQL for data extraction and manipulation. Familiarity with AWS services relevant to data science and machine learning (e.g., S3, EC2, SageMaker, Lambda). Knowledge of SAS is a plus, but Python is the primary requirement.
- Analytical & Soft Skills: Deep understanding of the end-to-end lifecycle of application and behavioral scorecard development, from data sourcing to deployment and monitoring. Strong understanding of credit risk principles, the credit lifecycle, and regulatory frameworks pertinent to Indian banking (e.g., RBI guidelines on credit risk management and model risk management). Excellent analytical, problem-solving, and critical thinking skills.
- Ability to communicate complex technical concepts effectively to both technical and non-technical stakeholders.
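Beyond the listing text: a minimal sketch of the two scorecard KPIs named above, the Gini coefficient and the KS statistic, computed from model scores with scikit-learn. The synthetic data is a stand-in for a real scored portfolio.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(7)
# Stand-in scored portfolio: 1 = default; scores loosely separate the classes.
y_true = rng.integers(0, 2, 5_000)
scores = rng.normal(loc=y_true * 0.8, scale=1.0)

auc = roc_auc_score(y_true, scores)
gini = 2 * auc - 1  # standard identity: Gini = 2 * AUC - 1

fpr, tpr, _ = roc_curve(y_true, scores)
ks = float(np.max(tpr - fpr))  # KS = max gap between the cumulative distributions

print(f"AUC={auc:.3f}  Gini={gini:.3f}  KS={ks:.3f}")
```

In monitoring, these two numbers are tracked month over month; a sustained drop against the development-sample baseline is a typical recalibration trigger.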

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote


Job Title: AI Engineer
Job Type: Full-time, Contractor
Location: Remote

About Us:
Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.

Job Summary:
Join our customer's team as an AI Engineer and play a pivotal role in shaping next-generation AI solutions. You will leverage cutting-edge technologies such as GenAI, LLMs, RAG, and LangChain to develop scalable, innovative models and systems. This is a unique opportunity for someone who is passionate about rapidly advancing their AI expertise and thrives in a collaborative, remote-first environment.

Key Responsibilities:
- Design and develop advanced AI models and algorithms using GenAI, LLMs, RAG, LangChain, LangGraph, and AI agent frameworks.
- Implement, deploy, and optimize AI solutions on Amazon SageMaker (an endpoint-invocation sketch follows after this posting).
- Collaborate cross-functionally to integrate AI models into existing platforms and workflows.
- Continuously evaluate the latest AI research and tools to ensure leading-edge technology adoption.
- Document processes, experiments, and model performance with clear and concise written communication.
- Troubleshoot, refine, and scale deployed AI solutions for efficiency and reliability.
- Engage proactively with the customer's team to understand business needs and deliver value-driven AI innovations.

Required Skills and Qualifications:
- Proven hands-on experience with GenAI, Large Language Models (LLMs), and Retrieval-Augmented Generation (RAG) techniques.
- Strong proficiency in frameworks such as LangChain and LangGraph, and in building and debugging AI agents.
- Demonstrated expertise in deploying and managing AI/ML solutions on AWS SageMaker.
- Exceptional written and verbal communication skills, with the ability to explain complex concepts to diverse audiences.
- Ability and eagerness to rapidly learn, adapt, and apply new AI tools and techniques as the field evolves.
- Background in software engineering, computer science, or a related technical discipline.
- Strong problem-solving skills accompanied by a collaborative and proactive mindset.

Preferred Qualifications:
- Experience working with remote or distributed teams across multiple time zones.
- Familiarity with prompt engineering and orchestration of complex AI agent pipelines.
- A portfolio of successfully deployed GenAI solutions in production environments.
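Not part of the listing: a minimal sketch of calling a deployed SageMaker model from Python with boto3, the consumption side of the deployment responsibility above. The region, endpoint name, and payload schema are assumptions that depend on how the model was deployed.

```python
import json

import boto3

runtime = boto3.client("sagemaker-runtime", region_name="ap-south-1")  # assumed region

# Endpoint name and JSON schema are placeholders for a real deployment
# (e.g., a text-generation container behind the endpoint).
response = runtime.invoke_endpoint(
    EndpointName="genai-demo-endpoint",
    ContentType="application/json",
    Body=json.dumps({"inputs": "Summarize our refund policy in one sentence."}),
)

result = json.loads(response["Body"].read())
print(result)
```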

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote


Job Title: AI/ML Developer (5 Years Experience)
Location: Remote
Job Type: Full-time
Experience: 5 years

Job Summary:
We are looking for an experienced AI/ML Developer with at least 5 years of hands-on experience in designing, developing, and deploying machine learning models and AI-driven solutions. The ideal candidate should have strong knowledge of machine learning algorithms, data preprocessing, and model evaluation, and experience with production-level ML pipelines.

Key Responsibilities:
- Model Development: Design, develop, train, and optimize machine learning and deep learning models for classification, regression, clustering, recommendation, NLP, or computer vision tasks.
- Data Engineering: Work with data scientists and engineers to preprocess, clean, and transform structured and unstructured datasets.
- ML Pipelines: Build and maintain scalable ML pipelines using tools such as MLflow, Kubeflow, Airflow, or SageMaker (a minimal Airflow sketch follows after this posting).
- Deployment: Deploy ML models into production using REST APIs, containers (Docker), or cloud services (AWS/GCP/Azure).
- Monitoring and Maintenance: Monitor model performance and implement retraining pipelines or drift detection techniques.
- Collaboration: Work cross-functionally with data scientists, software engineers, and product managers to integrate AI capabilities into applications.
- Research and Innovation: Stay current with the latest advancements in AI/ML and recommend new techniques or tools where applicable.

Required Skills & Qualifications:
- Bachelor's or Master’s degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- Minimum 5 years of experience in AI/ML development.
- Proficiency in Python and ML libraries such as Scikit-learn, TensorFlow, PyTorch, XGBoost, or LightGBM.
- Strong understanding of statistics, data structures, and ML/DL algorithms.
- Experience with cloud platforms (AWS/GCP/Azure) and deploying ML models in production.
- Experience with CI/CD tools and containerization (Docker, Kubernetes).
- Familiarity with SQL and NoSQL databases.
- Excellent problem-solving and communication skills.

Preferred Qualifications:
- Experience with NLP frameworks (e.g., Hugging Face Transformers, spaCy, NLTK).
- Knowledge of MLOps best practices and tools.
- Experience with version control systems like Git.
- Familiarity with big data technologies (Spark, Hadoop).
- Contributions to open-source AI/ML projects or publications in relevant fields.
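For illustration only, assuming Airflow 2.x: a minimal two-task retraining DAG of the kind the ML Pipelines bullet describes. The DAG id, weekly schedule, and task bodies are placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull and clean the training data.
    print("extracting features")

def train():
    # Placeholder: fit and register the model.
    print("training model")

with DAG(
    dag_id="ml_retrain_pipeline",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@weekly",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    train_task = PythonOperator(task_id="train", python_callable=train)
    extract_task >> train_task  # train runs only after extract succeeds
```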

Posted 2 weeks ago

Apply

12.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


About the Role
We are seeking a Director of Software Engineering to lead our engineering team in Noida. This role requires a strategic and hands-on leader with deep expertise in Java and Amazon Web Services (AWS), with experience in modernizing platforms, cloud-native migrations, and hybrid strategies. The ideal candidate will have a strong product mindset, extensive experience in building scalable cloud-native applications, and the ability to drive engineering excellence in a fast-paced environment.

Key Responsibilities
• Technical Leadership: Define and implement best practices for Java-based architectures and scalable backend systems.
• Team Management: Lead, mentor, and grow a high-performing team of software engineers and engineering managers.
• Cloud & Infrastructure: Design, deploy, and optimize AWS-based solutions, leveraging services like EC2, Lambda, S3, RDS, and DynamoDB.
• Performance & Scalability: Ensure high availability, security, and performance of distributed systems on AWS and in our data centers.
• APIs: Architect, design, and document RESTful APIs as a product for both internal and external customers.
• Agile Development: Foster an engineering culture of excellence with a focus on product delivery with quality and technological advantage.
• Technology Roadmap: Stay ahead of industry trends, identifying opportunities for modernization and innovation.
• Stakeholder Collaboration: Work closely with leadership, product, and operations teams to align engineering efforts with business goals.

Required Qualifications
• Experience: 12+ years in software engineering, with at least 5 years in a leadership role.
• Technical Expertise:
  • Strong background in Java, the JDK, and its ecosystem.
  • Hands-on expertise in both data center and AWS architectures, deployments, and automation.
  • Strong experience with SQL/NoSQL databases (Oracle, PostgreSQL, MySQL, DynamoDB).
  • Proficiency in RESTful APIs, event-driven architecture (Kafka, SNS/SQS), and service design.
  • Strong grasp of security best practices, IAM roles, and compliance standards on AWS.
• Leadership & Strategy: Proven track record of scaling engineering teams and aligning technology with business goals.
• Problem-Solving Mindset: Ability to diagnose complex technical issues and optimize outcomes.

Preferred Qualifications
• Experience in high-scale SaaS applications using Java and AWS.
• Knowledge of AI/ML services on AWS (SageMaker, Bedrock) and data engineering pipelines.
• Agile & DevOps: Experience implementing DevOps pipelines, CI/CD, and Infrastructure as Code (Terraform, CloudFormation).
• Background in fintech, e-commerce, or enterprise software is a plus.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Summary:
We are seeking a skilled Data Scientist to join our team and help drive data-driven decision-making through the application of artificial intelligence (AI), machine learning (ML), and big data technologies. You will work with large datasets, build predictive models, and deploy solutions in cloud environments like AWS.

Key Responsibilities:
- Develop and deploy machine learning models and AI solutions for real-world business problems.
- Analyze large and complex datasets using big data tools and technologies (a Spark sketch follows after this posting).
- Use Python to write clean, efficient, and scalable data science code.
- Design and manage data pipelines and workflows using AWS cloud services (e.g., S3, Lambda, SageMaker, Redshift).
- Work with relational and NoSQL databases to extract, clean, and prepare data for analysis.
- Collaborate with cross-functional teams including data engineers, analysts, and business stakeholders.
- Communicate insights and model results through visualizations and reports.

Qualifications:
- Proficiency in Python for data science and machine learning (e.g., pandas, scikit-learn, TensorFlow, PyTorch).
- Experience working with large-scale datasets and big data tools (e.g., Spark, Hadoop).
- Hands-on experience with AWS cloud services for data storage, processing, and model deployment.
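Illustrative only: a minimal PySpark sketch of the large-dataset analysis named above, building per-customer aggregates of the kind that feed a predictive model. The input path and column names are assumptions; an S3 path would work with the same API.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("order-stats").getOrCreate()

# Hypothetical dataset with customer_id and amount columns.
orders = spark.read.parquet("data/orders.parquet")

features = (
    orders.groupBy("customer_id")
    .agg(
        F.count("*").alias("n_orders"),
        F.sum("amount").alias("total_spend"),
        F.avg("amount").alias("avg_order_value"),
    )
)
features.show(5)  # Spark distributes the aggregation across the cluster
```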

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Mohali, Punjab

On-site


Company: Chicmic Studios
Job Role: Python Machine Learning & AI Developer
Experience Required: 3+ years

We are looking for a highly skilled and experienced Python Developer to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential.

Key Responsibilities
- Develop and maintain web applications using Django and Flask frameworks.
- Design and implement RESTful APIs using Django Rest Framework (DRF).
- Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation.
- Build and integrate APIs for AI/ML models into existing systems.
- Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
- Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases (an inference sketch follows after this posting).
- Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization.
- Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.
- Ensure the scalability, performance, and reliability of applications and deployed models.
- Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions.
- Write clean, maintainable, and efficient code following best practices.
- Conduct code reviews and provide constructive feedback to peers.
- Stay up-to-date with the latest industry trends and technologies, particularly in AI/ML.

Required Skills and Qualifications
- Bachelor’s degree in Computer Science, Engineering, or a related field.
- 3+ years of professional experience as a Python Developer.
- Proficient in Python with a strong understanding of its ecosystem.
- Extensive experience with Django and Flask frameworks.
- Hands-on experience with AWS services for application deployment and management.
- Strong knowledge of Django Rest Framework (DRF) for building APIs.
- Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
- Experience with transformer architectures for NLP and advanced AI solutions.
- Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
- Familiarity with MLOps practices for managing the machine learning lifecycle.
- Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
- Excellent problem-solving skills and the ability to work independently and as part of a team.
- Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.

Contact: 9875952836
Office Location: F273, Phase 8B Industrial Area, Mohali, Punjab
Job Type: Full-time
Schedule: Day shift, Monday to Friday
Work Location: In person
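Not from the posting: a minimal Hugging Face Transformers inference sketch for the transformer-architecture bullet above. The model choice (a BERT-distilled sentiment classifier) and example inputs are assumptions.

```python
from transformers import pipeline

# Distilled-BERT sentiment model stands in for the "transformer architectures"
# the listing mentions; any text-classification checkpoint would work here.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

reviews = [
    "The delivery was fast and the food arrived hot.",
    "App crashed twice before I could pay.",
]
for review, result in zip(reviews, classifier(reviews)):
    # Each result is a dict like {"label": "POSITIVE", "score": 0.99}.
    print(f"{result['label']:8s} {result['score']:.2f}  {review}")
```

In a Django/DRF service, a pipeline like this would typically be loaded once at startup and called from an API view, rather than reconstructed per request.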

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Punjab, India

On-site


Key Responsibilities
- Develop and maintain web applications using Django and Flask frameworks.
- Design and implement RESTful APIs using Django Rest Framework (DRF).
- Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation.
- Build and integrate APIs for AI/ML models into existing systems.
- Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
- Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases.
- Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization.
- Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.
- Ensure the scalability, performance, and reliability of applications and deployed models.
- Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions.
- Write clean, maintainable, and efficient code following best practices.
- Conduct code reviews and provide constructive feedback to peers.
- Stay up-to-date with the latest industry trends and technologies, particularly in AI/ML.

Required Skills And Qualifications
- Bachelor's degree in Computer Science or Engineering.
- 3+ years of professional experience as a Python Developer.
- Proficient in Python with a strong understanding of its ecosystem.
- Extensive experience with Django and Flask frameworks.
- Hands-on experience with AWS services for application deployment and management.
- Strong knowledge of Django Rest Framework (DRF) for building APIs.
- Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
- Experience with transformer architectures for NLP and advanced AI solutions.
- Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
- Familiarity with MLOps practices for managing the machine learning lifecycle.
- Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.

(ref:hirist.tech)

Posted 2 weeks ago

Apply

2.0 - 4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Description
Navtech is looking for an AI/ML Engineer to join our growing data science and machine learning team. In this role, you will be responsible for building, deploying, and maintaining machine learning models and pipelines that power intelligent products and data-driven decisions.

Working as an AI/ML Engineer at Navtech, you will:
- Design, develop, and deploy machine learning models for classification, regression, clustering, recommendation, or NLP tasks.
- Clean, preprocess, and analyze large datasets to extract meaningful insights and features.
- Work closely with data engineers to develop scalable and reliable data pipelines.
- Experiment with different algorithms and techniques to improve model performance (a cross-validation sketch follows after this posting).
- Monitor and maintain production ML models, including retraining and model drift detection.
- Collaborate with software engineers to integrate ML models into applications and services.
- Document processes, experiments, and decisions for reproducibility and transparency.
- Stay current with the latest research and trends in machine learning and AI.

Who are we looking for, exactly?
- 2-4 years of hands-on experience in building and deploying ML models in real-world applications.
- Strong knowledge of Python and ML libraries such as Scikit-learn, TensorFlow, PyTorch, XGBoost, or similar.
- Experience with data preprocessing, feature engineering, and model evaluation techniques.
- Solid understanding of ML concepts such as supervised and unsupervised learning, overfitting, regularization, etc.
- Experience working with Jupyter, pandas, NumPy, and visualization libraries like Matplotlib or Seaborn.
- Familiarity with version control (Git) and basic software engineering practices.
- Consistently strong verbal and written communication skills, as well as strong analytical and problem-solving abilities.
- A Master's or Bachelor's (BS) degree in Computer Science, Software Engineering, IT, Technology Management, or a related field, with education throughout in English medium.

We'll REALLY love you if you:
- Have knowledge of cloud platforms (AWS, Azure, GCP) and ML services (SageMaker, Vertex AI, etc.).
- Have knowledge of GenAI prompting and hosting of LLMs.
- Have experience with NLP libraries (spaCy, Hugging Face Transformers, NLTK).
- Have familiarity with MLOps tools and practices (MLflow, DVC, Kubeflow, etc.).
- Have exposure to deep learning and neural network architectures.
- Have knowledge of REST APIs and how to serve ML models (e.g., Flask, FastAPI, Docker).

Why Navtech?
- Performance review and appraisal twice a year.
- Competitive pay package with additional bonus and benefits.
- Work with US, UK & Europe based industry-renowned clients for exponential technical growth.
- Medical insurance cover for self and immediate family.
- Work with a culturally diverse team from different backgrounds.

About us: Navtech is a premier IT software and services provider. Navtech's mission is to increase public cloud adoption and build cloud-first solutions that become trendsetting platforms of the future. We have been recognized as the Best Cloud Service Provider at GoodFirms for ensuring good results with quality services. Here, we strive to innovate and push technology and service boundaries to provide best-in-class technology solutions to our clients at scale. We deliver to our clients globally from our state-of-the-art design and development centers in the US and Hyderabad. We're a fast-growing company with clients in the United States, UK, and Europe. We are also a certified AWS partner.
You will join a team of talented developers, quality engineers, and product managers whose mission is to impact over 100 million people across the world with technological services by the year 2030. (ref:hirist.tech)
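Illustrative only: a minimal sketch of comparing regularization strengths by cross-validation, the overfitting/regularization concepts the requirements above call out. The dataset, C grid, and metric are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2_000, n_features=30, random_state=1)

# Smaller C means stronger L2 regularization, which guards against overfitting;
# 5-fold cross-validation estimates out-of-sample performance for each setting.
for C in (0.01, 0.1, 1.0, 10.0):
    model = make_pipeline(StandardScaler(), LogisticRegression(C=C, max_iter=1000))
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"C={C:<5}  AUC={scores.mean():.3f} (+/- {scores.std():.3f})")
```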

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


CACI India, RMZ Nexity, Tower 30, 4th Floor, Survey No. 83/1, Knowledge City, Raidurg Village, Silpa Gram Craft Village, Madhapur, Serilingampalle (M), Hyderabad, Telangana 500081, India
Req #1097 | 02 May 2025

CACI International Inc is an American multinational professional services and information technology company headquartered in Northern Virginia. CACI provides expertise and technology to enterprise and mission customers in support of national security missions and government transformation for defense, intelligence, and civilian customers. CACI has approximately 23,000 employees worldwide. Headquartered in London, CACI Ltd is a wholly owned subsidiary of CACI International Inc, a publicly listed company on the NYSE with annual revenue in excess of US $6.2bn. Founded in 2022, CACI India is an exciting, growing and progressive business unit of CACI Ltd. CACI Ltd currently has over 2000 intelligent professionals and is now adding many more from our Hyderabad and Pune offices. Through a rigorous emphasis on quality, CACI India has grown considerably to become one of the UK's most well-respected technology centres.

About the Data Platform
The Data Platform will be built and managed "as a Product" to support a Data Mesh organization. The Data Platform focusses on enabling decentralized management, processing, analysis and delivery of data, while enforcing corporate-wide federated governance on data and project environments across business domains. The goal is to empower multiple teams to create and manage high-integrity data and data products that are analytics- and AI-ready, and consumed internally and externally.

What does a Data Infrastructure Engineer do?
A Data Infrastructure Engineer will be responsible for developing, maintaining and monitoring the data platform infrastructure and operations. The infrastructure and pipelines you build will support data processing, data analytics, data science and data management across the CACI business. The data platform infrastructure will conform to a zero-trust, least-privilege architecture, with strict adherence to data and infrastructure governance and control in a multi-account, multi-region AWS environment. You will use Infrastructure as Code and CI/CD to continuously improve, evolve and repair the platform. You will be able to design architectures and create re-usable solutions to reflect the business needs.

Responsibilities will include:
- Collaborating across CACI departments to develop and maintain the data platform
- Building infrastructure and data architectures in CloudFormation and SAM
- Designing and implementing data processing environments and integrations using AWS PaaS such as Glue, EMR, SageMaker, Redshift, Aurora and Snowflake
- Building data processing and analytics pipelines as code, using Python, SQL, PySpark, Spark, CloudFormation, Lambda, Step Functions and Apache Airflow (a Glue job skeleton follows after this posting)
- Monitoring and reporting on the data platform performance, usage and security
- Designing and applying security and access control architectures to secure sensitive data

You will have:
- 3+ years of experience in a Data Engineering role
- Strong experience and knowledge of data architectures implemented in AWS using native AWS services such as S3, DataZone, Glue, EMR, SageMaker, Aurora and Redshift
- Experience administering databases and data platforms
- Good coding discipline in terms of style, structure, versioning, documentation and unit tests
- Strong proficiency in CloudFormation, Python and SQL
- Knowledge and experience of relational databases such as Postgres and Redshift
- Experience using Git for code versioning and lifecycle management
- Experience operating to Agile principles and ceremonies
- Hands-on experience with CI/CD tools such as GitLab
- Strong problem-solving skills and ability to work independently or in a team environment
- Excellent communication and collaboration skills
- A keen eye for detail, and a passion for accuracy and correctness in numbers

Whilst not essential, the following skills would also be useful:
- Experience using Jira, or other agile project management and issue tracking software
- Experience with Snowflake
- Experience with spatial data processing

More about the opportunity
The Data Engineer role is an excellent opportunity, and CACI Services India reward their staff well with a competitive salary and an impressive benefits package, which includes:
- Learning: budget for conferences, training courses and other materials
- Health Benefits: family plan with 4 children and parents covered
- Future You: matched pension and health care package

We understand the importance of getting to know your colleagues. Company meetings are held every quarter, and a training/work brief weekend is held once a year, amongst many other social events.

CACI is an equal opportunities employer. Therefore, we embrace diversity and are committed to a working environment where no one will be treated less favourably on the grounds of their sex, race, disability, sexual orientation, religion, belief or age. We have a Diversity & Inclusion Steering Group and we always welcome new people with fresh perspectives from any background to join the group.

An inclusive and equitable environment enables us to draw on expertise and unique experiences and bring out the best in each other. We champion diversity, inclusion and wellbeing and we are supportive of Veterans and people from a military background. We believe that by embracing diverse experiences and backgrounds, we can collaborate to create better outcomes for our people, our customers and our society.

Other details
Pay Type: Salary

Apply Now
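Not from the posting: a minimal AWS Glue (PySpark) job skeleton for the pipelines-as-code bullet above. The S3 paths, column names, and filter are placeholders; JOB_NAME is supplied by the Glue runtime.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

spark = glue_context.spark_session

# Placeholder pipeline: read raw events, keep completed ones, write
# partitioned parquet (assumes status and event_date columns exist).
events = spark.read.json("s3://example-raw-bucket/events/")
completed = events.filter(events["status"] == "completed")
completed.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-curated-bucket/events/"
)

job.commit()
```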

Posted 2 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


Join us as an MLOps Engineer at Barclays, where you will be responsible for operationalizing cutting-edge machine learning and generative AI solutions, ensuring scalable, secure, and efficient deployment across infrastructure. You will work closely with data scientists, ML engineers, and business stakeholders to build and maintain robust MLOps pipelines, enabling rapid experimentation and reliable production implementation of AI models, including LLMs and real-time analytics systems.

To be successful as an MLOps Engineer you should have experience with:
- Strong programming skills in Python and experience with ML libraries (e.g., scikit-learn, TensorFlow, PyTorch).
- Jenkins, GitHub Actions, or GitLab CI/CD for automating ML pipelines.
- Strong knowledge of Docker and Kubernetes for scalable deployments.
- Deep experience with AWS services (e.g., SageMaker, Bedrock, Lambda, CloudFormation, Step Functions, S3, and IAM); a pipeline sketch follows after this posting.
- Managing infrastructure for training and inference using AWS S3, EC2, EKS, and Step Functions.
- Infrastructure as Code (e.g., Terraform, AWS CDK).
- Model lifecycle management tools (e.g., MLflow, SageMaker Model Registry).
- Strong understanding of DevOps principles applied to ML workflows.

Some other highly valued skills may include:
- Experience with Snowflake and Databricks for collaborative ML development and scalable data processing.
- Knowledge of data engineering tools (e.g., Apache Airflow, Kafka, Spark).
- Understanding of model interpretability, responsible AI, and governance.
- Contributions to open-source MLOps tools or communities.
- Strong leadership, communication, and cross-functional collaboration skills.
- Knowledge of data privacy, model governance, and regulatory compliance in AI systems.
- Exposure to LangChain, vector DBs (e.g., FAISS, Pinecone), and retrieval-augmented generation (RAG) pipelines.

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. This role is based out of Pune.

Purpose of the role
To build and maintain infrastructure platforms and products that support applications and data systems, using hardware, software, networks, and cloud computing platforms as required, with the aim of ensuring that the infrastructure is reliable, scalable, and secure. Ensure the reliability, availability, and scalability of the systems, platforms, and technology through the application of software engineering techniques, automation, and best practices in incident response.

Accountabilities
- Build Engineering: Development, delivery, and maintenance of high-quality infrastructure solutions to fulfil business requirements, ensuring measurable reliability, performance, availability, and ease of use, including the identification of the appropriate technologies and solutions to meet business, optimisation, and resourcing requirements.
- Incident Management: Monitoring of IT infrastructure and system performance to measure, identify, address, and resolve any potential issues, vulnerabilities, or outages; use of data to drive down mean time to resolution.
- Automation: Development and implementation of automated tasks and processes to improve efficiency and reduce manual intervention, utilising software scripting/coding disciplines.
Security: Implementation of a secure configuration and measures to protect infrastructure against cyber-attacks, vulnerabilities, and other security threats, including protection of hardware, software, and data from unauthorised access. Teamwork: Cross-functional collaboration with product managers, architects, and other engineers to define IT Infrastructure requirements, devise solutions, and ensure seamless integration and alignment with business objectives via a data driven approach. Learning: Stay informed of industry technology trends and innovations, and actively contribute to the organization's technology communities to foster a culture of technical excellence and growth. Assistant Vice President Expectations To advise and influence decision making, contribute to policy development and take responsibility for operational effectiveness. Collaborate closely with other functions/ business divisions. Lead a team performing complex tasks, using well developed professional knowledge and skills to deliver on work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraisal of performance relative to objectives and determination of reward outcomes If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR for an individual contributor, they will lead collaborative assignments and guide team members through structured assignments, identify the need for the inclusion of other areas of specialisation to complete assignments. They will identify new directions for assignments and/ or projects, identifying a combination of cross functional methodologies or practices to meet required outcomes. Consult on complex issues; providing advice to People Leaders to support the resolution of escalated issues. Identify ways to mitigate risk and developing new policies/procedures in support of the control and governance agenda. Take ownership for managing risk and strengthening controls in relation to the work done. Perform work that is closely related to that of other areas, which requires understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Collaborate with other areas of work, for business aligned support areas to keep up to speed with business activity and the business strategy. Engage in complex analysis of data from multiple sources of information, internal and external sources such as procedures and practises (in other areas, teams, companies, etc).to solve problems creatively and effectively. Communicate complex information. 'Complex' information could include sensitive information or information that is difficult to communicate because of its content or its audience. Influence or convince stakeholders to achieve outcomes. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave. 
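The model-lifecycle tooling this role names (MLflow alongside the SageMaker Model Registry) can be illustrated with a minimal sketch, assuming a reachable MLflow tracking server; the tracking URI, experiment name, and registered model name below are placeholders, not Barclays systems:

```python
# Minimal sketch: track and register a model with MLflow.
# Assumes an MLflow tracking server is reachable; URI and names are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

mlflow.set_tracking_uri("http://localhost:5000")  # placeholder tracking server
mlflow.set_experiment("credit-risk-poc")          # hypothetical experiment name

X, y = make_classification(n_samples=500, n_features=10, random_state=42)

with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X, y)
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Registering under a name places the version in the registry for staged rollout.
    mlflow.sklearn.log_model(model, "model", registered_model_name="credit-risk-model")
```

Registering every candidate version this way is what makes the A/B testing and controlled promotion that MLOps pipelines rely on auditable.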

Posted 2 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

About Deepkore
Deepkore lets you transform ideas into powerful enterprise applications that adapt to your business needs. One platform. No coding. Completely hassle-free. We're revolutionizing how businesses build and deploy enterprise applications through our no-code platform that combines intuitive design with AI-powered intelligence. Transform your business and boost productivity with Deepkore—effortlessly build enterprise apps that keep pace with your business's rapid growth, all without writing a single line of code. You'll join our growing engineering team as we enhance our platform with cutting-edge AI capabilities that make application building even more intelligent and intuitive for our enterprise customers.

The Role
Join our AI team as an AI Intern and help build the next generation of intelligent features for Deepkore's no-code enterprise platform. You'll work at the intersection of machine learning and full-stack development, creating AI solutions that make application building more intuitive and powerful for enterprise users. This isn't just about building models—you'll see your work integrated into a platform that transforms how businesses create and deploy applications.

What You'll Do
Core Responsibilities
Build & Deploy ML Models: Develop recommendation engines, predictive analytics models, and NLP features for our core platform
Data Analysis & Insights: Analyze user behavior patterns and product metrics to identify AI enhancement opportunities
Full-Stack Integration: Collaborate with our MERN stack team to seamlessly integrate AI features into our web application
Prototype & Experiment: Rapidly test new AI-driven features using A/B testing and user feedback
Documentation & Presentation: Create technical documentation and present findings to cross-functional teams

Specific Projects You Might Work On
Smart App Generation: AI-powered suggestions for app structures and workflows based on business requirements
Intelligent Component Recommendations: ML models that suggest optimal UI components and data flows (a baseline sketch follows this listing)
Natural Language App Building: NLP interfaces that convert business descriptions into functional app prototypes
Automated Testing & Optimization: AI systems that test and optimize app performance automatically
Smart Data Integration: ML-powered connectors that intelligently map and transform enterprise data sources

What We're Looking For
Required Qualifications
Currently pursuing a B.E. in Computer Science with Artificial Intelligence
Strong Python Skills: Proficient with scikit-learn, pandas, numpy, matplotlib/seaborn
ML Fundamentals: Understanding of supervised/unsupervised learning, model evaluation, and feature engineering
Problem-Solving Mindset: Ability to break down complex problems and iterate quickly
Communication Skills: Can explain technical concepts to both technical and non-technical stakeholders

Highly Preferred (MERN Stack Knowledge)
React: Component-based development, hooks, state management
Node.js/Express: RESTful API development and server-side logic
MongoDB: Database design and querying for ML applications
Full-Stack Understanding: How ML models integrate with web applications

Bonus Points
Experience with TensorFlow, PyTorch, or Hugging Face Transformers
Knowledge of cloud platforms (AWS SageMaker, GCP AI Platform, Azure ML)
Familiarity with Docker, Git, and CI/CD workflows
Previous internship or project experience in production ML systems

Technical Skills
Production ML deployment and monitoring
Full-stack integration of AI features
Modern MLOps practices and tools
Agile development in a fast-paced environment

Ready to Apply?
Fill in this form or attach your resume: https://forms.office.com/r/bQ2YEL1wXz
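As a rough sketch of the "Intelligent Component Recommendations" project described above, assuming a simple TF-IDF baseline rather than Deepkore's actual approach, a scikit-learn prototype might look like this (the component catalog is invented for illustration):

```python
# Minimal sketch: recommend UI components from a business description.
# TF-IDF + cosine-similarity baseline; the component catalog is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

components = {
    "data_table": "tabular grid for listing records with sorting and filtering",
    "kanban_board": "drag and drop cards for tracking workflow stages",
    "form_builder": "input form with validation for capturing structured data",
    "dashboard_chart": "charts and KPIs for visualizing metrics over time",
}

vectorizer = TfidfVectorizer()
catalog_matrix = vectorizer.fit_transform(components.values())

def recommend(description: str, top_k: int = 2) -> list[str]:
    """Return the component names most similar to the description."""
    query = vectorizer.transform([description])
    scores = cosine_similarity(query, catalog_matrix)[0]
    ranked = sorted(zip(components, scores), key=lambda p: p[1], reverse=True)
    return [name for name, _ in ranked[:top_k]]

print(recommend("I need to track sales leads through pipeline stages"))
```

A production version would likely replace TF-IDF with learned embeddings, but the retrieval-and-rank shape of the problem stays the same.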

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Mohali district, India

On-site

Linkedin logo

About the Role:
We are looking for a highly experienced and innovative Senior DevSecOps & Solution Architect to lead the design, implementation, and security of modern, scalable solutions across cloud platforms. The ideal candidate will bring a unique blend of DevSecOps practices, solution architecture, observability frameworks, and AI/ML expertise, with hands-on experience in data and workload migration from on-premises to cloud or cloud-to-cloud. You will play a pivotal role in transforming and securing our enterprise-grade infrastructure, automating deployments, designing intelligent systems, and implementing monitoring strategies for mission-critical applications.

DevSecOps Leadership:
• Own CI/CD strategy, automation pipelines, IaC (Terraform, Ansible), and container orchestration (Docker, Kubernetes, Helm).
• Champion DevSecOps best practices, embedding security into every stage of the SDLC.
• Manage secrets, credentials, and secure service-to-service communication using Vault, AWS Secrets Manager, or Azure Key Vault (a Secrets Manager sketch follows this listing).
• Conduct infrastructure hardening, automated compliance checks (CIS, SOC 2, ISO 27001), and vulnerability management.

Solution Architecture:
• Architect scalable, fault-tolerant, cloud-native solutions (AWS, Azure, or GCP).
• Design end-to-end data flows, microservices, and serverless components.
• Lead migration strategies for on-premises to cloud or cloud-to-cloud transitions, ensuring minimal downtime and security continuity.
• Create technical architecture documents, solution blueprints, BOMs, and migration playbooks.

Observability & Monitoring:
• Implement modern observability stacks: OpenTelemetry, ELK, Prometheus/Grafana, DataDog, or New Relic.
• Define golden signals (latency, errors, saturation, traffic) and enable APM, RUM, and log aggregation.
• Design SLOs/SLIs and establish proactive alerting for high-availability environments.

AI/ML Engineering & Integration:
• Integrate AI/ML into existing systems for intelligent automation, data insights, and anomaly detection.
• Collaborate with data scientists to operationalize models using MLflow, SageMaker, Azure ML, or custom pipelines.
• Work with LLMs and foundational models (OpenAI, Hugging Face, Bedrock) for POCs or production-ready features.

Migration & Transformation:
• Lead complex data migration projects across heterogeneous environments: legacy systems to cloud, or inter-cloud (e.g., AWS to Azure).
• Ensure data integrity, encryption, schema mapping, and downtime minimization throughout migration efforts.
• Use tools such as AWS DMS, Azure Data Factory, GCP Transfer Services, or custom scripts for lift-and-shift and re-architecture.

Required Skills & Qualifications:
• 10+ years in DevOps, cloud architecture, or platform engineering roles.
• Expert in AWS and/or Azure, including IAM, VPC, EC2, Lambda/Functions, S3/Blob, API Gateway, and container services (EKS/AKS).
• Proficient in infrastructure as code: Terraform, CloudFormation, Ansible.
• Hands-on with Kubernetes (k8s), Helm, GitOps workflows.
• Strong programming/scripting skills in Python, Shell, or PowerShell.
• Practical knowledge of AI/ML tools, libraries (TensorFlow, PyTorch, scikit-learn), and model lifecycle management.
• Demonstrated success in large-scale migrations and hybrid architecture.
• Solid understanding of application security, identity federation, and compliance.
• Familiar with agile practices, project estimation, and stakeholder communication.

Nice to Have:
• Certifications: AWS Solutions Architect, Azure Architect, Certified Kubernetes Administrator, or similar.
• Experience with Kafka, RabbitMQ, event-driven architecture.
• Exposure to n8n, OpenFaaS, or AI agents.
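To make the secrets-management expectation concrete, here is a minimal, hedged sketch of fetching database credentials from AWS Secrets Manager with boto3; the secret name and region are placeholders, and Vault or Azure Key Vault would follow an analogous pattern:

```python
# Minimal sketch: retrieve service credentials from AWS Secrets Manager.
# Assumes AWS credentials are configured; secret name and region are illustrative.
import json
import boto3

def get_db_credentials(secret_name: str = "prod/migration/db",
                       region: str = "us-east-1") -> dict:
    """Fetch and parse a JSON secret; never hard-code credentials in code or IaC."""
    client = boto3.client("secretsmanager", region_name=region)
    response = client.get_secret_value(SecretId=secret_name)
    return json.loads(response["SecretString"])

creds = get_db_credentials()
# Use creds["username"] and creds["password"] to open connections at runtime only.
```

Resolving secrets at runtime rather than baking them into images or Terraform state is the core DevSecOps habit this bullet is testing for.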

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Join our Team
Ericsson's R&D Data team is seeking a motivated and self-driven Machine Learning Engineer with a strong foundation in designing, developing, and deploying machine learning models. This role sits at the intersection of data science, software engineering, and data engineering, focused on building scalable AI solutions that integrate into real-world production systems. You'll join a team of high-performing engineers building end-to-end SaaS solutions, where adaptability, a data-centric mindset, and strong technical skills are key to success.

Responsibilities:
Assist in designing, developing, and optimizing machine learning models for real-world applications.
Contribute to the ML model lifecycle: data preprocessing, feature engineering, training, evaluation, and deployment.
Work with AWS SageMaker and other cloud-based tools to train and deploy models.
Collaborate with data engineers to build reliable and scalable data pipelines for ML training and inference.
Work with event streaming platforms (e.g., Amazon MSK or equivalent) for real-time data ingestion and processing (a streaming-inference sketch follows this listing).
Partner with senior engineers and data scientists to align ML solutions with business and product goals.
Help integrate models into production environments and monitor their performance.
Support automation of model retraining and deployment using MLOps tools and CI/CD pipelines.
Stay up to date with the latest ML and data engineering technologies to bring best practices to the team.

Requirements:
5+ years of experience in machine learning, deep learning, or AI-related fields.
Bachelor's degree in Computer Science, AI, Data Science, or a related field.
Proficiency in Python and experience with ML frameworks such as TensorFlow, PyTorch, or Scikit-learn.
Hands-on experience with cloud services (e.g., AWS SageMaker, S3, Lambda) is a plus.
Familiarity with ML model architectures (e.g., CNNs, RNNs, Transformers) and training techniques.
Exposure to MLOps practices (e.g., CI/CD, monitoring, model versioning).
Experience working with real-time data streaming platforms (e.g., Amazon MSK, or similar).
Familiarity with data engineering tools and practices (e.g., building ETL pipelines, using Spark, Airflow, or similar).
Strong analytical thinking and a collaborative mindset with a desire to grow technically.
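As a hedged sketch of the real-time ingestion-plus-inference pattern this role describes (Kafka-compatible streaming such as Amazon MSK feeding a deployed model), assuming the kafka-python client and a pre-existing SageMaker endpoint, with all names as placeholders:

```python
# Minimal sketch: score streaming events against a deployed SageMaker endpoint.
# Topic, broker address, and endpoint name are placeholders, not Ericsson systems.
import json
import boto3
from kafka import KafkaConsumer

runtime = boto3.client("sagemaker-runtime", region_name="eu-north-1")

consumer = KafkaConsumer(
    "telemetry-events",                      # hypothetical MSK topic
    bootstrap_servers=["broker:9092"],       # placeholder broker address
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    features = message.value["features"]
    response = runtime.invoke_endpoint(
        EndpointName="anomaly-detector",     # hypothetical endpoint name
        ContentType="application/json",
        Body=json.dumps({"instances": [features]}),
    )
    prediction = json.loads(response["Body"].read())
    print(prediction)                        # in practice, route to a sink or alert
```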

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Job Title: Lead Consultant - Data Engineering
Career Level - E

Introduction to role
Join our Operations IT team, a global IT capability supporting the Global Operations organization. We partner with various Operations capability areas such as Pharmaceutical Technology Development, Manufacturing & Global Engineering, Quality Control, Sustainability, Supply Chain, Logistics & Global External Sourcing & Procurement. Our work directly impacts patients by redefining our ability to develop life-changing medicines. We are passionate about impacting lives through data, analytics, AI, machine learning, and more.

As part of the Data Analytics and AI (DA&AI) organization within Operations IT, you will deliver brand-new Data Analytics and AI solutions for various Operations capability areas. Our work transforms our ability to develop life-changing medicines, empowering the business to perform at its peak. We combine powerful science with leading digital technology platforms and data.

As the Technical Lead, you will drive the successful technical delivery of products/projects aligned with business goals. You will use deep technical expertise, lead and coordinate across all technical workstreams (Data & Cloud Engineering, Software Engineering, AI, and other relevant tech stacks), and work closely with relevant team members both internally and externally.

Accountabilities
Work closely with product managers, architects, and delivery managers to provide technical knowledge and support the team.
Own the technical vision of the product and collaborate with multi-functional teams to define designs that meet product objectives and success criteria.
Compile and maintain detailed technical designs, refine user Epics/user stories, and ensure adherence to timelines and resource allocations.
Identify technical risks and ensure visibility and progress toward mitigating these risks.
Lead, direct, and mentor technical teams while fostering a culture of collaboration, innovation, and accountability.
Define and communicate governance frameworks specific to product development.
Deliver components of the Service Acceptance Criteria as part of the Service Introduction deliverables.
Ensure alignment to regulatory requirements, data security, and industry standards throughout the development lifecycle.
Lead DevOps and DataOps teams to ensure adherence to coding standards, security, and performance requirements.
Ensure data pipelines and solutions adhere to the FAIR principles to enhance data usability and sharing.

Essential Skills/Experience
Minimum 10+ years of experience in developing and delivering software engineering and data engineering solutions
Deep technical expertise in Data Engineering, Software Engineering, and Cloud Engineering, and a good understanding of AI Engineering
Strong understanding of DevOps and DataOps ways of working
Proven expertise in product development and/or product management
Offer technical thought leadership for Data and Analytics and AI products
Effective communication, partner management, problem-solving skills, and team collaboration
Hands-on experience working in end-to-end product development with an innovation mindset
Knowledge of Data Mesh and Data Product concepts
Experienced in Agile ways of working
Collaborative approach to engineering
Data Engineering & ETL: Design, implement, and optimize data pipelines using industry-leading ETL tools
Cloud & DevOps: Architect and manage scalable, secure cloud environments using AWS compute services
Scheduling & Orchestration: Lead the orchestration of sophisticated workflows with Apache Airflow (a minimal DAG sketch follows this listing)
DataOps & Automation: Champion the adoption of DataOps principles using tools like DataOps.Live
Data Storage & Management: Oversee the design and management of data storage systems, including Snowflake
Business Intelligence & Reporting: Own the development of actionable insights using Power BI
Full-Stack Software Development: Build and maintain end-to-end software applications using NodeJS for backend development
AI & Generative AI Services: Implement and manage AI/ML models using Amazon SageMaker
Proficient in multiple coding languages such as Python
Knowledge of database technologies, both SQL and NoSQL
Familiarity with agile methodologies
Previous involvement in a large multinational company or pharmaceutical environment
Strong leadership and mentoring skills

Desirable Skills/Experience
Bachelor's or master's degree in a relevant field such as Health Sciences, Life Sciences, Data Management, Information Technology, or equivalent experience
Experience working in the pharmaceutical industry
Certification in AWS Cloud or any data engineering or software engineering-related certification

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and daring world.

At AstraZeneca, we are committed to making a difference beyond patients by pioneering our sustainability strategy. You will have the opportunity to be a key contributor to Zero Carbon by 2025 and carbon negative across the entire value chain by 2030. Everyone can chip in towards our collective legacy of doing good for people, the environment, and society. Ready to make a significant impact? Apply now!
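Since the role leads with Apache Airflow orchestration alongside Snowflake and SageMaker, here is a minimal, hedged DAG sketch (Airflow 2.4+ syntax; the DAG id, schedule, and task bodies are illustrative, and a real pipeline would call provider operators for Snowflake or SageMaker rather than plain Python callables):

```python
# Minimal sketch: a daily ETL-then-train workflow in Apache Airflow.
# DAG id, schedule, and task logic are placeholders for illustration.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load():
    print("Pull source data and load the staging tables")

def train_model():
    print("Kick off a model training job on the curated data")

with DAG(
    dag_id="ops_data_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    etl = PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    etl >> train  # train only runs after the ETL step succeeds
```

The `>>` dependency operator is what encodes the orchestration logic; Airflow then handles scheduling, retries, and backfills around it.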

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

India

Remote

Linkedin logo

Job Title: MLOps Engineer (Remote)
Experience: 5+ Years
Location: Remote
Type: Full-time

About the Role:
We are seeking an experienced MLOps Engineer to design, implement, and maintain scalable machine learning infrastructure and deployment pipelines. You will work closely with Data Scientists and DevOps teams to operationalize ML models, optimize performance, and ensure seamless CI/CD workflows in cloud environments (Azure ML/AWS/GCP).

Key Responsibilities:
✔ ML Model Deployment:
Containerize ML models using Docker and deploy on Kubernetes
Build end-to-end ML deployment pipelines for TensorFlow/PyTorch models
Integrate with Azure ML (or AWS SageMaker/GCP Vertex AI)
✔ CI/CD & Automation:
Implement GitLab CI/CD pipelines for automated testing and deployment
Manage version control using Git and enforce best practices
✔ Monitoring & Performance:
Set up Prometheus + Grafana dashboards for model performance tracking (a metrics sketch follows this listing)
Configure alerting systems for model drift, latency, and errors
Optimize infrastructure for scalability and cost-efficiency
✔ Collaboration:
Work with Data Scientists to productionize prototypes
Document architecture and mentor junior engineers

Skills & Qualifications:
Must-Have:
5+ years in MLOps/DevOps, with 6+ years total experience
Expertise in Docker, Kubernetes, CI/CD (GitLab CI/CD), Linux
Strong Python scripting for automation (PySpark a plus)
Hands-on with Azure ML (or AWS/GCP) for model deployment
Experience with ML model monitoring (Prometheus, Grafana, ELK Stack)
Nice-to-Have:
Knowledge of MLflow, Kubeflow, or TF Serving
Familiarity with NVIDIA Triton Inference Server
Understanding of data pipelines (Airflow, Kafka)

Why Join Us?
💻 100% Remote with flexible hours
🚀 Work on cutting-edge ML systems at scale
📈 Competitive salary + growth opportunities
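As a hedged sketch of the monitoring responsibility above (Prometheus-format metrics that a Grafana dashboard could chart), using the prometheus_client library with invented metric names and a stand-in predict function:

```python
# Minimal sketch: expose inference latency and error counts for Prometheus to scrape.
# Metric names and the fake predict() are illustrative only.
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

PREDICTION_LATENCY = Histogram("model_prediction_latency_seconds",
                               "Time spent serving one prediction")
PREDICTION_ERRORS = Counter("model_prediction_errors_total",
                            "Count of failed predictions")

def predict(payload: dict) -> float:
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for real model inference
    return 0.5

start_http_server(8000)  # metrics served at http://localhost:8000/metrics
while True:
    with PREDICTION_LATENCY.time():
        try:
            predict({"feature": 1.0})
        except Exception:
            PREDICTION_ERRORS.inc()
    time.sleep(1)
```

Grafana panels and alert rules would then sit on top of these series, e.g. alerting when the error counter's rate rises or latency percentiles degrade.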

Posted 2 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Technical Architect - Python & AI/ML

About Us:
Headquartered in Sunnyvale, with offices in Dallas & Hyderabad, Fission Labs is a leading software development company specializing in crafting flexible, agile, and scalable solutions that propel businesses forward. With a comprehensive range of services, including product development, cloud engineering, big data analytics, QA, DevOps consulting, and AI/ML solutions, we empower clients to achieve sustainable digital transformation that aligns seamlessly with their business goals.

Key Responsibilities:
Design and architect complex Generative AI solutions using AWS technologies
Develop advanced AI architectures incorporating state-of-the-art GenAI technologies
Create and implement Retrieval Augmented Generation (RAG) and GraphRAG solutions (a minimal RAG sketch follows this listing)
Architect scalable AI systems using AWS Bedrock and SageMaker
Design and implement agentic AI systems with advanced reasoning capabilities
Develop custom AI solutions leveraging vector databases and advanced machine learning techniques
Evaluate and integrate emerging GenAI technologies and methodologies

Technical Expertise Requirements
Generative AI Technologies: expert-level understanding of:
Retrieval Augmented Generation (RAG)
GraphRAG methodologies
LoRA (Low-Rank Adaptation) techniques
Vector database architectures
Agentic AI design principles

AWS AI Services: comprehensive expertise in:
AWS Bedrock
Amazon SageMaker
The AWS AI/ML services ecosystem
Cloud-native AI solution design

Technical Skills
Advanced Python programming for AI/ML applications
Deep understanding of large language models (LLMs), machine learning architectures, AI model fine-tuning techniques, prompt engineering, and AI system design and integration

Core Competencies
Advanced AI solution architecture
Machine learning model optimization
Cloud-native AI system design
Performance tuning of GenAI solutions
Enterprise AI strategy development

Technical Stack
Programming languages: Python (required)
Cloud platform: AWS
AI technologies: Bedrock, SageMaker, vector databases
Machine learning frameworks: PyTorch, TensorFlow, Hugging Face
AI integration tools: LangChain, LlamaIndex

We Offer:
Opportunity to work on business challenges from top global clientele with high impact
Vast opportunities for self-development, including online university access and sponsored certifications
Sponsored tech talks, industry events & seminars to foster innovation and learning
Generous benefits package including health insurance, retirement benefits, flexible work hours, and more
Supportive work environment with forums to explore passions beyond work

This role presents an exciting opportunity for a motivated individual to contribute to the development of cutting-edge solutions while advancing their career in a dynamic and collaborative environment.
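As a hedged, minimal illustration of the RAG pattern this role centers on, using a TF-IDF retriever purely as a stand-in (production systems would use embedding models and a vector database, with the prompt sent to an LLM via, e.g., Bedrock):

```python
# Minimal RAG sketch: retrieve the most relevant passage, then build an LLM prompt.
# TF-IDF stands in for embeddings; the document set and prompt are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available 24/7 through chat and email.",
    "Enterprise plans include a dedicated solutions architect.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

def build_prompt(question: str, top_k: int = 1) -> str:
    """Retrieve the top-k passages and splice them into a grounded prompt."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    context = "\n".join(documents[i] for i in scores.argsort()[::-1][:top_k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# The resulting prompt would be sent to an LLM for the grounded answer.
print(build_prompt("How long do customers have to return an item?"))
```

Grounding generation in retrieved context is what distinguishes RAG from plain prompting; GraphRAG extends the same idea with graph-structured retrieval.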

Posted 2 weeks ago

Apply

12.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

Job description
Job role: AI/ML Technical Architect
Experience: More than 12 years of experience
Location: Noida
Mode of Work: Hybrid
Shift: 2 PM to 10 PM

Key Responsibilities:
Build scalable AI platforms that are customer-facing.
Evangelize the platform with customers and internal stakeholders.
Ensure platform scalability, reliability, and performance to meet business needs.
Machine Learning Pipeline Design:
Design ML pipelines for experiment management, model management, feature management, and model retraining.
Implement A/B testing of models.
Design APIs for model inferencing at scale.
Proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI.
LLM Serving and GPU Architecture:
Serve as an SME in LLM serving paradigms (a vLLM serving sketch follows this listing).
Possess deep knowledge of GPU architectures.
Expertise in distributed training and serving of large language models.
Proficient in model and data parallel training using frameworks like DeepSpeed and serving frameworks like vLLM.
Model Fine-Tuning and Optimization:
Demonstrate proven expertise in model fine-tuning and optimization techniques.
Achieve better latencies and accuracies in model results.
Reduce training and resource requirements for fine-tuning LLM and LVM models.
LLM Models and Use Cases:
Have extensive knowledge of different LLM models.
Provide insights on the applicability of each model based on use cases.
Proven experience in delivering end-to-end solutions from engineering to production for specific customer use cases.
DevOps and LLMOps Proficiency:
Proven expertise in DevOps and LLMOps practices.
Knowledgeable in Kubernetes, Docker, and container orchestration.
Deep understanding of LLM orchestration frameworks like Flowise, LangFlow, and LangGraph.

Skill Matrix
LLM: Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama
LLM Ops: MLflow, LangChain, LangGraph, LangFlow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, Azure AI
Databases/Data warehouses: DynamoDB, Cosmos, MongoDB, RDS, MySQL, PostgreSQL, Aurora, Spanner, Google BigQuery
Cloud Knowledge: AWS/Azure/GCP
DevOps (Knowledge): Kubernetes, Docker, FluentD, Kibana, Grafana, Prometheus
Cloud Certifications (Bonus): AWS Professional Solution Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert
Proficient in Python, SQL, JavaScript

About Company:
Pattem Group is a conglomerate holding company headquartered in Bangalore, India. The companies under the Pattem Group umbrella represent the essence of software product development, catering to global Fortune 500 companies and innovative startups.
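Given the role's emphasis on LLM serving paradigms and frameworks like vLLM, here is a minimal, hedged sketch of offline batch generation with vLLM; the model name is a placeholder, a GPU host with vllm installed is assumed, and serving at scale would more typically use vLLM's OpenAI-compatible server:

```python
# Minimal sketch: batched text generation with vLLM.
# Model id is illustrative; requires a GPU host with vllm installed.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # placeholder model id
params = SamplingParams(temperature=0.7, max_tokens=128)

prompts = [
    "Summarize the benefits of continuous batching for LLM inference.",
    "Explain tensor parallelism in one paragraph.",
]

# vLLM batches these prompts internally for high GPU utilization.
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```

Continuous batching and paged KV-cache management are the serving-paradigm ideas an SME in this area would be expected to discuss around code like this.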

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote

Linkedin logo

About the company
MindBrain is a dynamic software company that integrates innovation, education, and strategic workforce solutions. As pioneers in cutting-edge technologies, we shape the future of digital transformation. Our commitment to education empowers individuals with the skills needed to thrive in a rapidly evolving landscape. Additionally, we connect businesses with the right talent at the right time, driving success through impactful collaborations.

About the role
MindBrain is seeking a highly experienced Senior Python Developer for a remote, contract-based position. This role is ideal for professionals passionate about AI/ML, cloud technologies, and building scalable backend solutions. You will be responsible for leading back-end development, contributing to data science initiatives, and architecting robust AI-driven systems.

Responsibilities
Lead back-end web development using Python and object-oriented programming principles
Develop and manage robust and scalable applications and services
Utilize both SQL and NoSQL databases effectively
Work with data science libraries such as Pandas, Scikit-learn, TensorFlow, and PyTorch
Leverage cloud-based AI/ML platforms including AWS (SageMaker, Bedrock), GCP (Vertex AI, Gemini), Azure AI Studio, and OpenAI
Deploy and manage ML models in cloud environments
Fine-tune models using a variety of AI/ML techniques and tools (a toy fine-tuning sketch follows this listing)
Build and manage agentic AI workflows
Stay current with evolving trends in AI/ML and incorporate the latest advancements into development strategies
Architect intelligent systems that utilize statistical modeling, machine learning, and deep learning for business forecasting and optimization

Qualifications
Fluency in Python and deep experience with SQL and NoSQL databases
Proficiency with data science and deep learning libraries: Pandas, Scikit-learn, PyTorch, TensorFlow
Experience deploying models on platforms like AWS SageMaker, GCP Vertex AI, Azure AI Studio, and OpenAI
Hands-on experience in fine-tuning AI models and building agentic AI workflows
Ability to architect scalable AI solutions based on business requirements
Strong understanding of statistical modeling, machine learning, and deep learning concepts
Self-motivated with a strong desire to stay current with advancements in the AI domain
Excellent problem-solving, communication, and collaboration skills

Equal opportunity
MindBrain is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all team members.
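As a hedged, toy-scale sketch of the fine-tuning work mentioned above, here is a plain PyTorch loop on synthetic data illustrating the common freeze-the-backbone, train-the-head pattern; real fine-tuning of foundation models would go through the platforms listed (e.g., SageMaker or Vertex AI):

```python
# Minimal sketch: fine-tune a small classification head in PyTorch on synthetic data.
# The architecture and data are toys; the pattern (freeze base, train head) is the point.
import torch
from torch import nn

torch.manual_seed(0)
X = torch.randn(256, 16)                      # synthetic "embeddings"
y = (X.sum(dim=1) > 0).long()                 # synthetic labels

base = nn.Linear(16, 8)                       # stand-in for a pretrained backbone
for p in base.parameters():
    p.requires_grad = False                   # freeze the backbone

head = nn.Linear(8, 2)                        # the layer we fine-tune
model = nn.Sequential(base, nn.ReLU(), head)

optimizer = torch.optim.Adam(head.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print("final loss:", loss.item())
```

Techniques such as LoRA apply the same principle at LLM scale: most weights stay frozen while a small number of adapter parameters are trained.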

Posted 2 weeks ago

Apply

Exploring SageMaker Jobs in India

SageMaker is a rapidly growing field in India, with many companies looking to hire professionals with expertise in this area. Whether you are a seasoned professional or a newcomer to the tech industry, there are plenty of opportunities waiting for you in the SageMaker job market.

Top Hiring Locations in India

If you are looking to land a SageMaker job in India, here are the top 5 cities where companies are actively hiring for roles in this field:

  • Bangalore
  • Hyderabad
  • Pune
  • Mumbai
  • Chennai

Average Salary Range

The salary range for SageMaker professionals in India can vary based on experience and location. On average, entry-level professionals can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.

Career Path

In the SageMaker field, a typical career progression may look like this:

  • Junior SageMaker Developer
  • SageMaker Developer
  • Senior SageMaker Developer
  • SageMaker Tech Lead

Related Skills

In addition to expertise in SageMaker, professionals in this field are often expected to have knowledge of the following skills:

  • Machine Learning
  • Data Science
  • Python programming
  • Cloud computing (AWS)
  • Deep learning

Interview Questions

Here is a sample of interview questions that you may encounter when applying for SageMaker roles, categorized by difficulty level:

  • Basic:
  • What is Amazon SageMaker?
  • How does SageMaker differ from traditional machine learning?
  • What is a SageMaker notebook instance?

  • Medium:
  • How do you deploy a model in SageMaker? (see the deployment sketch below)
  • Can you explain the process of hyperparameter tuning in SageMaker?
  • What is the difference between SageMaker Ground Truth and SageMaker Processing?
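For the deployment question, a hedged sketch with the SageMaker Python SDK is one common pattern, not the only answer; the role ARN, S3 path, and container image below are placeholders you must supply:

```python
# Minimal sketch: deploy a trained model artifact to a real-time SageMaker endpoint.
# The role ARN, S3 path, and image URI are placeholders for your own resources.
from sagemaker.model import Model

model = Model(
    image_uri="<inference-container-image-uri>",
    model_data="s3://<bucket>/model/model.tar.gz",
    role="arn:aws:iam::<account-id>:role/<sagemaker-execution-role>",
)

# deploy() provisions the endpoint; invoke it later via the sagemaker-runtime client.
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```

For the tuning question, the same SDK offers HyperparameterTuner, which wraps an estimator, searches ranges such as ContinuousParameter(0.01, 0.2), and optimizes a chosen objective metric across parallel training jobs.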

  • Advanced:
  • How would you handle model drift in a SageMaker deployment? (a simple drift check is sketched below)
  • Can you compare SageMaker with other machine learning platforms in terms of scalability and flexibility?
  • How do you optimize a SageMaker model for cost efficiency?
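For the drift question, SageMaker Model Monitor is the managed option; as a hedged, framework-free illustration of the underlying idea, a two-sample Kolmogorov-Smirnov test can flag when a live feature's distribution departs from training (the data here is synthetic; in production the samples would come from captured endpoint traffic):

```python
# Minimal sketch: flag distribution drift on one feature with a KS test.
# Data is synthetic; in production the samples come from captured traffic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)
live_feature = rng.normal(loc=0.4, scale=1.0, size=1000)  # shifted: drifted data

statistic, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:
    print(f"Drift detected (KS={statistic:.3f}); consider retraining or alerting.")
else:
    print("No significant drift detected.")
```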

Closing Remark

As you explore opportunities in the SageMaker job market in India, remember to hone your skills, stay updated with industry trends, and approach interviews with confidence. With the right preparation and mindset, you can land your dream job in this exciting and evolving field. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies