
2538 SageMaker Jobs

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 years

10 Lacs

India

On-site

MI Teams – Sr. AI Engineer

Education: BE/BTech or ME in Information Technology / Computer Science Engineering. Certifications in Machine Learning / Artificial Intelligence preferred.

Experience & Skills Required (3–5 Years): Proven expertise in AI/ML model development and deployment (Regression, Classification, Neural Networks). Hands-on experience with Deep Learning (DL), Natural Language Processing (NLP), and LLMs (GPT, LLaMA, Hugging Face models, etc.). End-to-end experience in AI/ML projects: data preprocessing, model training, testing, and cloud deployment. Strong programming in Python, Java, C, C++. Familiarity with DevOps/Agile methodologies, Azure DevOps, Jira, Git. Cloud experience with AWS, Azure, or GCP (SageMaker, Vertex AI Studio, Azure AI/ML services). Knowledge of API development, integration & security. Strong communication and prior client-facing project leadership.

Additional Capability Requirements within AI Teams (Preferred): Software Engineering (C, C++, GitHub – mandatory). UI & Frontend Development (modern frameworks). Backend Engineering with DevSecOps. DBMS and Data Encryption expertise. System Architecture design & deployment. Experience with Docker & Kubernetes. Linux Administration. Networking – CCNA certified. AI/ML Specialists – Python, TensorFlow, R. Vision Systems – image/video data handling, compression & decompression.

Job Types: Full-time, Permanent. Pay: Up to ₹1,000,000.00 per year. Benefits: Health insurance, Provident Fund. Work Location: In person

Posted 21 hours ago

Apply

4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts.

Job Category: Sales

About Salesforce: Salesforce is the #1 AI CRM, where humans with agents drive customer success together. Here, ambition meets action. Tech meets trust. And innovation isn’t a buzzword — it’s a way of life. The world of work as we know it is changing and we're looking for Trailblazers who are passionate about bettering business and the world through AI, driving innovation, and keeping Salesforce's core values at the heart of it all. Ready to level up your career at the company leading workforce transformation in the agentic era? You’re in the right place! Agentforce is the future of AI, and you are the future of Salesforce.

Overview Of The Role: We have an outstanding opportunity for an expert AI and Data Cloud Solutions Engineer to work with our trailblazing customers in crafting ground-breaking customer engagement roadmaps, demonstrating the Salesforce applications and platform across the machine learning and LLM/GPT domains in India. The successful applicant will have a track record of driving business outcomes through technology solutions, with experience engaging at the C-level with business and technology groups.

Responsibilities: Primary pre-sales technical authority for all aspects of AI usage within the Salesforce product portfolio – existing Einstein ML-based capabilities and new (2023) generative AI. The majority of time (60%+) will be customer/external facing. Evangelisation of Salesforce AI capabilities. Assessing customer requirements and use cases and aligning to these capabilities. Solution proposals, working with Architects and the wider Solution Engineer (SE) teams. Building reference models/ideas/approaches for inclusion of GPT-based products within wider Salesforce solution architectures, especially involving Data Cloud. Alignment with customer security and privacy teams on trust capabilities and values of our solution(s). Presenting at multiple customer events, from single-account sessions through to major strategic events (World Tour, Dreamforce). Representing Salesforce at other events (subject to PM approval). Sales and SE organisation education and enablement, e.g. roadmap – all roles across all product areas. Bridge/primary contact point to product management. Provide thought leadership in how large enterprise organisations can drive customer success through digital transformation. Ability to uncover the challenges and issues a business is facing by running successful and targeted discovery sessions and workshops. Be an innovator who can build new solutions using out-of-the-box thinking. Demonstrate business value of our AI solutions to business using solution presentations, demonstrations and prototypes. Build roadmaps that clearly articulate how partners can implement and accept solutions to move from current to future state. Deliver functional and technical responses to RFPs/RFIs. Work as an excellent teammate by chipping in, learning and sharing new knowledge. Demonstrate a conceptual knowledge of how to integrate cloud applications to existing business applications and technology. Lead multiple customer engagements concurrently. Be self-motivated, flexible, and take initiative.

Required Qualifications: Experience will be evaluated based on the core proficiencies of the role. 4+ years working directly in the commercial technology space with AI products and solutions. Data knowledge – data science, data lakes and warehouses, ETL, ELT, data quality. AI knowledge – application of algorithms and models to solve business problems (ML, LLMs, GPT). 10+ years working in a sales, pre-sales, consulting or related function in a commercial software company. Strong focus and experience in pre-sales or implementation is required. Experience in demonstrating customer engagement solutions, understanding and driving use cases and customer journeys, and the ability to draw a ‘day in the life of’ across different LOBs. Business analysis / business case / return-on-investment construction. Demonstrable experience in presenting and communicating complex concepts to large audiences. A broad understanding of, and the ability to articulate, the benefits of CRM, Sales, Service and Marketing cloud offerings. Strong verbal and written communication skills with a focus on needs analysis, positioning, business justification, and closing techniques. Continuous-learning demeanor with a demonstrated history of self-enablement and advancement in both technology and behavioural areas.

Preferred Qualifications: Expertise in an AI-related subject (ML, deep learning, NLP, etc.). Familiar with technologies such as OpenAI, Google Vertex, Amazon SageMaker, Snowflake, Databricks, etc.

Unleash Your Potential: When you join Salesforce, you’ll be limitless in all areas of your life. Our benefits and resources support you to find balance and be your best, and our AI agents accelerate your impact so you can do your best. Together, we’ll bring the power of Agentforce to organizations of all sizes and deliver amazing experiences that customers love. Apply today to not only shape the future — but to redefine what’s possible — for yourself, for AI, and the world.

Accommodations: If you require assistance due to a disability applying for open positions please submit a request via this Accommodations Request Form.

Posting Statement: Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that’s inclusive, and free from discrimination. Know your rights: workplace discrimination is illegal. Any employee or potential employee will be assessed on the basis of merit, competence and qualifications – without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.

Posted 23 hours ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

hackajob is collaborating with J.P. Morgan to connect them with exceptional tech professionals for this role. We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible. As a Lead Software Engineer at JPMorganChase within the Employee Platforms team, you serve as a seasoned member of an Incubation and Research team to design and deliver trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives. This role requires a unique ability to apply state-of-the-art technical skills, work with ambiguity, and deliver projects to completion. You will be instrumental in developing innovative solutions, finding market fit, and delivering products that resonate with our users. As a hands-on engineer, you will bring in cutting-edge technologies, including Generative AI and ML.

Job Responsibilities: Execute software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems. Embrace ambiguity and lead the development of innovative solutions without a predefined roadmap. Implement the Build-Measure-Learn loop by rapidly prototyping, testing, and iterating on engineering solutions. Analyze user feedback and technical challenges to refine product offerings and ensure alignment with user needs. Create secure and high-quality production code and maintain algorithms that run synchronously with appropriate systems. Produce architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development. Gather, analyze, synthesize, and develop visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems. Proactively identify hidden problems and patterns in data and use these insights to drive improvements to coding hygiene and system architecture. Contribute to software engineering communities of practice and events that explore new and emerging technologies.

Required Qualifications, Capabilities, And Skills: Formal training or certification on software engineering concepts and 5+ years applied experience. Hands-on practical experience in Python, SQL, and advanced GenAI technologies such as multimodality (voice & images), agentic AI, and ML technologies. Highly proficient in coding in one or more languages such as Python, SQL, Java and R. Experience with one or more platform tech stacks such as AWS, Docker, Kubernetes, Databricks and CI/CD pipelines. Solid understanding of ML techniques, especially in Natural Language Processing (NLP), Knowledge Graphs and Large Language Models (LLMs). Experience in advanced applied ML areas such as GPU optimization, fine-tuning, embedding models, inferencing, prompt engineering, evaluation, and RAG (similarity search; see the sketch after this listing). Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages. Overall knowledge of the Software Development Life Cycle. Solid understanding of agile methodologies such as CI/CD, application resiliency, and security.

Preferred Qualifications, Capabilities, And Skills: Proficiency in optimizing and tuning AI models to ensure efficient, scalable solutions, with experience in building and deploying ML models on cloud platforms such as AWS and using tools like SageMaker and EKS. Knowledge of data engineering practices to support AI model training and deployment, along with a strong understanding of machine learning algorithms and techniques, including supervised, unsupervised, and reinforcement learning, and hands-on experience with libraries such as TensorFlow, PyTorch, Scikit-learn, and Keras. Skills in collaborating with cross-functional teams to integrate generative AI solutions into broader business processes and applications, leveraging advanced LLM techniques such as Agents, Planning, and Reasoning. In-depth understanding of embedding-based search/ranking, recommender systems, graph techniques, and other advanced methodologies to enhance AI solution capabilities.
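
Since several listings on this page name "RAG (similarity search)" as a requirement, here is a minimal, framework-free sketch of what that retrieval step looks like. The embed() function is a deterministic stand-in, not a real API; a production system would call an actual embedding model (a SageMaker endpoint, a sentence-transformer, etc.) and a vector store.

```python
import hashlib
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: a deterministic random vector derived from the
    # text hash. With a real model, semantically similar texts get similar
    # vectors; this stand-in only illustrates the data flow.
    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % (2**32)
    v = np.random.default_rng(seed).standard_normal(384)
    return v / np.linalg.norm(v)

documents = [
    "Reset your VPN password via the self-service portal.",
    "GPU cluster quota requests are reviewed weekly.",
    "Expense reports must be filed within 30 days.",
]
doc_vecs = np.stack([embed(d) for d in documents])

query_vec = embed("how do I change my VPN password?")
scores = doc_vecs @ query_vec  # cosine similarity, since vectors are unit-norm
print(documents[int(np.argmax(scores))])  # retrieved passage -> fed to an LLM as context
```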

Posted 1 day ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Data Scientist – Agentic AI & MLOps
Location: Bangalore – Hybrid (3 days work from office, 2 days from home)

About Us: Our client delivers next-generation security analytics and operations management. They secure organisations worldwide by staying ahead of cyber threats, leveraging AI-reinforced capabilities for unparalleled protection.

Job Overview: We’re seeking a Senior Data Scientist to architect agentic AI solutions and own the full ML lifecycle, from proof-of-concept to production. You’ll operationalise LLMs, build agentic workflows, implement MLOps best practices, and design multi-agent systems for cybersecurity tasks.

Key Responsibilities: Operationalise large language models and agentic workflows (LangChain, LangGraph, LlamaIndex, Crew.AI) to automate security decision-making and threat response. Design, deploy, and maintain multi-agent AI systems for log analysis, anomaly detection, and incident response. Build proof-of-concept GenAI solutions and evolve them into production-ready components on AWS (Bedrock, SageMaker, Lambda, EKS/ECS) using reusable best practices. Implement CI/CD pipelines for model training, validation, and deployment with GitHub Actions, Jenkins, and AWS CodePipeline. Manage model versioning with MLflow and DVC, and set up automated testing, rollback procedures, and retraining workflows (see the sketch after this listing). Automate cloud infrastructure provisioning with Terraform and develop REST APIs and microservices containerised with Docker and Kubernetes. Monitor models and infrastructure through CloudWatch, Prometheus, and Grafana; analyse performance and optimise for cost and SLA compliance. Collaborate with data scientists, application developers, and security analysts to integrate agentic AI into existing security workflows.

Qualifications: Bachelor’s or Master’s in Computer Science, Data Science, AI or a related quantitative discipline. 4+ years of software development experience, including 3+ years building and deploying LLM-based/agentic AI architectures. In-depth knowledge of generative AI fundamentals (LLMs, embeddings, vector databases, prompt engineering, RAG). Hands-on experience with LangChain, LangGraph, LlamaIndex, Crew.AI or equivalent agentic frameworks. Strong proficiency in Python and production-grade coding for data pipelines and AI workflows. Deep MLOps knowledge: CI/CD for ML, model monitoring, automated retraining, and production-quality best practices. Extensive AWS experience with Bedrock, SageMaker, Lambda, EKS/ECS, S3 (Athena, Glue, Snowflake preferred). Infrastructure-as-Code skills with Terraform. Experience building REST APIs, microservices, and containerization with Docker and Kubernetes. Solid data science fundamentals: feature engineering, model evaluation, data ingestion. Understanding of cybersecurity principles, SIEM data, and incident response. Excellent communication skills for both technical and non-technical audiences.

Preferred Qualifications: AWS certifications (Solutions Architect, Developer Associate). Experience with Model Context Protocol (MCP) and RAG integrations. Familiarity with workflow orchestration tools (Apache Airflow). Experience with time series analysis, anomaly detection, and machine learning.
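
As an illustration of the model-versioning workflow this posting describes ("Manage model versioning with MLflow... rollback procedures"), here is a minimal sketch using MLflow's model registry. The experiment name, model name, and dataset are hypothetical.

```python
# Minimal sketch: every run logs metrics and registers a new model version,
# which is what makes rollback and retraining workflows possible.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

mlflow.set_experiment("anomaly-detection")  # hypothetical experiment name
with mlflow.start_run():
    mlflow.log_metric("train_accuracy", accuracy_score(y, model.predict(X)))
    # registered_model_name creates (or increments) a version in the registry;
    # a deployment job would then promote a specific version to production.
    mlflow.sklearn.log_model(model, "model", registered_model_name="anomaly-detector")
```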

Posted 1 day ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description: We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within the Employee Platforms team, you will be a seasoned member of an agile team, tasked with designing and delivering trusted, market-leading technology products that are secure, stable, and scalable. Your role involves implementing critical technology solutions across multiple technical domains, supporting various business functions to achieve the firm's business objectives.

Job Responsibilities: Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems. Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems. Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development. Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems. Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture. Contributes to software engineering communities of practice and events that explore new and emerging technologies. Adds to team culture of diversity, opportunity, inclusion, and respect.

Required Qualifications, Capabilities, And Skills: Formal training or certification on software engineering concepts and 3+ years applied experience. Hands-on practical experience in Python, SQL, and advanced GenAI technologies such as multimodality (voice & images), agentic AI, and ML technologies. Highly proficient in coding in one or more languages such as Python, SQL, Java and R. Experience with one or more platform tech stacks such as AWS, Docker, Kubernetes, Databricks and CI/CD pipelines. Solid understanding of ML techniques, especially in Natural Language Processing (NLP), Knowledge Graphs and Large Language Models (LLMs). Experience in advanced applied ML areas such as GPU optimization, fine-tuning, embedding models, inferencing, prompt engineering, evaluation, and RAG (similarity search). Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages. Overall knowledge of the Software Development Life Cycle. Solid understanding of agile methodologies such as CI/CD, application resiliency, and security.

Preferred Qualifications, Capabilities, And Skills: Proficiency in optimizing and tuning AI models to ensure efficient, scalable solutions, with experience in building and deploying ML models on cloud platforms such as AWS and using tools like SageMaker and EKS. Knowledge of data engineering practices to support AI model training and deployment, along with a strong understanding of machine learning algorithms and techniques, including supervised, unsupervised, and reinforcement learning, and hands-on experience with libraries such as TensorFlow, PyTorch, Scikit-learn, and Keras. Skills in collaborating with cross-functional teams to integrate generative AI solutions into broader business processes and applications, leveraging advanced LLM techniques such as Agents, Planning, and Reasoning. In-depth understanding of embedding-based search/ranking, recommender systems, graph techniques, and other advanced methodologies to enhance AI solution capabilities.

Posted 1 day ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Dear Aspirant! We empower our people to stay resilient and relevant in a constantly changing world. We’re looking for people who are always searching for creative ways to grow and learn. People who want to make a real impact, now and in the future. Does that sound like you? Then it seems like you’d make a great addition to our vibrant international team. We are looking for an Associate Software Architect – AI Solutions.

You’ll make an impact by: Collaborate with AI/ML engineers, developers, and platform architects to define end-to-end solution architectures for AI-powered systems. Design and review architectures that support LLM-based, Deep Learning, and traditional ML workloads with an emphasis on modularity, performance, and maintainability. Drive architectural alignment across components such as model serving, orchestration layers, data pipelines, and cloud infrastructure. Incorporate Non-Functional Requirements (NFRs) including scalability, testability, observability, latency, and fault tolerance into system designs. Define architecture patterns for deploying models as microservices, integrating with APIs, and ensuring CI/CD readiness (a sketch of this pattern follows below). Evaluate and recommend suitable cloud services and managed tools from AWS (SageMaker, Bedrock) and Azure (ML Studio, OpenAI, AI Foundry). Ensure high standards of code quality, performance benchmarking, and architectural documentation. Collaborate in technical reviews and mentor engineering teams on architectural best practices and trade-offs.

Use your skills to move the world forward! Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field. 7+ years of experience in software/system architecture, including at least 4 years working with AI/ML-based solutions. Strong understanding of the AI system lifecycle, including model development, serving, monitoring, and retraining. Experience designing scalable systems with Python, FastAPI, containerization (Docker), and orchestration (Kubernetes or ECS). Proficiency in defining and measuring NFRs, conducting architectural performance reviews, and supporting DevOps and MLOps practices. Solid hands-on experience with cloud-native AI development using AWS or Azure tools and services. Experience working in AI/ML-driven environments, especially with LLMs, RAG architectures, and agent-based systems. Familiarity with model observability, logging frameworks, and telemetry strategies for AI applications. Prior exposure to architecture in the Power/Energy or Electrification domain is a strong plus. Good understanding of hybrid system architectures that integrate real-time and batch pipelines. Strong collaboration and documentation skills with the ability to align cross-functional technical teams.

Create a better #TomorrowWithUs! This role is based in Bangalore, where you’ll get the chance to work with teams impacting entire cities, countries – and the shape of things to come. We’re Siemens. A collection of over 312,000 minds building the future, one day at a time in over 200 countries. We're dedicated to equality, and we encourage applications that reflect the diversity of the communities we work in. All employment decisions at Siemens are based on qualifications, merit and business need. Bring your curiosity and imagination and help us shape tomorrow. Find out more about Siemens careers at: www.siemens.com/careers Find out more about the Digital world of Siemens here: www.siemens.com/careers/digitalminds
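
The "deploying models as microservices" pattern named above, sketched with FastAPI. The route, payload shape, and scoring logic are illustrative assumptions, not part of the posting; a real service would load a trained model at startup and call its predict method.

```python
# Minimal model-serving microservice sketch (assumed names throughout).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictRequest(BaseModel):
    features: list[float]

MODEL = None  # would be loaded at startup, e.g. joblib.load("model.pkl")

@app.post("/predict")
def predict(req: PredictRequest):
    # Stand-in scoring logic; a real service would return MODEL.predict(...).
    score = sum(req.features) / max(len(req.features), 1)
    return {"score": score}

# Run with: uvicorn main:app --port 8080
# Then containerize with Docker and deploy on Kubernetes/ECS for CI/CD readiness.
```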

Posted 1 day ago

Apply

125.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

FC Global Services India LLP (First Citizens India), a part of First Citizens BancShares, Inc., a top 20 U.S. financial institution, is a global capability center (GCC) based in Bengaluru. Our India-based teams benefit from the company’s over 125-year legacy of strength and stability. First Citizens India is responsible for delivering value and managing risks for our lines of business. We are particularly proud of our strong, relationship-driven culture and our long-term approach, which are deeply ingrained in our talented workforce. This is evident across all key areas of our operations, including Technology, Enterprise Operations, Finance, Cybersecurity, Risk Management, and Credit Administration. We are seeking talented individuals to join us in our mission of providing solutions fit for our clients’ greatest ambitions.

Job Description

Value Proposition: Responsible for developing and maintaining large-scale data platforms: building data pipelines to share enterprise data, and designing and building cloud solutions with appropriate data access, data security, data privacy and data governance. Demonstrate deep technical knowledge and expertise in software development, data engineering frameworks and best practices. Use agile engineering practices and various data development technologies to rapidly develop creative and efficient data products.

Job Details: Position Title: Principal Data Engineer. Career Level: P4. Job Category: Assistant Vice President. Role Type: Hybrid. Job Location: Bangalore

About The Team: The data engineering team is a community of dedicated professionals committed to designing, building, and maintaining data platform solutions for the organization.

Impact (Job Summary / Why This Role Matters): The enterprise data warehouse supports several critical business functions for the bank, including Regulatory Reporting, Finance, Risk steering, and Customer 360. This role is vital for building and maintaining the enterprise data platform and data processes, and for supporting business objectives. Our values – inclusivity, transparency, and excellence – drive everything we do. Join us and make a meaningful impact on the organization.

Key Deliverables (Duties and Responsibilities): As a Principal Data Engineer, you will be responsible for building and maintaining large-scale data platforms: enriching data pipelines to share enterprise data, and designing, building, and maintaining a data platform such as an Enterprise Data Warehouse, Operational Data Store or Data Marts with appropriate data access, data security, data privacy and data governance. Demonstrate technical knowledge and leadership in software development, data engineering frameworks and best practices. Collaborate with the Data Architects, Solution Architects & Data Modelers to enhance the data platform design, constantly identify a backlog of tech debt in line with identified upgrades, and provide and implement technical solutions. Participate on the Change Advisory Board (CAB) and ensure effective change control is implemented for all infrastructure and/or application installations, rollbacks, and updates. Collaborate with IT and CSO teams to ensure compliance with data governance, privacy and security policies and regulations. Manage deliverables of developers, perform design reviews and coordinate release management activities. Drive automation, identify inefficiencies, optimize processes and data flows, and recommend improvements. Use agile engineering practices and various data development technologies to rapidly develop and implement efficient data products. Work with global technology teams across different time zones (primarily US) to deliver timely business value.

Skills and Qualifications (Functional and Technical Skills)

Functional Skills: Business/Domain Knowledge: Good understanding of application systems and business domains. Partnership and Collaboration: Develop and maintain partnerships with business and IT stakeholders. Communication: Excellent verbal, written, and interpersonal communication skills. Problem Solving: Excellent problem-solving skills, incident management, root cause analysis, and proactive solutions to improve quality. Team Player: Support peers, team, and department management. Attention to Detail: Ensure accuracy and thoroughness in all tasks.

Technical/Business Skills: Data Engineering: Experience in designing and building data warehouses and data lakes. Extensive knowledge of data warehouse principles, design, and concepts. Technical expertise working in large-scale data warehousing applications and databases such as Oracle, Netezza, Teradata, and SQL Server. Deep technical knowledge in data engineering frameworks and best practices. Experience with public cloud-based data platforms, especially Snowflake and AWS, and machine learning capabilities such as SageMaker and DataRobot. Data integration skills: Expertise in creating and maintaining ETL processes and architecting complex data pipelines – knowledge of data modeling techniques and high-volume ETL/ELT design. Solutions using any industry-leading ETL tools such as SAP Business Objects Data Services (BODS), Informatica Cloud Data Integration Services (IICS), IBM DataStage. Knowledge of ELT tools such as DBT, Fivetran, and AWS Glue. Data Model: Expert knowledge of logical and physical data models using relational or dimensional modeling practices, and high-volume ETL/ELT design. Expert in SQL, with development experience in at least one scripting language (Python etc.), adept at tracing and resolving data integrity issues. Knowledge of data visualization using Power BI or Tableau. Performance tuning of data pipelines and DB objects to deliver optimal performance. Excellent data analysis skills using SQL and experience in incident management techniques. Data protection/compliance standards like GDPR, CCPA, HIPAA. Experience working in the financial industry is a plus.

Leadership Qualities (For People Leaders): Communication: Clearly conveys ideas and listens actively. Inspiration: Motivates and encourages the team to achieve their best. Influence: Extensive stakeholder management experience and the ability to influence people. Driving strategic and technical initiatives.

Relationships & Collaboration: Reports to: Associate Director – Data Engineering. Partners: Senior leaders and cross-functional teams. Leads: A team of Data Engineering associates.

Accessibility Needs: We are committed to providing an inclusive and accessible hiring process. If you require accommodations at any stage (e.g. application, interviews, onboarding) please let us know, and we will work with you to ensure a seamless experience.

Equal Employment Opportunity: FC Global Services India LLP (First Citizens India) is an Equal Employment Opportunity Employer.
We are committed to fostering an inclusive and accessible environment and prohibit all forms of discrimination on the basis of gender, religion, caste, disability, sexual orientation, economic status or any other characteristics protected by the law. We strive to foster a safe and respectful environment in which all individuals are treated with respect and dignity. Our EEO policy ensures fairness throughout the employee life cycle.

Posted 1 day ago

Apply

2.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. At PwC, we connect people with diverse backgrounds and skill sets to solve important problems together and lead with purpose—for our clients, our communities and for the world at large. It is no surprise therefore that 429 of 500 Fortune global companies engage with PwC. Acceleration Centers (ACs) are PwC’s diverse, global talent hubs focused on enabling growth for the organization and value creation for our clients. The PwC Advisory Acceleration Center in Bangalore is part of our Advisory business in the US. The team is focused on developing a broader portfolio with solutions for Risk Consulting, Management Consulting, Technology Consulting, Strategy Consulting, Forensics as well as vertical specific solutions. PwC's high-performance culture is based on passion for excellence with focus on diversity and inclusion. You will collaborate with and receive support from a network of people to achieve your goals. We will also provide you with global leadership development frameworks and the latest in digital technologies to learn and excel in your career. At the core of our firm's philosophy is a simple construct: We care for our people. Globally PwC is ranked as the 3rd most attractive employer according to Universum. Our commitment to Responsible Business Leadership, Diversity & Inclusion, work-life flexibility, career coaching and learning & development makes our firm one of the best places to work, learn and excel. Apply to us if you believe PwC is the place to be. Now and in the future! Job Overview At PwC - AC, as an AWS Developer, the candidate will interact with Offshore Manager/ Onsite Business Analyst to understand the requirements and the candidate is responsible for end-to-end implementation of Cloud data engineering solutions like Enterprise Data Lake, Data hub in AWS. 
Years of Experience: Candidates with 2–4 years of hands-on experience.

Position Requirements

Must Have: Experience in architecting and delivering highly scalable, distributed, cloud-based enterprise data solutions. Strong expertise in the end-to-end implementation of cloud data engineering solutions like Enterprise Data Lake and Data Hub in AWS. Hands-on experience with Snowflake utilities, SnowSQL, SnowPipe, ETL data pipelines, and big data modeling techniques using Python/Java. Experience in loading disparate data sets and translating complex functional and technical requirements into detailed design. Should be aware of deploying Snowflake features such as data sharing, events and lake-house patterns. Deep understanding of relational as well as NoSQL data stores, methods and approaches (star and snowflake, dimensional modeling). Strong AWS hands-on expertise with a programming background, preferably Python/Scala. Good knowledge of Big Data frameworks and related technologies – experience in Hadoop and Spark is mandatory. Strong experience in AWS compute services like AWS EMR, Glue and SageMaker, and storage services like S3, Redshift & DynamoDB. Good experience with any one of the AWS streaming services like AWS Kinesis, AWS SQS and AWS MSK. Troubleshooting and performance tuning experience in the Spark framework – Spark Core, SQL and Spark Streaming (a short Spark sketch follows below). Experience in one of the flow tools like Airflow, Nifi or Luigi. Good knowledge of application DevOps tools (Git, CI/CD frameworks) – experience in Jenkins or GitLab, with rich experience in source code management like CodePipeline, CodeBuild and CodeCommit. Experience with AWS CloudWatch, AWS CloudTrail, AWS Account Config, AWS Config Rules. Strong understanding of cloud data migration processes, methods and project lifecycle. Good analytical & problem-solving skills. Good communication and presentation skills.

Desired Knowledge / Skills: Experience in building stream-processing systems, using solutions such as Storm or Spark Streaming. Experience in Big Data ML toolkits, such as Mahout, SparkML, or H2O. Knowledge of Python. Worked in Offshore/Onsite engagements. Experience in AWS services like Step Functions & Lambda.

Professional And Educational Background: BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA

Additional Information: Travel Requirements: Travel to client locations may be required as per project requirements. Line of Service: Advisory. Horizontal: Technology Consulting. Designation: Associate. Location: Bangalore, India
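
For context on the Spark expertise this posting asks for, a short PySpark sketch of a typical data-lake transformation. The S3 paths and column names are invented for illustration; a real job would read from the client's raw zone and write to a curated zone.

```python
# Illustrative Spark Core/SQL usage: filter, derive, aggregate, write Parquet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("enterprise-data-lake-etl").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/events/")  # hypothetical path
daily = (
    raw.filter(F.col("event_type") == "transaction")
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "account_id")
       .agg(F.sum("amount").alias("daily_amount"))
)
# Write partitioned Parquet back to the curated zone of the lake.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_amounts/"
)
```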

Posted 1 day ago

Apply

0 years

0 Lacs

Salem

On-site

Job Title: AI/ML & Cloud Intern
Location: Ariyanoor, Salem

About the Role: We are seeking a motivated and technically skilled intern with foundational knowledge in Artificial Intelligence (AI), Machine Learning (ML), AWS (Amazon Web Services), and software development. As an intern, you will work closely with our engineering and data science teams to support ongoing projects, develop prototypes, and assist in deploying scalable solutions.

Key Responsibilities: Assist in designing, developing, and testing ML models (see the sketch after this listing). Write clean, efficient, and well-documented code in Python or other relevant languages. Work with AWS services like S3, EC2, Lambda, SageMaker, etc. Participate in data collection, preprocessing, and analysis. Support model deployment, monitoring, and performance evaluation. Collaborate with cross-functional teams and attend project meetings. Document workflows, findings, and reports as required.

Required Qualifications: Pursuing or recently completed a degree in Computer Science, Data Science, Engineering, or a related field. Understanding of ML algorithms, data preprocessing, and model evaluation techniques. Basic hands-on experience with programming languages like Python (preferred), Java, or others. Basic hands-on experience with AWS services. Familiarity with libraries such as NumPy, pandas, scikit-learn, TensorFlow, or PyTorch. Good problem-solving skills and ability to work in a team environment.

Preferred Qualifications: Completed relevant projects in AI/ML (academic or personal). Exposure to MLOps tools or CI/CD pipelines. Understanding of REST APIs and cloud deployment practices. Experience with version control (e.g., Git).

Benefits: Mentorship from experienced professionals in AI/ML and Cloud. Exposure to real-world projects and production environments. Certificate of Internship & Letter of Recommendation (based on performance). Flexible work hours and potential for a full-time opportunity.

Job Type: Full-time. Pay: From ₹6,000.00 per month. Work Location: In person
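
For candidates wondering what "assist in designing, developing, and testing ML models" looks like in practice, here is a minimal scikit-learn train/evaluate loop; the dataset and model choice are illustrative only.

```python
# Train on one split, evaluate on held-out data: the basic testing discipline.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                            # training
print(accuracy_score(y_test, model.predict(X_test)))   # evaluation
```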

Posted 1 day ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Overview: We are seeking an experienced Cloud Delivery Engineer to work horizontally across our organization, collaborating with Cloud Engineering, Cloud Operations, and cross-platform teams. This role is crucial in ensuring that cloud resources are delivered according to established standards, with a focus on both Azure and AWS platforms. The Cloud Delivery Engineer will be responsible for the delivery of Data and AI platforms.

Responsibilities: We are seeking a talented AWS artificial intelligence specialist with the following skills. Provision cloud resources, ensuring they adhere to approved architecture and organizational standards on both Azure and AWS. Collaborate closely with Cloud Engineering, Cloud Operations, and cross-platform teams to ensure seamless delivery of cloud resources on both Azure and AWS. Architect, design, develop and implement AI models and algorithms to address business challenges and improve processes. Experience in implementing security principles and guardrails for AI infrastructure. Identify and mitigate risks associated with cloud deployments and resource management in multi-cloud environments. Collaborate with cross-functional teams of data scientists, software developers, and business stakeholders to understand requirements and translate them into AI solutions. Create and maintain documentation for AI models and algorithms as knowledge base articles. Participate in capacity planning and cost optimization initiatives for multi-cloud resources. Experience working with Vector DB (DataStax HCD). Conduct experiments to test and compare the effectiveness of different AI approaches. Troubleshoot and resolve issues related to AI systems. Deploy AI solutions into production environments and ensure their integration with existing systems. Monitor and evaluate the performance of AI systems, adjusting as necessary to improve outcomes. Research and stay updated on the latest AI and machine learning technology advancements. Present findings and recommendations to stakeholders, including technical and non-technical audiences. Provide technical expertise and guidance on AI-related projects and initiatives. Experience in creating deployments for Intelligent Search, Intelligent Document Processing, Media Intelligence, Forecasting, AI for DevOps, Identity Verification, and Content Moderation. Experience in Amazon Bedrock, SageMaker, and all foundational AWS resources under Compute, Networking, Security, App Runner, and Lambda (a short Bedrock sketch follows below).

Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field; Master's degree preferred. 8+ years of experience in IT, with at least 4 years focused on cloud technologies, including substantial experience with both AWS & Azure. Strong understanding of AWS and Azure services, architectures, and best practices, particularly in Data and AI platforms. Certifications in both AWS (e.g., AWS Certified Solutions Architect – Professional) and Azure (e.g., Azure Solutions Architect Expert). Experience working with multiple teams and cloud platforms. Demonstrated ability to work horizontally across different teams and platforms. Strong knowledge of cloud security principles and compliance requirements in multi-cloud environments. Working experience of DevOps practices and tools applicable to both Azure and AWS. Experience with infrastructure as code (e.g., ARM templates, CloudFormation, Terraform). Proficiency in scripting languages (e.g., PowerShell, Bash, Python). Solid understanding of networking concepts and their implementation in Azure and AWS.

Preferred: Cloud Architecture/Specialist experience. Experience with hybrid cloud architectures. Familiarity with containerization technologies (e.g., Docker, Kubernetes) on both Azure and AWS.
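
A minimal sketch of invoking a foundation model on Amazon Bedrock with boto3, matching the Bedrock experience the posting names. The model ID and prompt are assumptions (use a model enabled in your account and region), and the Converse API requires a reasonably recent boto3.

```python
# Illustrative Bedrock invocation via the Converse API.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # hypothetical choice
    messages=[{
        "role": "user",
        "content": [{"text": "Classify this support ticket: 'VPN keeps dropping'"}],
    }],
    inferenceConfig={"maxTokens": 200},
)
print(response["output"]["message"]["content"][0]["text"])
```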

Posted 1 day ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Position: AI/ML Python Engineer
Location: Kothapet, Hyderabad – Hybrid (4 days a week onsite)
Contract-to-hire, converting to full-time with the client. We’re mainly looking for someone who can be onsite 4 days a week in Hyderabad.

Role Description: 5+ years of Python experience scripting ML workflows to deploy ML pipelines as real-time, batch, event-triggered, and edge deployments. 4+ years of experience using AWS SageMaker to deploy ML pipelines and ML models using SageMaker Pipelines, SageMaker MLflow, SageMaker Feature Store, etc. (a minimal sketch follows below). 3+ years of development of APIs using FastAPI, Flask, Django. 3+ years of experience in ML frameworks & tools like scikit-learn, PyTorch, XGBoost, LightGBM, MLflow. Solid understanding of the ML lifecycle: model development, training, validation, deployment and monitoring. Solid understanding of CI/CD pipelines, specifically for ML workflows, using Bitbucket, Jenkins, Nexus, and AUTOSYS for scheduling. Experience with ETL processes for ML pipelines using PySpark, Kafka, AWS EMR Serverless. Good to have: experience in H2O.ai. Good to have: experience in containerization using Docker and orchestration using Kubernetes.
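
A minimal sketch of the SageMaker scripting this role centres on: training with the SageMaker Python SDK and deploying a real-time endpoint. The role ARN, script name, and S3 path are placeholders, and a production setup would typically express this as SageMaker Pipelines steps (with batch or event-triggered variants) rather than a single script.

```python
# Illustrative train-then-deploy flow with the SageMaker Python SDK.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerRole"  # hypothetical role ARN

estimator = SKLearn(
    entry_point="train.py",          # your training script (assumed to exist)
    framework_version="1.2-1",
    instance_type="ml.m5.large",
    role=role,
    sagemaker_session=session,
)
estimator.fit({"train": "s3://example-bucket/train/"})  # hypothetical S3 input

# Real-time deployment; batch scoring would use a Transformer instead,
# and event-triggered inference would invoke the endpoint from Lambda.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```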

Posted 2 days ago

Apply

8.0 - 10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

KONE Technology and Innovation Unit (KTI) is where the magic happens at KONE. It's where we combine the physical world – escalators and elevators – with smart and connected digital systems. We are changing and improving the way billions of people move within buildings every day. We are on a mission to expand and develop new digital solutions that are based on emerging technologies. KONE's vision is to create the Best People Flow experience by providing ease, effectiveness and experiences to our customers and users. In line with our strategy, Sustainable Success with Customers, we will focus on increasing the value we create for customers with new intelligent solutions and embed sustainability even deeper across all of our operations. By closer collaboration with customers and partners, KONE will increase the speed of bringing new services and solutions to the market. The R&D unit in KTI is responsible for developing digital services at KONE; it is the development engine for our Digital Services.

We are looking for a Cloud Automation Architect with strong expertise in automation on AWS cloud, UI, API, data, and ML Ops. The ideal candidate will bring hands-on technical leadership, architect scalable automation solutions, and drive end-to-end solution design for enterprise-grade use cases. You will collaborate with cross-functional teams including developers, DevOps engineers, product owners, and business stakeholders to deliver automation-first solutions.

Role description:

Solution Architecture & Design: Architect and design automation solutions leveraging cloud services and data management. Define end-to-end architecture spanning cloud infrastructure, APIs, UI, and visualization layers. Translate business needs into scalable, secure, and cost-effective technical solutions.

Automation on Cloud: Lead automation initiatives across infrastructure, application workflows, and data pipelines. Implement operations use cases using data visualization and cloud automation. Optimize automation for cost, performance, and security.

UI & API Integration: Design and oversee development of APIs and microservices to support automation. Guide teams on UI frameworks (React/Angular) for building dashboards and portals. Ensure seamless integration between APIs, front-end applications, OCR and cloud services.

Data & ML Ops: Define architecture for data ingestion, transformation, and visualization on AWS. Work with tools like Amazon QuickSight and Power BI to enable business insights. Establish ML Ops best practices for data-driven decision-making. Architect and implement end-to-end MLOps pipelines for training, deployment, and monitoring of ML models (a Lambda-based inference sketch follows below). Use AWS services like SageMaker, Step Functions, Lambda, Kinesis, Glue, S3, and Redshift for ML workflows. Establish best practices for model versioning, reproducibility, CI/CD for ML, and monitoring model drift.

Team Leading and Collaboration: Mentor engineering teams on cloud-native automation practices. Collaborate with product owners to prioritize and align technical solutions with business outcomes. Drive POCs and innovation initiatives for automation at scale.

Requirements: 8–10 years of experience in cloud architecture, automation, and solution design. Deep expertise in Python for automation use cases and an understanding of ML Ops. Experience with data engineering & visualization tools. Knowledge of UI frameworks (React, Angular, Vue) for portals and dashboards. Expertise in AWS cloud services for compute, data, and ML workloads. Strong understanding of security, IAM, compliance, and networking in AWS. Hands-on experience with MLOps pipelines (model training, deployment, monitoring).
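
One small piece of the AWS MLOps pipeline described above, sketched as an event-triggered Lambda that calls a deployed SageMaker endpoint. The endpoint name and payload shape are assumptions for illustration.

```python
# Illustrative Lambda handler: receive an event, score it on a SageMaker
# endpoint, return the prediction.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

def handler(event, context):
    payload = json.dumps({"features": event["features"]})
    resp = runtime.invoke_endpoint(
        EndpointName="demand-forecast-prod",   # hypothetical endpoint name
        ContentType="application/json",
        Body=payload,
    )
    prediction = json.loads(resp["Body"].read())
    return {"statusCode": 200, "body": json.dumps(prediction)}
```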

Posted 2 days ago

Apply

12.0 - 16.0 years

0 Lacs

Karnataka

On-site

Role Overview: As a Data Engineer, Senior Consultant for the Asia Pacific region based out of Bangalore, you will bring deep expertise in data architecture, big data/data warehousing, and the ability to build large-scale data processing systems using the latest database and data processing technologies. Your role will be crucial in enabling VISA's clients in the region with the foundational data capabilities needed to scale their data ecosystems for next-generation portfolio optimization and hyper-personalized marketing, especially within the BFSI space. You will work closely with VISA's market teams in AP, acting as a bridge between end-users and technology colleagues in Bangalore and the US to influence the development of global capabilities while providing local tools and technologies as required. Your consulting, communication, and presentation skills will be essential for collaborating with clients and internal cross-functional team members at various levels. You will also work on strategic client projects using VISA data, requiring proficiency in hands-on detailed design and coding using big data technologies.

Key Responsibilities:
- Act as a trusted advisor to VISA's clients, offering strategic guidance on designing and implementing scalable data architectures for advanced analytics and marketing use cases.
- Collaborate with senior management, business units, and IT teams to gather requirements, align data strategies, and ensure successful adoption of solutions.
- Integrate diverse data sources in batch and real time to create a consolidated view, such as a single customer view.
- Design, develop, and deploy robust data platforms and pipelines leveraging technologies like Hadoop, Spark, modern ETL frameworks, and APIs.
- Ensure data solutions adhere to client-specific governance and regulatory requirements related to data privacy, security, and quality.
- Design target platform components, data flow architecture, and capacity requirements for scalable data architecture implementation.
- Develop and deliver training materials, documentation, and workshops to upskill client teams and promote data best practices.
- Review scripts for best practices, educate the user base, and build training assets for beginner and intermediate users.

Qualifications:
- Bachelor's degree or higher in Computer Science, Engineering, or a related field.
- 12+ years of progressive experience in data advisory, data architecture & governance, and data engineering roles.
- Good understanding of the Banking and Financial Services domains, with familiarity with enterprise analytics data assets.
- Experience consulting with clients on data architecture and engineering solutions, translating business needs into technical requirements.
- Expertise in distributed data architecture, modern BI tools, and frameworks/packages used for Generative AI and machine learning model development.
- Strong resource planning, project management, and delivery skills with a track record of successfully leading or contributing to large-scale data initiatives.

(Note: Additional information section omitted as no details were provided in the job description.)

Posted 2 days ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Role Overview: We are looking for a dynamic and experienced AI/ML Engineer to join our team as a Full Stack Developer with strong proficiency in React, Java/Python, and AWS Cloud technologies. This role will focus on building scalable applications and integrating AI capabilities for classification and summarization workflows that support product development and architectural initiatives.

Key Responsibilities:
- Develop and maintain full-stack applications using React (frontend) and Python (backend).
- Design and implement AI/ML models for text classification, document summarization, and intelligent automation (a short sketch follows below).
- Leverage AWS Cloud services (e.g., SageMaker, Lambda, S3, EC2, DynamoDB) for scalable model training, deployment, and application hosting.
- Collaborate with cross-functional teams to embed AI solutions into product workflows.
- Build reusable components and scalable architecture patterns aligned with enterprise standards.
- Apply modern ML frameworks (e.g., TensorFlow, PyTorch, Hugging Face) to solve business problems.
- Ensure high performance, security, and reliability of deployed solutions.
- Participate in Agile ceremonies and contribute to sprint planning, reviews, and retrospectives.
- Document technical designs, workflows, and model performance metrics.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 3–5 years of experience in full-stack development with AI/ML experience.
- Proficiency in React, Java, and Python.
- Hands-on experience with NLP techniques for classification and summarization.
- Strong working knowledge of AWS Cloud services and architecture.
- Familiarity with CI/CD pipelines and DevOps practices.
- Excellent problem-solving and communication skills.

Preferred Skills:
- Experience with LLMs, RAG pipelines, or transformer-based models.
- Exposure to MLOps practices and model deployment strategies.
- Knowledge of enterprise-grade security and compliance standards.
- Prior experience working in product development or architecture teams.
- Deep learning: an understanding of neural network architectures, such as Convolutional Neural Networks (CNNs) for images and Recurrent Neural Networks (RNNs) for sequential data.
- Natural Language Processing (NLP): foundational knowledge of text processing techniques for building chatbots, sentiment analysis tools, and other language-based applications.
- Computer vision: skills in image processing and object detection, valuable for roles in areas like robotics or autonomous systems.
- Generative AI and prompt engineering: an emerging area of demand, requiring skills in interacting with and fine-tuning large language models (LLMs).
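
A short sketch of the classification and summarization workflow the role describes, using Hugging Face pipelines. The default models downloaded here are illustrative, not a stack recommendation; a production system would pin specific models and likely serve them behind an API.

```python
# Zero-shot classification plus summarization in a few lines.
from transformers import pipeline

classifier = pipeline("zero-shot-classification")  # downloads a default model
summarizer = pipeline("summarization")

doc = ("The quarterly report shows revenue grew 12% year over year, "
       "driven by strong cloud adoption, while hardware sales declined.")

labels = ["finance", "engineering", "legal"]
print(classifier(doc, candidate_labels=labels)["labels"][0])          # top label
print(summarizer(doc, max_length=30, min_length=5)[0]["summary_text"])  # summary
```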

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a candidate for the role in the Unified Intelligence Platform (UIP) team, you will be part of a mission to enable Salesforce teams to deeply understand and optimize their services and operations through data. The UIP is a modern, cloud-based data platform built with cutting-edge technologies like Spark, Trino, Airflow, DBT, Jupyter Notebooks, and more. Here are the details of the job responsibilities and qualifications we are looking for:

Role Overview: You will be responsible for leading the architecture, design, development, and support of mission-critical data and platform services. Your role will involve driving self-service data pipelines, collaborating with product management teams, and architecting robust data solutions that enhance ingestion, processing, and quality. Additionally, you will be involved in promoting a service ownership model, developing data frameworks, implementing data quality services, building Salesforce-integrated applications, establishing CI/CD processes, and maintaining key components of the UIP technology stack.

Key Responsibilities:
- Lead the architecture, design, development, and support of mission-critical data and platform services
- Drive self-service, metadata-driven data pipelines, services, and applications (see the Airflow sketch after this listing)
- Collaborate with product management and client teams to deliver scalable solutions
- Architect robust data solutions with security and governance
- Promote a service ownership model with telemetry and control mechanisms
- Develop data frameworks and implement data quality services
- Build Salesforce-integrated applications for data lifecycle management
- Establish and refine CI/CD processes for seamless deployment
- Oversee and maintain components of the UIP technology stack
- Collaborate with third-party vendors for issue resolution
- Architect data pipelines optimized for multi-cloud environments

Qualifications Required:
- Passionate about tackling big data challenges in distributed systems
- Highly collaborative and adaptable, with a strong foundation in software engineering
- Committed to engineering excellence and fostering transparency
- Embraces a growth mindset and actively engages in support channels
- Champions a Service Ownership model and minimizes operational overhead through automation
- Experience with advanced data lake engines like Spark and Trino is a plus

This is an opportunity to be part of a fast-paced, agile, and highly collaborative team that is defining the next generation of trusted enterprise computing. If you are passionate about working with cutting-edge technologies and solving complex data challenges, this role might be the perfect fit for you.
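
Since the role centres on metadata-driven pipelines run on Airflow, here is a minimal Airflow 2.x DAG sketch; the DAG id, source names, and ingest logic are placeholders, and a real UIP pipeline would be driven by a metadata catalog rather than a hard-coded list.

```python
# Illustrative metadata-driven ingestion DAG: one task per configured source.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest(source: str, **_):
    print(f"ingesting {source}")  # a real task would land data in the lake

with DAG(
    dag_id="uip_ingestion",            # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    for source in ["billing", "telemetry", "crm"]:  # would come from metadata
        PythonOperator(
            task_id=f"ingest_{source}",
            python_callable=ingest,
            op_kwargs={"source": source},
        )
```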

Posted 2 days ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Category: Software Engineering

About Salesforce
Salesforce is the #1 AI CRM, where humans with agents drive customer success together. Here, ambition meets action. Tech meets trust. And innovation isn't a buzzword - it's a way of life. The world of work as we know it is changing and we're looking for Trailblazers who are passionate about bettering business and the world through AI, driving innovation, and keeping Salesforce's core values at the heart of it all. Ready to level-up your career at the company leading workforce transformation in the agentic era? You're in the right place! Agentforce is the future of AI, and you are the future of Salesforce.

About the Role
The mission of UIP is to enable Salesforce teams to deeply understand and optimize their services and operations through data. UIP is a modern, trusted, turn-key data platform built with cutting-edge technologies and an exceptional user experience. Massive amounts of data are generated each day at Salesforce. It is critical to process and store large volumes of data efficiently and enable users to discover and analyze the data easily. UIP is a modern, cloud-based data platform built on advanced data lake engines like Spark and Trino, incorporating a diverse suite of tools and technologies (including Airflow, DBT, Jupyter Notebooks, SageMaker, Iceberg, and Open Metadata) for efficient data processing, storage, querying, and management. With curated datasets, we empower machine learning and AI use cases, enabling both model development and inference. Our team is fast-paced, agile, and highly collaborative, working across all areas of our tech stack to provide critical business services, support complex computing requirements, drive big data analytics, and pioneer cutting-edge engineering solutions in the cloud, defining the next generation of trusted enterprise computing.

Who are we looking for?
• Passionate about tackling big data challenges in distributed systems.
• Highly collaborative, working across teams to ensure customer success.
• Drives end-to-end projects that deliver high-performance, scalable, and maintainable solutions.
• Adaptable and versatile, taking on multiple roles as needed, whether as a platform engineer, data engineer, backend engineer, DevOps engineer, or support engineer, in service of the platform and customer success.
• Strong foundation in software engineering, with the flexibility to work in any programming language.
• Committed to engineering excellence, consistently delivering high-quality products.
• Open and respectful communicator, fostering transparency and team alignment.
• Embraces a growth mindset, continuously learning and seeking self-improvement.
• Engages actively in support channels, providing insights and collaborating to support the community.
• Champions a Service Ownership model, minimizing operational overhead through automation, monitoring, and alerting best practices.

Job Responsibilities
• Lead the architecture, design, development, and support of mission-critical data and platform services, ensuring full ownership and accountability.
• Drive multiple self-service, metadata-driven data pipelines, services, and applications to streamline ingestion from diverse data sources into a multi-cloud, petabyte-scale data platform.
• Collaborate closely with product management and client teams to capture requirements and deliver scalable, adaptable solutions that drive success.
• Architect robust data solutions that enhance ingestion, processing, quality, and discovery, embedding security and governance from the start.
• Promote a service ownership model, designing solutions with extensive telemetry and control mechanisms to streamline governance and operational management.
• Develop data frameworks to simplify recurring data tasks, ensure best practices, foster consistency, and facilitate tool migration.
• Implement advanced data quality services seamlessly within the platform, empowering data analysts, engineers, and stewards to continuously monitor and uphold data standards.
• Build Salesforce-integrated applications to monitor and manage the full data lifecycle from a unified interface.
• Establish and refine CI/CD processes for seamless deployment of platform services across cloud environments.
• Oversee and maintain key components of the UIP technology stack, including Airflow, Spark, Trino, Iceberg, and Kubernetes.
• Collaborate with third-party vendors to troubleshoot and resolve platform-related software issues.
• Architect and orchestrate data pipelines and platform services optimized for multi-cloud environments (e.g., AWS, GCP).

Unleash Your Potential
When you join Salesforce, you'll be limitless in all areas of your life. Our benefits and resources support you to find balance, and our AI agents accelerate your impact. Together, we'll bring the power of Agentforce to organizations of all sizes and deliver amazing experiences that customers love. Apply today to not only shape the future, but to redefine what's possible, for yourself, for AI, and the world.

Accommodations
If you require assistance due to a disability applying for open positions, please submit a request via the accommodations request form.

Posting Statement
Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that's inclusive, and free from discrimination. Any employee or potential employee will be assessed on the basis of merit, competence and qualifications, without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.
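
For a concrete feel for the lakehouse stack this role describes, here is a minimal, hypothetical PySpark sketch of a curated ingestion step that writes to an Iceberg table. The catalog configuration, bucket paths, and table name are all illustrative assumptions, and the Iceberg Spark runtime jar must be on the classpath:

```python
from pyspark.sql import SparkSession

# Illustrative only: catalog name, S3 paths, and table are placeholders.
spark = (
    SparkSession.builder.appName("curated-ingest")
    # Register an Iceberg catalog backed by a warehouse location.
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "s3://example-bucket/warehouse")
    .getOrCreate()
)

raw = spark.read.json("s3://example-bucket/raw/events/")

# Light curation: drop malformed rows and pin a stable column set.
curated = raw.dropna(subset=["event_id", "event_ts"]).select(
    "event_id", "event_ts", "payload"
)

# Append into an existing Iceberg table (create it first, e.g. via createOrReplace()).
curated.writeTo("lake.analytics.events").append()
```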

Posted 2 days ago

Apply

4.0 years

0 Lacs

bengaluru, karnataka, india

On-site

Job Title: Data Scientist – Agentic AI & MLOps
Location: Bangalore - Hybrid (3 days work from office, 2 days from home)

About Us: Our client delivers next-generation security analytics and operations management. They secure organisations worldwide by staying ahead of cyber threats, leveraging AI-reinforced capabilities for unparalleled protection.

Job Overview: We're seeking a Senior Data Scientist to architect agentic AI solutions and own the full ML lifecycle, from proof-of-concept to production. You'll operationalise LLMs, build agentic workflows, implement MLOps best practices, and design multi-agent systems for cybersecurity tasks.

Key Responsibilities:
• Operationalise large language models and agentic workflows (LangChain, LangGraph, LlamaIndex, Crew.AI) to automate security decision-making and threat response.
• Design, deploy, and maintain multi-agent AI systems for log analysis, anomaly detection, and incident response.
• Build proof-of-concept GenAI solutions and evolve them into production-ready components on AWS (Bedrock, SageMaker, Lambda, EKS/ECS) using reusable best practices.
• Implement CI/CD pipelines for model training, validation, and deployment with GitHub Actions, Jenkins, and AWS CodePipeline.
• Manage model versioning with MLflow and DVC; set up automated testing, rollback procedures, and retraining workflows.
• Automate cloud infrastructure provisioning with Terraform and develop REST APIs and microservices containerised with Docker and Kubernetes.
• Monitor models and infrastructure through CloudWatch, Prometheus, and Grafana; analyse performance and optimise for cost and SLA compliance.
• Collaborate with data scientists, application developers, and security analysts to integrate agentic AI into existing security workflows.

Qualifications:
• Bachelor's or Master's in Computer Science, Data Science, AI or a related quantitative discipline.
• 4+ years of software development experience, including 3+ years building and deploying LLM-based/agentic AI architectures.
• In-depth knowledge of generative AI fundamentals (LLMs, embeddings, vector databases, prompt engineering, RAG).
• Hands-on experience with LangChain, LangGraph, LlamaIndex, Crew.AI or equivalent agentic frameworks.
• Strong proficiency in Python and production-grade coding for data pipelines and AI workflows.
• Deep MLOps knowledge: CI/CD for ML, model monitoring, automated retraining, and production-quality best practices.
• Extensive AWS experience with Bedrock, SageMaker, Lambda, EKS/ECS, S3 (Athena, Glue, Snowflake preferred).
• Infrastructure as Code skills with Terraform.
• Experience building REST APIs, microservices, and containerisation with Docker and Kubernetes.
• Solid data science fundamentals: feature engineering, model evaluation, data ingestion.
• Understanding of cybersecurity principles, SIEM data, and incident response.
• Excellent communication skills for both technical and non-technical audiences.

Preferred Qualifications:
• AWS certifications (Solutions Architect, Developer Associate).
• Experience with Model Context Protocol (MCP) and RAG integrations.
• Familiarity with workflow orchestration tools (Apache Airflow).
• Experience with time series analysis, anomaly detection, and machine learning.
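
Postings like this lean heavily on model versioning with MLflow. As a rough, self-contained illustration (the experiment and registered model names are invented, and the model registry requires a database-backed tracking server), registering a model looks like this:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-in training data and model; a real pipeline would load security telemetry.
X, y = make_classification(n_samples=500, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

mlflow.set_experiment("anomaly-detection")  # hypothetical experiment name
with mlflow.start_run():
    mlflow.log_param("n_estimators", model.n_estimators)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Registering under a name enables stage transitions, rollout, and rollback.
    mlflow.sklearn.log_model(
        model, "model", registered_model_name="threat-classifier"
    )
```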

Posted 2 days ago

Apply

5.0 years

0 Lacs

india

On-site

We are looking for a Senior Python/Django Engineer to join our team, with exposure to MLOps and cloud infrastructure. This role is primarily focused on backend server development, where you will design, build, and maintain scalable services in a Django-based environment. In addition, you will take ownership of MLOps responsibilities, including deploying and supporting machine learning models, as well as managing cloud infrastructure on AWS. This is a hands-on engineering role that requires strong coding skills in Python/Django along with practical experience in cloud and MLOps workflows.

Key Responsibilities

Backend Development (Primary)
• Design, implement, and maintain Python/Django applications and APIs.
• Contribute directly to the server repository by developing new features and improving existing functionality.
• Ensure scalability, security, and performance of backend services.

MLOps & Model Integration
• Deploy and manage ML/NLP models in production environments (e.g., SageMaker, Bedrock).
• Integrate ML endpoints with Django services and monitor performance.
• Support vector databases, LLM workflows, and related pipelines.

Cloud Infrastructure (30–40% of role)
• Manage and optimize AWS resources (EC2, ECS, RDS, Lambda, VPC).
• Implement Infrastructure-as-Code using Terraform, Docker, and Ansible.
• Ensure infrastructure security, performance, and cost efficiency.

Job Scheduling & Messaging
• Build and maintain asynchronous workflows using Celery, SQS, and SNS (see the sketch after this posting).
• Ensure timely and reliable data processing across pipelines.

CI/CD & Automation
• Maintain and improve CI/CD pipelines (CircleCI preferred).
• Automate testing, deployments, and rollback strategies for backend and ML services.

Monitoring & Logging
• Implement dashboards, alerts, and centralized logging (CloudWatch, Prometheus, Grafana, ELK).
• Proactively identify and resolve incidents in production.

Qualifications
• 5+ years of experience in Python/Django backend development
• Strong understanding of REST APIs, relational databases, and server-side architecture
• Hands-on experience with AWS services and infrastructure-as-code (Terraform, Docker, Ansible)
• Familiarity with MLOps concepts and experience deploying ML models in production
• Experience with job scheduling/messaging tools (Celery, SQS, SNS)
• Proficiency in CI/CD pipelines and modern DevOps practices
• Ability to work independently and collaborate with cross-functional teams

Nice to Have
• Exposure to NLP, vector databases, or LLM frameworks
• Knowledge of healthcare standards (HL7, Mirth Connect) and compliance requirements (HIPAA, SOC2)
• Experience scaling systems in a startup or high-growth environment
• Open-source contributions or community involvement
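
The Celery/SQS portion of a role like this tends to reduce to task definitions along the following lines. This is a minimal sketch under stated assumptions: the empty sqs:// broker URL picks up AWS credentials from the environment (requires the celery[sqs] extra), and the app, queue, and task names are invented:

```python
from celery import Celery

app = Celery("backend", broker="sqs://")  # hypothetical app name; SQS transport
app.conf.task_default_queue = "ml-jobs"   # hypothetical queue name

@app.task(bind=True, max_retries=3, default_retry_delay=30)
def score_document(self, document_id: str) -> dict:
    """Fetch a document, call a model endpoint, and persist the score."""
    try:
        # ... fetch the document, invoke the ML endpoint, store the result ...
        return {"document_id": document_id, "status": "scored"}
    except Exception as exc:
        # Retry transient failures; SQS redelivery covers worker crashes.
        raise self.retry(exc=exc)
```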

Posted 2 days ago

Apply

1.0 years

0 Lacs

india

On-site

Unlock the future of intelligent automation with a visionary AI Developer who excels in crafting agentic AI applications that effortlessly connect with structured databases. If you're passionate about pushing technological boundaries, possess deep expertise in LLMs, RAG, and orchestration frameworks, and thrive on transforming complex business needs into scalable, AI-driven solutions, this is your opportunity to make a transformative impact.

Key Responsibilities
• Design and develop agentic bots powered by Large Language Models (LLMs) to interact with structured databases (SQL, Snowflake, etc.).
• Integrate AI agents with external tools and data sources using Model Context Protocol (MCP).
• Implement RAG pipelines using vector databases (e.g., Pinecone, FAISS, Weaviate, ChromaDB) to enable accurate information retrieval.
• Build and optimize conversational workflows leveraging frameworks such as LangChain, LlamaIndex, or similar orchestration tools.
• Develop natural language-to-SQL mapping systems that allow users to query structured databases via conversational interfaces (see the sketch after this posting).
• Collaborate with data engineers, data scientists, and business stakeholders to define requirements, integrate datasets, and deliver impactful AI use cases.
• Deploy, monitor, and optimize AI applications in production using MLOps practices (CI/CD, containerization, model versioning).
• Ensure AI systems comply with data security, privacy, and governance standards.
• Stay current with emerging AI/ML tools, frameworks, and best practices, proactively bringing innovation into solutions.

Qualifications
• Bachelor's or Master's in Computer Science, AI/ML, Data Science, or a related field, with 1+ years in AI/ML development.
• Strong programming skills in Python with hands-on experience building APIs and backend services.
• Hands-on experience with Model Context Protocol (MCP) to extend AI agents' capabilities with external tools and context.
• Experience with LLMs and GenAI APIs (OpenAI, Anthropic, Hugging Face, Azure OpenAI), and AI orchestration frameworks (LangChain, LlamaIndex, or equivalent).
• Experience integrating AI applications with structured databases (Snowflake, Postgres, MySQL) and working with SQL.
• Familiarity with vector databases and embedding models for semantic search and RAG.

Preferred Skills
• Knowledge of MLOps practices (CI/CD, Docker, Kubernetes, MLflow).
• Experience deploying AI apps on AWS (SageMaker, Lambda, S3) or other cloud platforms.
• Knowledge of Power BI or Tableau for integrating AI outputs into business intelligence dashboards.
• Prior experience building chatbots, virtual assistants, or intelligent automation solutions.

Benefits
Health Insurance, Accident Insurance. The salary will be determined based on several factors, including, but not limited to, location, relevant education, qualifications, experience, technical skills, and business needs.

Additional Responsibilities
• Participate in OP monthly team meetings, and participate in team-building efforts.
• Contribute to OP technical discussions, peer reviews, etc.
• Contribute content and collaborate via the OP-Wiki/Knowledge Base.
• Provide status reports to OP Account Management as requested.

About Us
OP is a technology consulting and solutions company, offering advisory and managed services, innovative platforms, and staffing solutions across a wide range of fields, including AI, cyber security, enterprise architecture, and beyond. Our most valuable asset is our people: dynamic, creative thinkers, who are passionate about doing quality work.
As a member of the OP team, you will have access to industry-leading consulting practices, strategies, and technologies, along with innovative training and education. An ideal OP team member is a technology leader with a proven track record of technical excellence and a strong focus on process and methodology.
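
The natural language-to-SQL responsibility above can be prototyped against any chat-completion API. The sketch below uses the OpenAI Python client as one possible backend; the model name and toy schema are assumptions, and a production system would validate and parameterize the generated SQL before executing it:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical single-table schema for the example.
SCHEMA = "orders(order_id INT, customer TEXT, amount NUMERIC, created_at DATE)"

def question_to_sql(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        temperature=0,
        messages=[
            {
                "role": "system",
                "content": (
                    "Translate the user's question into a single read-only SQL "
                    f"query against this schema: {SCHEMA}. Return SQL only."
                ),
            },
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content.strip()

print(question_to_sql("Total order amount per customer last month?"))
```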

Posted 2 days ago

Apply

0 years

0 Lacs

chennai, tamil nadu, india

On-site

Job Requirements
Role/Job Title: Python Developer
Function/Department: Information Technology

Job Purpose
As a Backend Developer, you will play a crucial role in designing, developing, and maintaining complex backend systems. You will work closely with cross-functional teams to deliver high-quality software solutions and drive the technical direction of our projects. Your experience and expertise will be vital in ensuring the performance, scalability, and reliability of our applications.

Roles and Responsibilities:
• Solid understanding of backend performance optimization and debugging.
• Formal training or certification in software engineering concepts, with proficient applied experience.
• Strong hands-on experience with Python.
• Experience in developing microservices using Python with FastAPI (a minimal example follows this posting).
• Commercial experience in both backend and frontend engineering.
• Hands-on experience with AWS cloud-based application development, including EC2, ECS, EKS, Lambda, SQS, SNS, RDS Aurora MySQL & Postgres, DynamoDB, EMR, and Kinesis.
• Strong engineering background in machine learning, deep learning, and neural networks.
• Experience with a containerized stack using Kubernetes or ECS for development, deployment, and configuration.
• Experience with Single Sign-On/OIDC integration and a deep understanding of OAuth, JWT/JWE/JWS.
• Knowledge of AWS SageMaker and data analytics tools.
• Proficiency in frameworks such as TensorFlow, PyTorch, or similar.

Educational Qualification (Full-time)
Bachelor of Technology (B.Tech) / Bachelor of Science (B.Sc) / Master of Science (M.Sc) / Master of Technology (M.Tech) / Bachelor of Computer Applications (BCA) / Master of Computer Applications (MCA)
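
A FastAPI microservice of the kind this role describes might begin like the sketch below; the route, request schema, and scoring logic are placeholders rather than anything from the posting:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="scoring-service")  # hypothetical service name

class PredictRequest(BaseModel):
    features: list[float]

class PredictResponse(BaseModel):
    score: float

@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    # Placeholder logic; a real service would call a loaded model here.
    score = sum(req.features) / max(len(req.features), 1)
    return PredictResponse(score=score)
```

Saved as main.py, this runs locally with uvicorn main:app --reload.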

Posted 2 days ago

Apply

3.0 - 6.0 years

2 - 4 Lacs

cochin

On-site

We are seeking talented AWS AI Engineers / Developers to join our team in building innovative agentic AI solutions. This role focuses on implementing intelligent, automated workflows that enhance clinical data processing and decision-making, leveraging the power of AWS services and cutting-edge open-source AI frameworks. Experience in healthcare or Life Sciences, with a strong focus on regulatory compliance, is highly desirable.

Requirements
• Design, develop, and implement agentic AI workflows for clinical source verification, discrepancy detection, and intelligent query generation to enhance data quality and reliability.
• Build and integrate LLM-powered agents using AWS Bedrock in combination with open-source frameworks like LangChain and AutoGen (a Bedrock invocation sketch follows this posting).
• Create robust, event-driven pipelines leveraging AWS Lambda, Step Functions, and EventBridge for seamless orchestration of data and processes.
• Optimize prompt engineering techniques, implement retrieval-augmented generation (RAG), and facilitate efficient multi-agent communication.
• Integrate AI agents securely with external applications and services through well-defined, secure APIs.
• Collaborate with Data Engineering teams to design and maintain PHI/PII-safe data ingestion pipelines, ensuring compliance with privacy regulations.
• Continuously monitor, test, and fine-tune AI workflows, focusing on improving accuracy, reducing latency, and ensuring compliance with industry standards.
• Document solutions and contribute to the establishment of best practices and governance frameworks.

What we expect from you:
• Bachelor's degree in Computer Science, Engineering, or a related technical field.
• 3–6 years of hands-on experience in AI/ML engineering, with specific expertise in LLM-based or agentic AI development and deployment.
• Strong programming skills in Python and/or TypeScript.
• Practical experience with agentic AI frameworks such as LangChain, LlamaIndex, or AutoGen.
• Solid understanding of AWS AI services including Bedrock, SageMaker, Textract, and Comprehend Medical.
• Proven experience in API development and integration, as well as designing event-driven system architectures.
• Knowledge of healthcare or Life Sciences domains, including regulatory compliance requirements (HIPAA, GDPR, etc.), is preferred.
• Strong problem-solving mindset, with a focus on experimentation, iteration, and delivering innovative solutions rapidly.

What you've got:
• Effective communication and collaboration skills, with the ability to work closely with cross-functional teams.
• Passion for emerging AI technologies and cloud innovations.
• Prior exposure to clinical or life sciences data workflows is a strong advantage.
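
At the center of this stack is invoking a foundation model through AWS Bedrock, which with boto3 looks roughly like the sketch below. The region, model ID, and prompt are assumptions, and the request body format is model-family specific (this one follows Anthropic's Messages format on Bedrock):

```python
import json

import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [
        {
            "role": "user",
            "content": "List discrepancies between these two visit records: ...",
        }
    ],
}

resp = client.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
    body=json.dumps(body),
)

# The response body is a stream; parse it and print the model's text.
payload = json.loads(resp["body"].read())
print(payload["content"][0]["text"])
```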

Posted 2 days ago

Apply

7.0 years

12 - 24 Lacs

hyderābād

On-site

Core Requirements:
• 7+ years of total IT experience, including at least 3 years in MLOps.
• Experience designing and implementing MLOps pipelines using MLflow, Apache Airflow, and CI/CD tools (a skeletal example follows this posting).
• Configuration tools: Ansible, Terraform.
• Strong proficiency in Python, PySpark, and SQL.
• Machine Learning (ML), Deep Learning, NLP.
• Cloud: AWS (preferred), but Azure acceptable.
• AWS tools: SageMaker, Bedrock, EC2, Lambda, S3, Glue Tables.
• Databricks.
• Solid knowledge of the ML lifecycle: data ingestion → training → deployment → monitoring.
• Containerization, versioning, ML model deployment and monitoring.
• Experience working with models using RNN, CNN, GNN, GAN.

Role Responsibilities:
• Build and maintain robust MLOps pipelines.
• Automate testing, validation, and deployment of models.
• Work with data scientists to translate prototypes into production.
• Optimize ML code and monitor performance.
• Collaborate with data scientists, data engineers, and architects.
• Document processes and workflows.
• Communicate effectively with technical teams.

Preferred Experience (Nice-to-Have):
• Exposure to both AWS and Azure (AWS preferred).
• Familiarity with AI/ML Ops as a primary skillset; Infrastructure Ops as a secondary skill.

Soft Skills:
• Strong communication skills (verbal and written).
• Ability to interpret data/metrics.
• Analytical mindset.
• Team collaboration.

Job Type: Full-time
Pay: ₹100,000.00 - ₹200,000.00 per month
Experience:
• MLOps: 7 years (Required)
• Python, PySpark, SQL: 7 years (Required)
• Machine Learning (ML), Deep Learning, NLP: 7 years (Required)
• AWS tools (SageMaker, Bedrock, EC2, Lambda, S3, Glue Tables): 7 years (Required)
• Databricks: 7 years (Required)
Work Location: In person
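
As a rough picture of the MLflow/Airflow pipeline work this posting describes, here is a skeletal Airflow DAG for a weekly retraining job. The DAG ID, schedule, and task bodies are invented stubs, and the schedule= argument is the Airflow 2.4+ spelling (older versions use schedule_interval=):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Stub task bodies; real implementations would pull data, train, and register.
def ingest(): ...
def train(): ...
def evaluate_and_register(): ...

with DAG(
    dag_id="weekly_retrain",        # hypothetical DAG ID
    start_date=datetime(2024, 1, 1),
    schedule="@weekly",
    catchup=False,
) as dag:
    t_ingest = PythonOperator(task_id="ingest", python_callable=ingest)
    t_train = PythonOperator(task_id="train", python_callable=train)
    t_register = PythonOperator(
        task_id="evaluate_and_register", python_callable=evaluate_and_register
    )
    t_ingest >> t_train >> t_register
```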

Posted 2 days ago

Apply

5.0 years

0 Lacs

hyderābād

On-site

Job Title: AI Engineer
Experience Level: 5+ years
Location: Hyderabad

Job Summary
We are seeking a highly experienced and innovative Senior AI Engineer to join our rapidly expanding AI/ML team. With 5+ years of hands-on experience, you will be instrumental in the end-to-end lifecycle of AI/ML solutions, from research and prototyping to developing, deploying, and maintaining production-grade intelligent systems. You will work on challenging problems, collaborate closely with data scientists, product managers, and software engineers, and drive the adoption of cutting-edge AI technologies across our platforms.

Key Responsibilities
• Model Development & Implementation: Design, develop, train, and optimize robust and scalable machine learning models (e.g., deep learning, classical ML algorithms) for various applications.
• Production Deployment (MLOps): Build and maintain end-to-end MLOps pipelines for model deployment, monitoring, versioning, and retraining, ensuring reliability and performance in production environments.
• Data Engineering for AI: Work with large, complex datasets, performing data cleaning, feature engineering, and data pipeline development to prepare data for model training and inference.
• Research & Prototyping: Explore and evaluate new AI/ML technologies, algorithms, and research papers to identify opportunities for innovation and competitive advantage. Rapidly prototype and test new ideas.
• Performance Optimization: Optimize AI/ML models and inference systems for speed, efficiency, and resource utilization.
• Collaboration & Communication: Partner closely with data scientists to transition research prototypes into production-ready solutions. Collaborate with software engineers to integrate AI models into existing products and platforms. Communicate complex AI concepts to both technical and non-technical stakeholders.
• Code Quality & Best Practices: Write clean, maintainable, and well-documented code. Advocate for and implement software engineering best practices within the AI/ML lifecycle.
• Monitoring & Maintenance: Implement robust monitoring for model performance, data drift, and system health in production, and troubleshoot issues as they arise (see the drift-detection sketch after this posting).

Required Qualifications
• Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
• 5+ years of hands-on experience as an AI Engineer, Machine Learning Engineer, or a similar role focused on building and deploying AI/ML solutions.
• Strong proficiency in Python and its relevant ML/data science libraries (e.g., NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch).
• Extensive experience with at least one major deep learning framework such as TensorFlow, PyTorch, or Keras.
• Solid understanding of machine learning principles, algorithms (e.g., regression, classification, clustering, ensemble methods), and statistical modeling.
• Experience with cloud platforms (e.g., AWS, Azure, GCP) and their AI/ML services (e.g., SageMaker, Azure ML, Vertex AI).
• Proven experience with MLOps concepts and tools for model lifecycle management (e.g., MLflow, Kubeflow, SageMaker MLOps, Azure ML Pipelines).
• Strong SQL skills for data manipulation, analysis, and feature extraction from relational databases.
• Experience with data preprocessing, feature engineering, and handling large datasets.
• Familiarity with software development best practices including version control (Git), code reviews, and CI/CD.
• Excellent problem-solving skills and the ability to work on complex, ambiguous problems.
• Strong communication and collaboration skills to work effectively with cross-functional teams.

Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you'll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company's strategy and direction. Your work creates value. You'll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You'll shape your career by joining a company built to grow and last. You'll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.
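
Monitoring for data drift, called out in the responsibilities above, is often approximated with a per-feature two-sample Kolmogorov-Smirnov test. A minimal sketch with synthetic data follows; the 0.05 threshold is a conventional default, not a universal rule:

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(train_col, live_col, alpha: float = 0.05) -> bool:
    """Return True when the two samples likely come from different distributions."""
    _stat, p_value = ks_2samp(train_col, live_col)
    return p_value < alpha

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5_000)  # stand-in for a training feature column
live = rng.normal(0.4, 1.0, 5_000)   # simulated, slightly shifted production data
print("drift detected:", detect_drift(train, live))
```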

Posted 2 days ago

Apply

11.0 years

0 Lacs

bengaluru, karnataka, india

On-site

Company Description
KPMG in India, established in 1993, has quickly developed a strong competitive presence. Operating from offices across 14 cities including Ahmedabad, Bengaluru, Chandigarh, Chennai, Gurugram, Hyderabad, Jaipur, Kochi, Kolkata, Mumbai, Noida, Pune, Vadodara, and Vijayawada, KPMG serves over 2700 domestic clients. Our services are industry-tailored, technology-enabled, and delivered by highly talented professionals. We provide value-added services through a global approach and are known for our performance-based advisory services.

Location: Bangalore

Role Description

Senior AI Engineer | GenAI | NLP | Cloud AI
Experience: 4–9 years
Designation: Assistant Manager / Consultant
• Scalable AI with LLMs, RAG, embeddings
• Azure ML, SageMaker, Vertex AI
• Microservices, MLOps, Python, LangChain
• Leading teams & driving Responsible AI

AI PMO | Program Management | GenAI | Cloud
Experience: 6–11 years in program/project management, preferably in tech or consulting
Designation: Assistant Manager / Manager
• Driving AI initiatives across POCs & platforms
• Budget/GPU tracking, cloud cost optimization
• Standardizing MLOps docs & delivery playbooks
• Reporting KPIs, risks, and program health

AI Full Stack Developer | GenAI | Web Apps
Experience: 4–9 years
Designation: Consultant / Senior Consultant
• React/Next.js + FastAPI for AI-powered apps
• LLMs, RAG, vector DBs (FAISS, Pinecone)
• Cloud-native deployments with Docker/K8s
• End-to-end integration with AI/DevOps teams

AI DevOps Engineer | MLOps | LLMs | Cloud
Experience: 3.5–6 years
Designation: Consultant
• CI/CD for ML/LLM models (MLflow, Kubeflow)
• Model lifecycle, vector DBs, inference services
• Monitoring with Prometheus, Grafana
• Cloud deployments with Terraform & Kubernetes

Posted 2 days ago

Apply

7.0 years

0 Lacs

hyderabad, telangana, india

On-site

Primary skills needed: AI & ML Ops
Secondary: Infra Ops
Tertiary: AWS/Azure

Experience and Skills
• Experience: 7+ years in data engineering, 3+ years as an MLOps engineer.
• MLOps Pipelines: Design, develop, and implement using tools like MLflow and Apache Airflow.
• Cloud Experience: Working experience with AWS and Databricks.
• Technical Proficiency: Strong in Python, PySpark, SQL, machine learning, NLP, deep learning, and AWS services (SageMaker, Bedrock, EC2, Lambda, S3, Glue Tables).
• Configuration Management: Experience with Ansible, Terraform, and building CI/CD pipelines.

Responsibilities
• Automate the ML lifecycle: from data ingestion to deployment and monitoring.
• Collaborate with data scientists: understand the data science lifecycle and translate needs into production-ready solutions.
• Model deployment and monitoring: strong in ML model deployment, AI/ML pipelines, and model monitoring.
• Communication: excellent written and verbal communication skills.

Desired Qualities
• Analytical mindset: ability to interpret data and metrics.
• Collaboration: technical proficiency and ability to work with technical teams.
• ML techniques: familiarity with algorithms like RNN, CNN, GNN, GAN.

Posted 2 days ago

Apply

Exploring Sagemaker Jobs in India

Amazon SageMaker is a rapidly growing skill area in India, with many companies looking to hire professionals with expertise on the platform. Whether you are a seasoned professional or a newcomer to the tech industry, there are plenty of opportunities waiting for you in the SageMaker job market.

Top Hiring Locations in India

If you are looking to land a SageMaker job in India, here are the top 5 cities where companies are actively hiring for roles in this field:

  • Bangalore
  • Hyderabad
  • Pune
  • Mumbai
  • Chennai

Average Salary Range

The salary range for SageMaker professionals in India can vary based on experience and location. On average, entry-level professionals can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.

Career Path

In the SageMaker field, a typical career progression may look like this:

  • Junior SageMaker Developer
  • SageMaker Developer
  • Senior SageMaker Developer
  • SageMaker Tech Lead

Related Skills

In addition to expertise in SageMaker, professionals in this field are often expected to have knowledge of the following skills:

  • Machine Learning
  • Data Science
  • Python programming
  • Cloud computing (AWS)
  • Deep learning

Interview Questions

Here is a selection of interview questions that you may encounter when applying for SageMaker roles, categorized by difficulty level; each set is followed by a short, illustrative code sketch:

  • Basic:
  • What is Amazon SageMaker?
  • How does SageMaker differ from a traditional, self-managed machine learning workflow?
  • What is a SageMaker notebook instance?
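
To ground the basics, launching a training job with the SageMaker Python SDK looks roughly like this sketch. The IAM role ARN, bucket, and train.py script are placeholders you would supply yourself:

```python
from sagemaker.sklearn import SKLearn

estimator = SKLearn(
    entry_point="train.py",  # your training script (placeholder)
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",  # placeholder ARN
    instance_type="ml.m5.large",
    instance_count=1,
    framework_version="1.2-1",
)

# Channels map names to S3 prefixes that SageMaker mounts into the container.
estimator.fit({"train": "s3://example-bucket/data/train/"})
```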

  • Medium:

  • How do you deploy a model in SageMaker?
  • Can you explain the process of hyperparameter tuning in SageMaker?
  • What is the difference between SageMaker Ground Truth and SageMaker Processing?
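
For the deployment and tuning questions, a hedged sketch with the SageMaker Python SDK follows. It assumes the estimator from the previous sketch, and the metric regex must match whatever your training script actually logs:

```python
from sagemaker.tuner import (
    ContinuousParameter,
    HyperparameterTuner,
    IntegerParameter,
)

tuner = HyperparameterTuner(
    estimator=estimator,  # from the previous sketch
    objective_metric_name="validation:accuracy",
    hyperparameter_ranges={
        "max_depth": IntegerParameter(2, 10),
        "learning_rate": ContinuousParameter(0.001, 0.1),
    },
    metric_definitions=[
        {"Name": "validation:accuracy", "Regex": "validation accuracy: ([0-9\\.]+)"}
    ],
    max_jobs=10,
    max_parallel_jobs=2,
)
tuner.fit({"train": "s3://example-bucket/data/train/"})

# Deployment: the best training job's model becomes a managed HTTPS endpoint.
predictor = tuner.best_estimator().deploy(
    initial_instance_count=1, instance_type="ml.m5.large"
)
```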

  • Advanced:

  • How would you handle model drift in a SageMaker deployment?
  • Can you compare SageMaker with other machine learning platforms in terms of scalability and flexibility?
  • How do you optimize a SageMaker model for cost efficiency?
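
On the cost-efficiency question, one commonly cited lever is serverless inference, which bills per request rather than per instance-hour and suits spiky, low-volume traffic. The sketch below assumes a model object already built with the SageMaker Python SDK; the memory and concurrency values are illustrative:

```python
from sagemaker.serverless import ServerlessInferenceConfig

serverless_config = ServerlessInferenceConfig(
    memory_size_in_mb=2048,  # illustrative; must be a supported size
    max_concurrency=5,
)

# `model` is assumed to be a sagemaker.model.Model (or a framework subclass).
predictor = model.deploy(serverless_inference_config=serverless_config)
```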

Closing Remark

As you explore opportunities in the SageMaker job market in India, remember to hone your skills, stay updated with industry trends, and approach interviews with confidence. With the right preparation and mindset, you can land your dream job in this exciting and evolving field. Good luck!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
