
661 SageMaker Jobs - Page 11

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0.0 - 2.0 years

0 Lacs

Salem, Tamil Nadu

On-site

Source: Indeed

Job Description

About the Role:
As a Subject Matter Expert (SME) in Data Annotation, you will play a critical role in ensuring the highest quality of data labelling across various projects. You will act as a technical and domain expert, mentor annotation teams, establish annotation guidelines, conduct quality audits, and support client and internal teams with domain-specific insights.

Tools Experience Expected: CVAT, Amazon SageMaker, BasicAI, LabelStudio, SuperAnnotate, Loft, Cogito, Roboflow, Slicer3D, Mindkosh, Kognic, Praat

Annotation Expertise Areas:
- Image/Video: Bounding Box, Polygon, Semantic Segmentation, Keypoints
- 3D Point Cloud: LiDAR Annotation, 3D Cuboids, Semantic Segmentation
- Audio Annotation: Speech, Noise Labelling, Transcription
- Text Annotation: NER, Sentiment Analysis, Intent Detection, NLP tasks
- Exposure to LLM and Generative AI data annotation tasks (prompt generation, evaluation)

Key Responsibilities:
- Act as a Subject Matter Expert to guide annotation standards, processes, and best practices.
- Create, refine, and maintain detailed annotation guidelines and ensure adherence across teams.
- Conduct quality audits and reviews to maintain high annotation accuracy and consistency.
- Provide domain-specific training to Data Annotators and Team Leads.
- Collaborate closely with Project Managers, Data Scientists, and Engineering teams on dataset quality assurance.
- Resolve complex annotation issues and edge cases with data-centric solutions.
- Stay current with advancements in AI/ML and annotation technologies and apply innovative methods.
- Support pre-sales and client discussions as an annotation domain expert, when required.
Key Performance Indicators (KPIs):
- Annotation quality and consistency across projects
- Successful training and upskilling of annotation teams
- Timely resolution of annotation queries and technical challenges
- Documentation of guidelines and standards
- Client satisfaction with annotation quality benchmarks

Qualifications:
- Bachelor's or Master's degree in a relevant field (Computer Science, AI/ML, Data Science, Linguistics, Engineering, etc.)
- 3–6 years of hands-on experience in data annotation, with exposure to multiple domains (vision, audio, text, 3D).
- Deep understanding of annotation processes, tool expertise, and quality standards.
- Prior experience in quality control, QA audits, or an SME role in annotation projects.
- Strong communication skills to deliver training, documentation, and client presentations.
- Familiarity with AI/ML workflows, data preprocessing, and dataset management concepts is highly desirable.

Work Location: In person (Salem, Tamil Nadu)
Schedule: Day shift, Monday to Saturday; weekend availability required
Supplemental Pay: Overtime pay, performance bonus, shift allowance, yearly bonus
Languages Required: Tamil (oral communication is a must), English, Hindi (preferred)
Contact: 9489979523 (HR)
Job Type: Full-time
Pay: ₹25,000.00 - ₹30,000.00 per month
Experience: data annotation: 2 years (Preferred)
Apply Now
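The quality audits this posting describes for bounding-box annotation are commonly grounded in an overlap metric. As an illustrative sketch (not part of the posting), the intersection-over-union (IoU) between an annotator's box and a reference box can be computed in plain Python; boxes here are assumed to be (x_min, y_min, x_max, y_max) tuples:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x_min, y_min, x_max, y_max)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (zero width/height if the boxes do not intersect).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```

An audit workflow might flag any annotation whose IoU against the gold box falls below a project-specific threshold such as 0.5; the threshold here is an assumption, not something the posting specifies.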

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

We are seeking a talented and versatile Analytics & AI Specialist to join our dynamic team. This role combines expertise in General Analytics, Artificial Intelligence (AI), Generative AI (GenAI), forecasting techniques, and client management to deliver innovative solutions that drive business success. The ideal candidate will work closely with clients, leverage AI technologies to enhance data-driven decision-making, and apply forecasting models to predict business trends and outcomes.

AI & Machine Learning:
- Experience with machine learning frameworks and libraries (e.g., TensorFlow, scikit-learn, PyTorch, Keras).
- Knowledge of Generative AI (GenAI) tools and technologies, including GPT models, GANs (Generative Adversarial Networks), and transformer models.
- Familiarity with AI cloud platforms (e.g., Google AI, AWS SageMaker, Azure AI).

Forecasting:
- Expertise in time series forecasting methods (e.g., ARIMA, Exponential Smoothing, Prophet) and machine learning-based forecasting models.
- Experience applying predictive analytics and building forecasting models for demand, sales, and resource planning.

Data Visualization & Reporting:
- Expertise in creating interactive reports and dashboards with tools like Tableau, Power BI, or Google Data Studio.
- Ability to present complex analytics and forecasting results clearly and compellingly to stakeholders.

Client Management & Communication:
- Strong client-facing skills with the ability to manage relationships and communicate complex technical concepts to non-technical audiences.
- Ability to consult and guide clients on best practices for implementing AI-driven solutions.
- Excellent written and verbal communication skills for client presentations, technical documentation, and report writing.

Additional Skills:
- Project Management: Experience managing data analytics projects from inception to completion, ensuring deadlines and objectives are met.
- Cloud Platforms: Experience with cloud platforms (AWS, GCP, Azure) for deploying AI models, handling large datasets, and performing distributed computing.
- Business Acumen: A strong understanding of business KPIs and the ability to align AI and analytics projects with client business goals.
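Of the forecasting methods this posting names, simple exponential smoothing is the easiest to illustrate. A minimal sketch in plain Python (in practice one would reach for statsmodels or Prophet rather than hand-rolling it): the smoothed level follows s_t = alpha * x_t + (1 - alpha) * s_{t-1}, and the one-step-ahead forecast is the final level.

```python
def exp_smooth_forecast(series, alpha=0.5):
    """Simple exponential smoothing: return the one-step-ahead forecast.

    series: iterable of observed values (at least one); alpha: smoothing
    factor in (0, 1]. Higher alpha weights recent observations more heavily.
    """
    it = iter(series)
    level = next(it)  # initialize the level with the first observation
    for x in it:
        level = alpha * x + (1 - alpha) * level
    return level
```

A flat series forecasts its own constant value, and alpha = 1.0 degenerates to the naive "last observation" forecast, which is a useful sanity check on any implementation.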

Posted 1 week ago

Apply

15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts.

Job Category: Software Engineering

Job Details

About Salesforce
We're Salesforce, the Customer Company, inspiring the future of business with AI + Data + CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And, we empower you to be a Trailblazer, too, driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good, you've come to the right place.

At Salesforce, we're not just leading with technology, we're inspiring the future of business with AI + Data + CRM. As a Customer Company, we help businesses blaze new trails and build meaningful connections. If you're passionate about driving change and innovating at scale, this is your opportunity!

We're Hiring: Director, Data Science & ML Engineering - Marketing AI/ML Algorithms

As part of the Marketing AI/ML Algorithms team, you'll play a pivotal role in driving AI-powered marketing initiatives. We're seeking an experienced leader in data science, data engineering, and machine learning (ML) engineering to help us shape the future of marketing at Salesforce. With your expertise, you'll lead global teams and build cutting-edge AI/ML solutions to optimize marketing efforts and customer experiences at scale.

What You'll Do
- Lead & Innovate: Manage data scientists, data engineers, and ML engineers to develop and deploy AI/ML models, pipelines, and algorithms at scale.
- Transform Marketing: Design and deliver ML algorithms and statistical models to enhance marketing strategies and personalized customer experiences.
- Drive Full Lifecycle Development: From ideation and data exploration to deployment, monitoring, and optimization of AI/ML models in production.
- Engineer Excellence: Oversee the development of scalable data pipelines, integrating data from various sources and leveraging advanced platforms like Snowflake and AWS.
- Optimize for Impact: Create a culture of innovation and excellence while ensuring reliable delivery of AI/ML solutions to meet business needs.
- Lead by Example: Inspire creativity, innovation, and high performance while building a strong technical team that thrives on collaboration.

What You'll Bring
- Advanced Expertise: 15-20+ years in data science and machine learning, with a deep understanding of algorithms, including deep learning, regression models, and neural networks.
- Leadership Excellence: 8-10+ years of experience managing high-performing teams and large-scale AI/ML projects. A track record of driving talent recruitment and retention in technical teams.
- Tech Mastery: Proficient in SQL, Python, Java, and PySpark, and experienced with Snowflake, AWS SageMaker, dbt, and Airflow.
- Scalability & Efficiency: Experience building fault-tolerant, high-performing data pipelines and ensuring seamless AI/ML algorithm execution in production.
- Strategic Thinker: Strong communicator who simplifies complex problems and develops impactful, creative solutions.
- Bonus Points: Experience with Salesforce products and B2B customer data is a plus!

Why Salesforce?
- Work in a dynamic, values-driven environment where AI-powered innovation is at the heart of everything we do.
- Collaborate with industry leaders on projects that drive real business transformation.
- Unlock career growth opportunities and help shape the future of AI and marketing at one of the world's most trusted companies.

Are You Ready to Join Us?
If you're passionate about AI, machine learning, and creating cutting-edge solutions at scale, this is your chance to make an impact. Apply now to be a part of our Trailblazer journey at Salesforce! Let's shape the future of business together.
Accommodations
If you require assistance due to a disability applying for open positions, please submit a request via this Accommodations Request Form.

Posting Statement
Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that's inclusive, and free from discrimination. Know your rights: workplace discrimination is illegal. Any employee or potential employee will be assessed on the basis of merit, competence and qualifications, without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Working as an AI/ML Engineer at Navtech, you will:
- Design, develop, and deploy machine learning models for classification, regression, clustering, recommendations, or NLP tasks.
- Clean, preprocess, and analyze large datasets to extract meaningful insights and features.
- Work closely with data engineers to develop scalable and reliable data pipelines.
- Experiment with different algorithms and techniques to improve model performance.
- Monitor and maintain production ML models, including retraining and model drift detection.
- Collaborate with software engineers to integrate ML models into applications and services.
- Document processes, experiments, and decisions for reproducibility and transparency.
- Stay current with the latest research and trends in machine learning and AI.

Who are we looking for, exactly?
- 2–4 years of hands-on experience in building and deploying ML models in real-world applications.
- Strong knowledge of Python and ML libraries such as scikit-learn, TensorFlow, PyTorch, XGBoost, or similar.
- Experience with data preprocessing, feature engineering, and model evaluation techniques.
- Solid understanding of ML concepts such as supervised and unsupervised learning, overfitting, regularization, etc.
- Experience working with Jupyter, pandas, NumPy, and visualization libraries like Matplotlib or Seaborn.
- Familiarity with version control (Git) and basic software engineering practices.
- Strong verbal and written communication skills, as well as strong analytical and problem-solving abilities.
- A master's or bachelor's degree in Computer Science, Software Engineering, IT, Technology Management, or a related field, with education throughout in English medium.

We'll REALLY love you if you:
- Have knowledge of cloud platforms (AWS, Azure, GCP) and ML services (SageMaker, Vertex AI, etc.).
- Have knowledge of GenAI prompting and hosting of LLMs.
- Have experience with NLP libraries (spaCy, Hugging Face Transformers, NLTK).
- Have familiarity with MLOps tools and practices (MLflow, DVC, Kubeflow, etc.).
- Have exposure to deep learning and neural network architectures.
- Have knowledge of REST APIs and how to serve ML models (e.g., Flask, FastAPI, Docker).

Why Navtech?
- Performance review and appraisal twice a year.
- Competitive pay package with additional bonus & benefits.
- Work with US, UK & Europe based industry-renowned clients for exponential technical growth.
- Medical insurance cover for self & immediate family.
- Work with a culturally diverse team from different geographies.

About Us
Navtech is a premier IT software and services provider. Navtech's mission is to increase public cloud adoption and build cloud-first solutions that become trendsetting platforms of the future. We have been recognized as the Best Cloud Service Provider at GoodFirms for ensuring good results with quality services. Here, we strive to innovate and push technology and service boundaries to provide best-in-class technology solutions to our clients at scale. We deliver to our clients globally from our state-of-the-art design and development centers in the US & Hyderabad. We're a fast-growing company with clients in the United States, UK, and Europe. We are also a certified AWS partner. You will join a team of talented developers, quality engineers, and product managers whose mission is to impact above 100 million people across the world with technological services by the year 2030.

Navtech is looking for an AI/ML Engineer to join our growing data science and machine learning team. In this role, you will be responsible for building, deploying, and maintaining machine learning models and pipelines that power intelligent products and data-driven decisions.
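The model-drift detection duty this posting mentions can be illustrated with a minimal sketch (illustrative only, not Navtech's stack, and the three-sigma threshold is an assumption): compare the mean of recent production inputs against the training distribution with a z-score and flag drift past a threshold.

```python
import math

def mean_drift_zscore(train_values, live_values):
    """Z-score of the live-data mean against the training distribution.

    A large absolute z-score suggests the feature has drifted since training.
    Assumes train_values has nonzero variance.
    """
    n = len(train_values)
    mu = sum(train_values) / n
    var = sum((x - mu) ** 2 for x in train_values) / n
    std = math.sqrt(var)
    live_mu = sum(live_values) / len(live_values)
    # Standard error of the live mean under the training distribution.
    se = std / math.sqrt(len(live_values))
    return (live_mu - mu) / se

def has_drifted(train_values, live_values, threshold=3.0):
    """Flag drift when the live mean sits beyond `threshold` standard errors."""
    return abs(mean_drift_zscore(train_values, live_values)) > threshold
```

Production systems typically use richer statistics (e.g., population stability index or KS tests) and run them per feature on a schedule; this sketch shows only the core idea of testing live data against the training baseline.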

Posted 1 week ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

About The Role
Grade Level (for internal use): 10

S&P Global Commodity Insights

The Role: Senior Cloud Engineer
The Location: Hyderabad, Gurgaon
The Team: The Cloud Engineering Team is responsible for designing, implementing, and maintaining cloud infrastructure that supports various applications and services within the S&P Global Commodity Insights organization. This team collaborates closely with data science, application development, and security teams to ensure the reliability, security, and scalability of our cloud solutions.
The Impact: As a Cloud Engineer, you will play a vital role in deploying and managing cloud infrastructure that supports our strategic initiatives. Your expertise in AWS and cloud technologies will help streamline operations, enhance service delivery, and ensure the security and compliance of our environments.
What's in it for you: This position offers the opportunity to work on cutting-edge cloud technologies and collaborate with various teams across the organization. You will gain exposure to multiple S&P Commodity Insights Divisions and contribute to projects that have a significant impact on the business. This role opens doors for tremendous career opportunities within S&P Global.

Responsibilities
- Design and deploy cloud infrastructure using core AWS services such as EC2, S3, RDS, IAM, VPC, and CloudFront, ensuring high availability and fault tolerance.
- Deploy, manage, and scale Kubernetes clusters using Amazon EKS, ensuring high availability, secure networking, and efficient resource utilization.
- Develop secure, compliant AWS environments by configuring IAM roles/policies, KMS encryption, security groups, and VPC endpoints.
- Configure logging, monitoring, and alerting with CloudWatch, CloudTrail, and GuardDuty to support observability and incident response.
- Enforce security and compliance controls via IAM policy audits, patching schedules, and automated backup strategies.
- Monitor infrastructure health, respond to incidents, and maintain SLAs through proactive alerting and runbook execution.
- Collaborate with data science teams to deploy machine learning models using Amazon SageMaker, managing model training, hosting, and monitoring.
- Automate and schedule data processing workflows using AWS Glue, Step Functions, Lambda, and EventBridge to support ML pipelines.
- Optimize infrastructure for cost and performance using AWS Compute Optimizer, CloudWatch metrics, auto-scaling, and Reserved Instances/Savings Plans.
- Write and maintain Infrastructure as Code (IaC) using Terraform or AWS CloudFormation for repeatable, automated infrastructure deployments.
- Implement disaster recovery, backups, and versioned deployments using S3 versioning, RDS snapshots, and CloudFormation change sets.
- Set up and manage CI/CD pipelines using AWS services like CodePipeline, CodeBuild, and CodeDeploy to support application and model deployments.
- Manage and optimize real-time inference pipelines using SageMaker Endpoints, Amazon Bedrock, and Lambda with API Gateway to ensure reliable, scalable model serving.
- Support containerized AI workloads using Amazon ECS or EKS, including model serving and microservices for AI-based features.
- Collaborate with SecOps and SRE teams to uphold security baselines, manage change control, and conduct root cause analysis for outages.
- Participate in code reviews, design discussions, and architectural planning to ensure scalable and maintainable cloud infrastructure.
- Maintain accurate and up-to-date infrastructure documentation, including architecture diagrams, access control policies, and deployment processes.
- Collaborate cross-functionally with application, data, and security teams to align cloud solutions with business and technical goals.
- Stay current with AWS and AI/ML advancements, suggesting improvements or new service adoption where applicable.
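The real-time inference responsibility above typically boils down to a small client call. As an illustrative sketch only (the endpoint name and JSON contract are hypothetical; a real endpoint defines its own content type and response schema), invoking a SageMaker endpoint with boto3:

```python
import json

def build_payload(features):
    """Serialize one feature vector into the JSON body this sketch's endpoint expects."""
    return json.dumps({"instances": [features]})

def invoke_endpoint(endpoint_name, features, region="ap-south-1"):
    """Call a deployed SageMaker real-time endpoint and decode its JSON reply."""
    import boto3  # imported lazily; only needed when actually calling AWS

    runtime = boto3.client("sagemaker-runtime", region_name=region)
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,  # hypothetical endpoint name goes here
        ContentType="application/json",
        Body=build_payload(features),
    )
    return json.loads(response["Body"].read())
```

In the pipeline the posting describes, a Lambda function behind API Gateway would typically wrap a call like this, adding authentication, validation, and retries around the endpoint invocation.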
What We're Looking For
- Strong understanding of cloud infrastructure, particularly AWS services and Kubernetes.
- Proven experience in deploying and managing cloud solutions in a collaborative Agile environment.
- Ability to present technical concepts to both business and technical audiences.
- Excellent multi-tasking skills and the ability to manage multiple projects under tight deadlines.

Basic Qualifications
- BA/BS in computer science, information technology, or a related field.
- 5+ years of experience in cloud engineering or related roles, specifically with AWS.
- Experience with Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
- Knowledge of container orchestration and microservices architecture.
- Familiarity with security best practices in cloud environments.

Preferred Qualifications
- Extensive hands-on experience with AWS services.
- Excellent problem-solving skills and the ability to work independently as well as part of a team.
- Strong communication skills and the ability to influence stakeholders at all levels.
- Experience with greenfield projects and building cloud infrastructure from scratch.

About S&P Global Commodity Insights
At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We're a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating Energy Transition, S&P Global Commodity Insights' coverage includes oil and gas, power, chemicals, metals, agriculture and shipping. S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets.
With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit http://www.spglobal.com/commodity-insights.

What's In It For You?

Our Purpose
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People
We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values
Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.
If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 315801
Posted On: 2025-06-05
Location: Hyderabad, Telangana, India

Posted 1 week ago

Apply

1.0 - 3.0 years

3 - 5 Lacs

Hyderabad

Work from Office

Source: Naukri

What you will do
In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and performing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has deep technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a crucial team member that assists in the design and development of the data pipeline
- Build data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help to improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation

Basic Qualifications:
- Master's degree and 1 to 3 years of Computer Science, IT, or related field experience OR
- Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience OR
- Diploma and 7 to 9 years of Computer Science, IT, or related field experience

Preferred Qualifications:

Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, Spark SQL), workflow orchestration, and performance tuning on big data processing
- Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools
- Excellent problem-solving skills and the ability to work with large, complex datasets
- Solid understanding of data governance frameworks, tools, and best practices
- Knowledge of data protection regulations and compliance requirements

Good-to-Have Skills:
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
- Good understanding of data modeling, data warehousing, and data integration concepts
- Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms

Professional Certifications:
- Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments)

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Good communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills
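The ETL data-quality work this role describes often starts with small, testable transforms. An illustrative sketch in plain Python (the record shape and the "id" key are hypothetical; production pipelines would do this in PySpark or SQL at scale): normalizing records and dropping duplicates by key before loading.

```python
def clean_records(records, key="id"):
    """Normalize raw records and drop duplicates, keeping the first occurrence.

    records: list of dicts; key: field used for de-duplication (hypothetical name).
    Normalization here is limited to stripping whitespace from string fields.
    """
    seen = set()
    cleaned = []
    for rec in records:
        # Normalize: strip whitespace from every string-valued field.
        rec = {k: v.strip() if isinstance(v, str) else v for k, v in rec.items()}
        if rec.get(key) in seen:
            continue  # duplicate key: keep only the first occurrence
        seen.add(rec.get(key))
        cleaned.append(rec)
    return cleaned
```

Keeping each transform this small is what makes the "best practices for coding, testing, and designing reusable code" bullet practical: every step can be unit-tested in isolation before it is wired into the pipeline.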

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Description
The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle.

Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You'll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As trusted advisors to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project.

The AWS Professional Services organization is a global team of experts that help customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries.
Key job responsibilities As an experienced technology professional, you will be responsible for: Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs Providing technical guidance and troubleshooting support throughout project delivery Collaborating with stakeholders to gather requirements and propose effective migration strategies Acting as a trusted advisor to customers on industry trends and emerging technologies Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts About The Team About AWS: Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job below, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Inclusive Team Culture - Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness. Mentorship & Career Growth - We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance - We value work-life harmony. 
Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. Basic Qualifications Experience in cloud architecture and implementation Bachelor's degree in Computer Science, Engineering, related field, or equivalent experience Proven track record in designing and developing end-to-end Machine Learning and Generative AI solutions, from conception to deployment Experience in applying best practices and evaluating alternative and complementary ML and foundational models suitable for given business contexts Foundational knowledge of data modeling principles, statistical analysis methodologies, and demonstrated ability to extract meaningful insights from complex, large-scale datasets Preferred Qualifications AWS experience preferred, with proficiency in a wide range of AWS services (e.g., Bedrock, SageMaker, EC2, S3, Lambda, IAM, VPC, CloudFormation) AWS Professional level certifications (e.g., Machine Learning Specialty, Machine Learning Engineer Associate, Solutions Architect Professional) preferred Experience with automation and scripting (e.g., Terraform, Python) Knowledge of security and compliance standards (e.g., HIPAA, GDPR) Strong communication skills with the ability to explain technical concepts to both technical and non-technical audiences Experience in developing and optimizing foundation models (LLMs), including fine-tuning, continuous training, small language model development, and implementation of Agentic AI systems Experience in developing and deploying end-to-end machine learning and deep learning solutions Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - AWS Proserve IN – Haryana Job ID: A2943450

Posted 1 week ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems. Years of Experience: Candidates with 4+ years of hands on experience Position: Senior Associate Industry: Telecom / Network Analytics / Customer Analytics Required Skills: Successful candidates will have demonstrated the following skills and characteristics: Must Have Proven experience with telco data including call detail records (CDRs), customer churn models, and network analytics Deep understanding of predictive modeling for customer lifetime value and usage behavior Experience working with telco clients or telco data platforms (like Amdocs, Ericsson, Nokia, AT&T etc) Proficiency in machine learning techniques, including classification, regression, clustering, and time-series forecasting Strong command of statistical techniques (e.g., logistic regression, hypothesis testing, segmentation models) Strong programming in Python or R, and SQL with telco-focused data wrangling Exposure to big data technologies used in telco environments (e.g., Hadoop, Spark) Experience working in the telecom industry across domains such as customer churn prediction, ARPU modeling, pricing optimization, and network performance analytics Strong communication skills to interface with technical and business teams Nice To Have Exposure to cloud platforms (Azure ML, AWS SageMaker, GCP Vertex AI) Experience working with telecom OSS/BSS systems or customer segmentation tools Familiarity with network performance analytics, anomaly detection, or real-time 
data processing Strong client communication and presentation skills Roles And Responsibilities Support analytics projects within the telecom domain, driving design, development, and delivery of data science solutions Develop and execute project and analysis plans under the guidance of the Project Manager Interact with and advise consultants/clients in the US as a subject matter expert to formalize the data sources to be used, datasets to be acquired, and the data and use-case clarifications needed to get a strong hold on the data and the business problem to be solved Drive and conduct analysis using advanced analytics tools and coach junior team members Implement the necessary quality control measures to ensure deliverable integrity, including data quality, model robustness, and explainability for deployments Validate analysis outcomes and recommendations with all stakeholders, including the client team Build storylines and make presentations to the client team and/or the PwC project leadership team Contribute to knowledge- and firm-building activities Professional And Educational Background BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master’s Degree / MBA from a reputed institute
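For candidates newer to telco analytics, the kind of data wrangling described above can be shown in miniature. The sketch below (plain Python; the CDR field names are invented for illustration, since real schemas vary by operator and platform) computes churn rate by customer segment:

```python
from collections import defaultdict

# Toy call-detail-record (CDR) derived rows; field names are illustrative only.
cdrs = [
    {"customer_id": "C1", "plan": "prepaid",  "monthly_minutes": 120, "churned": True},
    {"customer_id": "C2", "plan": "prepaid",  "monthly_minutes": 640, "churned": False},
    {"customer_id": "C3", "plan": "postpaid", "monthly_minutes": 310, "churned": False},
    {"customer_id": "C4", "plan": "postpaid", "monthly_minutes": 45,  "churned": True},
    {"customer_id": "C5", "plan": "postpaid", "monthly_minutes": 500, "churned": False},
]

def churn_rate_by_segment(rows, key):
    """Aggregate churn rate per segment (e.g., plan type or usage band)."""
    totals, churned = defaultdict(int), defaultdict(int)
    for row in rows:
        seg = key(row)
        totals[seg] += 1
        churned[seg] += row["churned"]  # bool counts as 0/1
    return {seg: churned[seg] / totals[seg] for seg in totals}

# Segment by plan, and by a simple usage band derived from minutes.
by_plan = churn_rate_by_segment(cdrs, lambda r: r["plan"])
by_usage = churn_rate_by_segment(cdrs, lambda r: "low" if r["monthly_minutes"] < 200 else "high")
```

In practice this aggregation would run in SQL or Spark over millions of rows, and the segment-level rates would feed a classification model (e.g., logistic regression) rather than be read off directly.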

Posted 1 week ago

Apply

2.0 - 5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Requirements Role/Job Title: Developer Function/Department: Information Technology Job Purpose As a Backend Developer, you will play a crucial role in designing, developing, and maintaining complex backend systems. You will work closely with cross-functional teams to deliver high-quality software solutions and drive the technical direction of our projects. Your experience and expertise will be vital in ensuring the performance, scalability, and reliability of our applications. Roles and Responsibilities: Solid understanding of backend performance optimization and debugging. Formal training or certification in software engineering concepts, with proficient applied experience. Strong hands-on experience with Python. Experience in developing microservices using Python with FastAPI. Commercial experience in both backend and frontend engineering. Hands-on experience with AWS cloud-based application development, including EC2, ECS, EKS, Lambda, SQS, SNS, RDS Aurora (MySQL and Postgres), DynamoDB, EMR, and Kinesis. Strong engineering background in machine learning, deep learning, and neural networks. Experience with a containerized stack using Kubernetes or ECS for development, deployment, and configuration. Experience with Single Sign-On/OIDC integration and a deep understanding of OAuth and JWT/JWE/JWS. Knowledge of AWS SageMaker and data analytics tools. Proficiency in frameworks such as TensorFlow, PyTorch, or similar. Educational Qualification (Full-time) Bachelor of Technology (B.Tech) / Bachelor of Science (B.Sc) / Master of Science (M.Sc) / Master of Technology (M.Tech) / Bachelor of Computer Applications (BCA) / Master of Computer Applications (MCA) Experience: 2-5 Years
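As a quick illustration of the JWT structure mentioned in the posting above: a token is three base64url-encoded segments (header, payload, signature) joined by dots. The stdlib sketch below builds and parses a toy unsigned token; it deliberately skips signature verification, which production services must perform with a proper JOSE library (e.g., PyJWT) and a real algorithm:

```python
import base64
import json

def b64url_encode(data: bytes) -> str:
    # JWTs use unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(segment: str) -> bytes:
    # Restore the stripped padding before decoding.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

def unverified_claims(token: str) -> dict:
    """Parse the payload of a JWT WITHOUT verifying its signature.
    For illustration only: real services must verify the signature."""
    header, payload, signature = token.split(".")
    return json.loads(b64url_decode(payload))

# Build a toy unsigned token to show the three-part structure.
claims = {"sub": "user-42", "scope": "read"}
token = ".".join([
    b64url_encode(json.dumps({"alg": "none", "typ": "JWT"}).encode()),
    b64url_encode(json.dumps(claims).encode()),
    "",  # empty signature segment for alg "none"
])
```

In an OIDC flow the token would arrive from the identity provider, and the resource server would validate the signature, issuer, audience, and expiry before trusting any claim.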

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About Ethos Ethos was built to make it faster and easier to get life insurance for the next million families. Our approach blends industry expertise, technology, and the human touch to find you the right policy to protect your loved ones. We leverage deep technology and data science to streamline the life insurance process, making it more accessible and convenient. Using predictive analytics, we are able to transform a traditionally multi-week process into a modern digital experience for our users that can take just minutes! We’ve issued billions in coverage each month and eliminated the traditional barriers, ushering the industry into the modern age. Our full-stack technology platform is the backbone of family financial health. We make getting life insurance easier, faster and better for everyone. Our investors include General Catalyst, Sequoia Capital, Accel Partners, Google Ventures, SoftBank, and the investment vehicles of Jay-Z, Kevin Durant, Robert Downey Jr and others. This year, we were named on CB Insights' Global Insurtech 50 list and BuiltIn's Top 100 Midsize Companies in San Francisco. We are scaling quickly and looking for passionate people to protect the next million families! About The Role We are seeking a passionate data scientist on our Risk Platform team. Your role will involve harnessing the power of data to optimize our risk assessment procedures, identifying actionable insights from countless data points, and ensuring our platform remains at the forefront of automated underwriting and fraud prevention. This position offers an opportunity to make a significant impact in a fast-growing startup and to introduce innovative solutions within the life insurance sector. 
Duties And Responsibilities Design, train, validate and deploy models to uncover hidden insights and optimize rule-based systems Build predictive models for automated underwriting and fraud prevention Conduct thorough data analyses to identify patterns, trends and anomalies Collaborate closely with the data analytics team, engineer features, leverage domain knowledge, and partner with actuarial experts Work closely with product and engineering teams to embed machine learning models into production Regularly evaluate the performance of deployed models, ensuring they remain accurate and relevant Refine and recalibrate models based on changing data patterns and feedback loops Stay updated on advancements in data science, risk modeling, AI, and NLP Partner with leadership and product managers to shape the direction of our risk platform and provide data-driven recommendations Clearly communicate intuition, concepts and potential impact to senior leadership Qualifications And Skills Master's or PhD in Computer Science, Data Science, or a related field 5+ years of hands-on experience in data science or machine learning. Bonus if this experience is in the medical or life insurance domain Deep understanding of various machine learning algorithms and NLP. Bonus if you have demonstrated expertise in deep learning Proven ability in designing, building and productionizing machine learning models in real-world scenarios Strong expertise in Python and in machine learning libraries/frameworks such as TensorFlow, PyTorch, scikit-learn, pandas, etc. Hands-on experience with SageMaker and the ability to independently deploy a model Exceptional ability to grasp domain-specific nuances quickly. Bonus if there is demonstrated proficiency in applying machine learning to medical or life insurance domains Collaborative mindset, eagerness to learn and work with cross-functional teams Comfortable in a fast-paced startup environment Don’t meet every single requirement?
If you’re excited about this role but your past experience doesn’t align perfectly with every qualification in the job description, we encourage you to apply anyway. At Ethos we are dedicated to building a diverse, inclusive and authentic workplace. We are an equal opportunity employer who values diversity and inclusion and look for applicants who understand, embrace and thrive in a multicultural world. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. Pursuant to the SF Fair Chance Ordinance, we will consider employment for qualified applicants with arrests and conviction records. To learn more about what information we collect and how it may be used, please refer to our California Candidate Privacy Notice.
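The ongoing model evaluation and recalibration described in this role often includes drift monitoring. One common metric is the Population Stability Index (PSI), which compares a model's score distribution at serving time against its training-time baseline; the sketch below is a minimal stdlib implementation, with an invented baseline distribution for illustration:

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions
    (each a list of proportions summing to 1). A common rule of thumb:
    PSI > 0.2 suggests meaningful drift worth investigating."""
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)  # guard against log(0)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.50, 0.25]           # score distribution at training time
identical = psi(baseline, baseline)     # no drift
shifted = psi(baseline, [0.10, 0.40, 0.50])  # scores shifted toward the top bin
```

In a deployed underwriting system the "actual" distribution would be recomputed on a schedule from recent scoring logs, and a sustained PSI above threshold would trigger the recalibration loop the posting describes.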

Posted 1 week ago

Apply

8.0 years

6 - 7 Lacs

Noida

On-site

Job Description: Key Responsibilities Hands-on Development: Develop and implement machine learning models and algorithms, including supervised, unsupervised, deep learning, and reinforcement learning techniques. Implement Generative AI solutions using technologies like RAG (Retrieval-Augmented Generation), Vector DBs, and frameworks such as LangChain, Hugging Face, and Agentic AI. Utilize popular AI/ML frameworks and libraries such as TensorFlow, PyTorch, and scikit-learn. Design and deploy NLP models and techniques, including text classification, RNNs, CNNs, and Transformer-based models like BERT. Ensure robust end-to-end AI/ML solutions, from data preprocessing and feature engineering to model deployment and monitoring. Technical Proficiency: Demonstrate strong programming skills in languages commonly used for data science and ML, particularly Python. Leverage cloud platforms and services for AI/ML, especially AWS, with knowledge of AWS SageMaker, Lambda, DynamoDB, S3, and other AWS resources. Mentorship: Mentor and coach a team of data scientists and machine learning engineers, fostering skill development and professional growth. Provide technical guidance and support, helping team members overcome challenges and achieve project goals. Set technical direction and strategy for AI/ML projects, ensuring alignment with business goals and objectives. Facilitate knowledge sharing and collaboration within the team, promoting best practices and continuous learning. Strategic Advisory: Collaborate with cross-functional teams to integrate AI/ML solutions into business processes and products. Provide strategic insights and recommendations to support decision-making processes. Communicate effectively with stakeholders at various levels, including technical and non-technical audiences. Qualifications Bachelor’s degree in a relevant field (e.g., Computer Science) or equivalent combination of education and experience.
Typically 8-10 years of relevant work experience in AI/ML/GenAI and 12+ years of overall work experience, with a proven ability to manage projects and activities. Extensive experience with generative AI technologies, including RAG, Vector DBs, and frameworks such as LangChain, Hugging Face, and Agentic AI. Proficiency in machine learning algorithms and techniques, including supervised and unsupervised learning, deep learning, and reinforcement learning. Extensive experience with AI/ML frameworks and libraries such as TensorFlow, PyTorch, and scikit-learn. Strong knowledge of natural language processing (NLP) techniques and models, including Transformer-based models like BERT. Proficient programming skills in Python and experience with cloud platforms like AWS. Experience with AWS Cloud resources, including AWS SageMaker, Lambda, DynamoDB, S3, etc., is a plus. Proven experience leading a team of data scientists or machine learning engineers on complex projects. Strong project management skills, with the ability to prioritize tasks, allocate resources, and meet deadlines. Excellent communication skills and the ability to convey complex technical concepts to diverse audiences. Preferred Qualifications Experience in setting technical direction and strategy for AI/ML projects. Experience in the insurance domain. Ability to mentor and coach junior team members, fostering growth and development. Proven track record of successfully managing AI/ML projects from conception to deployment. Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers, typically through online services, such as false websites, or through unsolicited emails claiming to be from the company. These emails may request recipients to provide personal information or to make payments as part of their illegitimate recruiting process.
DXC does not make offers of employment via social media networks, and DXC never asks for money or payments from applicants at any point in the recruitment process, nor asks a job seeker to purchase IT or other equipment on our behalf. More information on employment scams is available here.
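The RAG pattern named in the posting above pairs a retriever with a generator: relevant documents are fetched and stitched into the prompt before the LLM answers. A minimal sketch of the retrieval half, using bag-of-words cosine similarity as a stand-in for a real embedding model and vector DB (both substitutions are for illustration only):

```python
import math
from collections import Counter

# Toy document store; a real system would hold embeddings in a vector DB.
documents = [
    "claims are settled within 30 days of approval",
    "premium payments can be made monthly or annually",
    "a lapsed policy can be reinstated within two years",
]

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1):
    """Rank documents by similarity to the query and return the top k."""
    q = vectorize(query)
    ranked = sorted(documents, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Augment the query with retrieved context before calling the LLM."""
    context = "\n".join(retrieve(query, k=2))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Frameworks like LangChain wrap exactly this loop (embed, retrieve, assemble prompt, generate), swapping the toy similarity function for dense embeddings and the document list for a vector store such as FAISS or Pinecone.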

Posted 1 week ago

Apply

7.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site


About Hakkoda Hakkoda, an IBM Company, is a modern data consultancy that empowers data-driven organizations to realize the full value of the Snowflake Data Cloud. We provide consulting and managed services in data architecture, data engineering, analytics and data science. We are renowned for bringing our clients deep expertise, being easy to work with, and being an amazing place to work! We are looking for curious and creative individuals who want to be part of a fast-paced, dynamic environment, where everyone’s input and efforts are valued. We hire outstanding individuals and give them the opportunity to thrive in a collaborative atmosphere that values learning, growth, and hard work. Our team is distributed across North America, Latin America, India and Europe. If you have the desire to be a part of an exciting, challenging, and rapidly-growing Snowflake consulting services company, and if you are passionate about making a difference in this world, we would love to talk to you! As an AWS Managed Services Architect, you will play a pivotal role in architecting and optimizing the infrastructure and operations of a complex Data Lake environment for BOT clients. You’ll leverage your strong expertise with AWS services to design, implement, and maintain scalable and secure data solutions while driving best practices. You will work collaboratively with delivery teams across the U.S., Costa Rica, Portugal, and other regions, ensuring a robust and seamless Data Lake architecture. In addition, you’ll proactively engage with clients to support their evolving needs, oversee critical AWS infrastructure, and guide teams toward innovative and efficient solutions. This role demands a hands-on approach, including designing solutions, troubleshooting, optimizing performance, and maintaining operational excellence. Role Description AWS Data Lake Architecture: Design, build, and support scalable, high-performance architectures for complex AWS Data Lake solutions.
AWS Services Expertise: Deploy and manage cloud-native solutions using a wide range of AWS services, including but not limited to: Amazon EMR (Elastic MapReduce): Optimize and maintain EMR clusters for large-scale big data processing. AWS Batch: Design and implement efficient workflows for batch processing workloads. Amazon SageMaker: Enable data science teams with scalable infrastructure for model training and deployment. AWS Glue: Develop ETL/ELT pipelines using Glue to ensure efficient data ingestion and transformation. AWS Lambda: Build serverless functions to automate processes and handle event-driven workloads. IAM Policies: Define and enforce fine-grained access controls to secure cloud resources and maintain governance. AWS IoT & Timestream: Design scalable solutions for collecting, storing, and analyzing time-series data. Amazon DynamoDB: Build and optimize high-performance NoSQL database solutions. Data Governance & Security: Implement best practices to ensure data privacy, compliance, and governance across the data architecture. Performance Optimization: Monitor, analyze, and tune AWS resources for performance efficiency and cost optimization. Develop and manage Infrastructure as Code (IaC) using AWS CloudFormation, Terraform, or equivalent tools to automate infrastructure deployment. Client Collaboration: Work closely with stakeholders to understand business objectives and ensure solutions align with client needs. Team Leadership & Mentorship: Provide technical guidance to delivery teams through design reviews, troubleshooting, and strategic planning. Continuous Innovation: Stay current with AWS service updates, industry trends, and emerging technologies to enhance solution delivery. Documentation & Knowledge Sharing: Create and maintain architecture diagrams, SOPs, and internal/external documentation to support ongoing operations and collaboration. Qualifications 7+ years of hands-on experience in cloud architecture and infrastructure (preferably AWS).
3+ years of experience specifically in architecting and managing Data Lake or big data solutions on AWS. Bachelor’s Degree in Computer Science, Information Systems, or a related field (preferred) AWS Certifications such as Solutions Architect Professional or Big Data Specialty. Experience with Snowflake, Matillion, or Fivetran in hybrid cloud environments. Familiarity with Azure or GCP cloud platforms. Understanding of machine learning pipelines and workflows. Technical Skills: Expertise in AWS services such as EMR, Batch, SageMaker, Glue, Lambda, IAM, IoT TimeStream, DynamoDB, and more. Strong programming skills in Python for scripting and automation. Proficiency in SQL and performance tuning for data pipelines and queries. Experience with IaC tools like Terraform or CloudFormation. Knowledge of big data frameworks such as Apache Spark, Hadoop, or similar. Data Governance & Security: Proven ability to design and implement secure solutions, with strong knowledge of IAM policies and compliance standards. Problem-Solving: Analytical and problem-solving mindset to resolve complex technical challenges. Collaboration: Exceptional communication skills to engage with technical and non-technical stakeholders. Ability to lead cross-functional teams and provide mentorship. Benefits Health Insurance Paid leave Technical training and certifications Robust learning and development opportunities Incentive Toastmasters Food Program Fitness Program Referral Bonus Program Hakkoda is committed to fostering diversity, equity, and inclusion within our teams. A diverse workforce enhances our ability to serve clients and enriches our culture. We encourage candidates of all races, genders, sexual orientations, abilities, and experiences to apply, creating a workplace where everyone can succeed and thrive. Ready to take your career to the next level? 🚀 💻 Apply today👇 and join a team that’s shaping the future!!
Hakkoda is an IBM subsidiary, acquired by IBM, and will be integrated into the IBM organization. Hakkoda will be the hiring entity. By proceeding with this application, you understand that Hakkoda will share your personal information with other IBM subsidiaries involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here.
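The Glue-style ETL and Data Lake work described above can be sketched in miniature: ingest rows, coerce types, and group records under Hive-style partition prefixes of the kind written to S3 (e.g., `year=2024/month=06/`). The schema and field names below are invented for illustration; real Glue jobs typically operate on DynamicFrames over S3 rather than in-memory CSV:

```python
import csv
import io
from collections import defaultdict

# Toy event feed standing in for a file read from S3.
raw = io.StringIO(
    "event_id,ts,device_id,reading\n"
    "1,2024-06-01T10:00:00,d1,21.5\n"
    "2,2024-06-01T11:00:00,d2,19.8\n"
    "3,2024-07-02T09:30:00,d1,22.1\n"
)

def partition_key(ts: str) -> str:
    """Derive a Hive-style partition prefix from an ISO-8601 timestamp."""
    date = ts.split("T")[0]
    year, month, _ = date.split("-")
    return f"year={year}/month={month}"

partitions = defaultdict(list)
for row in csv.DictReader(raw):
    row["reading"] = float(row["reading"])  # simple type-coercion transform
    partitions[partition_key(row["ts"])].append(row)
```

In a real lake each `partitions` key would become an object prefix under the bucket, letting query engines (Athena, EMR, Snowflake external tables) prune partitions instead of scanning everything.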

Posted 1 week ago

Apply

7.0 years

0 Lacs

India

On-site


WhizzHR is hiring Media Solution Architect – AI/ML & Automation Focus Role Summary: We are seeking a Media Solution Architect to lead the strategic design of AI-driven and automation-centric solutions across digital media operations. This role involves architecting intelligent, scalable systems that enhance efficiency across campaign setup, trafficking, reporting, QA, and billing processes. The ideal candidate will bring a strong blend of automation, AI/ML, and digital marketing expertise to drive innovation and operational excellence. Key Responsibilities: Identify and assess opportunities to apply AI/ML and automation across media operations workflows (e.g., intelligent campaign setup, anomaly detection in QA, dynamic taxonomy validation). Design scalable, intelligent architectures using a combination of machine learning models, RPA, Python-based automation, and media APIs (e.g., Meta, DV360, YouTube). Develop or integrate machine learning models for use cases such as performance prediction, media mix modeling, and anomaly detection in reporting or billing. Ensure adherence to best practices in data governance, compliance, and security, particularly around AI system usage. Partner with business stakeholders to prioritize high-impact AI/automation use cases and define clear ROI and success metrics. Stay informed on emerging trends in AI/ML and translate innovations into actionable media solutions. Ideal Profile: 7+ years of experience in automation, AI/ML, or data science, including 3+ years in marketing, ad tech, or digital media. Strong understanding of machine learning frameworks for predictive modeling, anomaly detection, and NLP-based insight generation. Proficiency in Python and libraries such as scikit-learn, TensorFlow, pandas, or PyTorch. Experience with cloud-based AI platforms (e.g., Google Vertex AI, Azure ML, AWS SageMaker) and media API integrations.
Ability to architect AI-enhanced automations that improve forecasting, QA, and decision-making in media operations. Familiarity with RPA tools (e.g., UiPath, Automation Anywhere); AI-first automation experience is a plus. Demonstrated success in developing or deploying ML models for campaign optimization, fraud detection, or process intelligence. Familiarity with digital media ecosystems such as Google Ads, Meta, TikTok, DSPs, and ad servers. Excellent communication and stakeholder management skills, with the ability to translate technical solutions into business value. Kindly share your resume at Hello@whizzhr.com
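One of the anomaly-detection use cases named above (e.g., flagging a suspicious spike in daily media spend during billing QA) can be sketched with a simple z-score rule. Real systems would use rolling windows and more robust detectors; the data and threshold here are illustrative choices:

```python
from statistics import mean, stdev

daily_spend = [1000, 980, 1020, 990, 1010, 2500, 1005]  # one suspicious spike

def zscore_anomalies(series, threshold=2.0):
    """Flag indices more than `threshold` standard deviations from the mean.
    Small samples inflate the standard deviation, so the threshold is
    deliberately loose here; production systems would tune it per metric."""
    mu, sigma = mean(series), stdev(series)
    if sigma == 0:
        return []  # a flat series has no outliers
    return [i for i, x in enumerate(series) if abs(x - mu) / sigma > threshold]

flagged = zscore_anomalies(daily_spend)
```

A flagged index would then be routed to a human reviewer or an automated hold in the billing workflow rather than acted on blindly.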

Posted 1 week ago

Apply

2.0 years

0 Lacs

India

On-site


This role is for one of our clients Industry: Technology, Information and Media Seniority level: Associate level Min Experience: 2 years Location: India JobType: full-time About The Role We are looking for a proactive and skilled AWS Developer to join our dynamic team focused on cloud infrastructure and AI-driven solutions. In this role, you will architect, deploy, and maintain scalable and secure cloud environments on AWS, supporting the development and operationalization of machine learning models and AI applications. You will collaborate closely with data scientists, developers, and DevOps teams to ensure seamless integration and robust performance of AI workloads in the cloud. What You’ll Do Architect and build highly available, fault-tolerant, and scalable AWS infrastructure tailored for AI and machine learning workloads. Deploy, manage, and monitor AI/ML models in production using AWS services such as SageMaker, Lambda, EC2, ECS, and EKS. Partner with AI and ML teams to translate model requirements into effective cloud architectures and operational workflows. Automate infrastructure deployment and management through Infrastructure as Code (IaC) using Terraform, CloudFormation, or similar tools. Implement and optimize CI/CD pipelines to streamline model training, validation, and deployment processes. Monitor cloud environments and AI workloads proactively to identify and resolve performance bottlenecks or security vulnerabilities. Enforce best practices for data security, compliance, and governance in handling AI datasets and inference endpoints. Stay updated with AWS advancements and emerging tools to continuously enhance AI infrastructure capabilities. Support troubleshooting efforts, perform root cause analysis, and document solutions to maintain high system reliability. Who You Are 2+ years of hands-on experience working with AWS cloud services, especially in deploying and managing AI/ML workloads. 
Strong knowledge of AWS core services including S3, EC2, Lambda, SageMaker, IAM, CloudWatch, ECR, ECS, EKS, and CloudFormation. Experience deploying machine learning models into production environments and maintaining their lifecycle. Proficient in scripting and programming languages such as Python, Bash, or Node.js for automation and orchestration tasks. Skilled with containerization and orchestration tools such as Docker and Kubernetes (EKS). Familiar with monitoring and alerting solutions like AWS CloudWatch, Prometheus, or Grafana. Understanding of CI/CD methodologies and tools like Jenkins, GitHub Actions, or AWS CodePipeline. Bachelor’s degree in Computer Science, Engineering, or a related technical discipline. Bonus Points For AWS certifications such as AWS Certified Machine Learning – Specialty or AWS Solutions Architect. Hands-on experience with MLOps frameworks (Kubeflow, MLflow) and model version control. Familiarity with big data processing tools like Apache Spark, AWS Glue, or Redshift. Experience working in Agile or Scrum-based development environments.
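The model-lifecycle responsibilities above (versioning, staged promotion, rollback) are normally handled by tools like SageMaker Model Registry or MLflow. The toy in-memory sketch below only illustrates the concept and invents its own API; it is not any real registry's interface:

```python
class ModelRegistry:
    """Minimal in-memory stand-in for model version tracking with
    staged promotion (e.g., staging -> production) and rollback."""

    def __init__(self):
        self.versions = {}   # version -> metadata (metrics, artifact URI, ...)
        self.stage = {}      # stage name -> currently serving version
        self.history = []    # previous production versions, for rollback

    def register(self, version, metadata):
        self.versions[version] = metadata

    def promote(self, version, stage="production"):
        if version not in self.versions:
            raise KeyError(f"unknown model version: {version}")
        if stage == "production" and stage in self.stage:
            self.history.append(self.stage[stage])  # remember what we replace
        self.stage[stage] = version

    def rollback(self, stage="production"):
        if not self.history:
            raise RuntimeError("no previous version to roll back to")
        self.stage[stage] = self.history.pop()

registry = ModelRegistry()
registry.register("v1", {"auc": 0.81})
registry.register("v2", {"auc": 0.84})
registry.promote("v1")
registry.promote("v2")
registry.rollback()  # v2 misbehaves in production; restore v1
```

In a CI/CD pipeline the `promote` step would sit behind automated evaluation gates, and `rollback` would be wired to the same alarms the monitoring stack (CloudWatch, Prometheus) raises.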

Posted 1 week ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Our Company Teradata is the connected multi-cloud data platform company for enterprise analytics. Our enterprise analytics solve business challenges from start to scale. Only Teradata gives you the flexibility to handle the massive and mixed data workloads of the future, today. The Teradata Vantage architecture is cloud native, delivered as-a-service, and built on an open ecosystem. These design features make Vantage the ideal platform to optimize price performance in a multi-cloud environment. What You’ll Do The Principal Data Scientist (pre-sales) is an experienced and expert Data Scientist, able to provide industry thought-leadership on Analytics and its application across industries and across use-cases. The Principal Data Scientist supports the account team in framing business problems and in identifying analytic solutions that leverage Teradata technology and that are disruptive, innovative - and above all, practical. An articulate and compelling communicator, the Principal Data Scientist establishes our position as an important partner for advanced analytics with customers and prospects and is a trusted advisor to executives, senior managers and fellow data scientists alike across a range of target accounts. They are also a hands-on practitioner who is ready, willing and able to roll up their sleeves and deliver POC and short-term pre-sales engagements. The Principal Data Scientist has an excellent theoretical and practical understanding of statistics and machine learning and has a strong track record of applying this understanding at scale to drive business benefit. They are insanely curious, a natural problem-solver, and able to effectively promote Teradata technology and solutions to our customers. Who You’ll Work With The successful candidate will work with other expert team members to: Provide pre-sales support at an executive level to the Teradata account teams at a local country, Geo and an International Theatre level.
Helping them to position and sell complex Analytic solutions that drive sales of Teradata software. Provide strategic pre-sales consulting to executives and senior managers in our target market. Support the delivery of PoC and PoV projects that demonstrate the viability and applicability of Analytic use-cases and the superiority of Teradata solutions and services. Work with the extended Account team and Sales Analytics Specialists to develop new Analytic propositions that are aligned with industry trends and customer requirements. What Makes You a Qualified Candidate Proven hands-on experience of complex analytics at scale, for example in the areas of IoT and sensor data. Experience with Teradata partners’ analytical products, cloud service providers such as Azure ML and SageMaker, and partner products such as Dataiku and H2O. Strong hands-on programming skills in at least one major analytic programming language and/or tool in addition to SQL. Strong understanding of data engineering and database systems. Recognised in the local country, geo and International Theatre as the go-to expert. What You’ll Bring Expertise in Data Science with a strong theoretical grounding in statistics, advanced analytics, and machine learning, and at least 10 years of real-world experience in the application of advanced analytics. A passion for knowledge sharing and a demonstrated commitment to continuous professional development. A belief in Teradata's Analytic solutions and services, and a commitment to working with the product, engineering, and consulting teams to ensure that they continue to lead the market. An ability to turn complex technical subject matter into relatable, easy-to-digest content for senior audiences.
a degree level qualification (preferably Masters or PHD) in Statistics, Data Science, the physical or biological sciences or a related discipline Why We Think You’ll Love Teradata We prioritize a people-first culture because we know our people are at the very heart of our success. We embrace a flexible work model because we trust our people to make decisions about how, when, and where they work. We focus on well-being because we care about our people and their ability to thrive both personally and professionally. We are an anti-racist company because our dedication to Diversity, Equity, and Inclusion is more than a statement. It is a deep commitment to doing the work to foster an equitable environment that celebrates people for all of who they are. Teradata invites all identities and backgrounds in the workplace. We work with deliberation and intent to ensure we are cultivating collaboration and inclusivity across our global organization. ​ We are proud to be an equal opportunity and affirmative action employer. We do not discriminate based upon race, color, ancestry, religion, creed, sex (including pregnancy, childbirth, breastfeeding, or related conditions), national origin, sexual orientation, age, citizenship, marital status, disability, medical condition, genetic information, gender identity or expression, military and veteran status, or any other legally protected status. Show more Show less

Posted 1 week ago

Apply

5.0 years

0 Lacs

India

Remote


Job description This is a permanent work-from-home position from anywhere in India. Notice period: less than 30 days (immediate joiners preferred). We are seeking a Generative AI Engineer with 5+ years of experience in machine learning, deep learning, and large language models (LLMs). The ideal candidate will lead the design, development, and deployment of AI-driven solutions for text, image, and speech generation using cutting-edge GenAI frameworks and cloud platforms. Key Responsibilities: Develop and fine-tune Generative AI models (LLMs, GANs, Diffusion Models, VAEs). Implement NLP, computer vision, and speech-based AI applications. Optimize model performance, scalability, and efficiency for production use. Work with transformer architectures (GPT, BERT, T5, LLaMA, etc.). Deploy AI models on AWS, Azure, or GCP using MLOps and containerization. Design LLM-based applications using LangChain, vector databases, and prompt engineering. Collaborate with cross-functional teams to integrate AI solutions into enterprise applications. Stay ahead of AI/ML trends and advancements to drive innovation. Required Skills: GenAI Frameworks – TensorFlow, PyTorch, Hugging Face, OpenAI API. LLM – Fine-tuning, RAG (Retrieval-Augmented Generation), Prompt Engineering. Cloud AI Services – AWS SageMaker, Azure OpenAI, Google Vertex AI. Programming & Data Engineering – Python, PyTorch, LangChain, SQL, NoSQL. MLOps & Deployment – Docker, Kubernetes, CI/CD, Vector Databases (FAISS, Pinecone).
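The RAG technique this listing asks for comes down to retrieving the most relevant stored context before prompting an LLM. A minimal, dependency-free sketch of the retrieval step, with toy vectors standing in for real embeddings (all names and numbers here are illustrative, not from any specific framework):

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, index, k=1):
    # Rank documents by similarity to the query embedding, which is
    # what a vector database (e.g. FAISS, Pinecone) does at scale.
    ranked = sorted(index.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# Toy "embeddings"; in practice these come from an embedding model.
index = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
}
context = retrieve([0.8, 0.2, 0.1], index, k=1)
prompt = f"Answer using this context: {context[0]}\n\nQuestion: How do refunds work?"
```

In production the same pattern is usually wired together with a framework such as LangChain and a hosted vector store rather than an in-memory dict.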

Posted 1 week ago

Apply

10.0 years

0 Lacs

Maharashtra, India

On-site


Our Company Teradata is the connected multi-cloud data platform company for enterprise analytics. Our enterprise analytics solve business challenges from start to scale. Only Teradata gives you the flexibility to handle the massive and mixed data workloads of the future, today. The Teradata Vantage architecture is cloud native, delivered as-a-service, and built on an open ecosystem. These design features make Vantage the ideal platform to optimize price performance in a multi-cloud environment. What You’ll Do The Principal Data Scientist (pre-sales) is an experienced and expert Data Scientist, able to provide industry thought leadership on Analytics and its application across industries and use-cases. The Principal Data Scientist supports the account team in framing business problems and in identifying analytic solutions that leverage Teradata technology and that are disruptive, innovative - and above all, practical. An articulate and compelling communicator, the Principal Data Scientist establishes our position as an important partner for advanced analytics with customers and prospects and is a trusted advisor to executives, senior managers, and fellow data scientists alike across a range of target accounts. They are also a hands-on practitioner who is ready, willing, and able to roll up their sleeves and deliver POC and short-term pre-sales engagements. The Principal Data Scientist has an excellent theoretical and practical understanding of statistics and machine learning and a strong track record of applying this understanding at scale to drive business benefit. They are intensely curious, a natural problem-solver, and able to effectively promote Teradata technology and solutions to our customers. Who You’ll Work With The successful candidate will work with other expert team members to: Provide pre-sales support at an executive level to the Teradata account teams at a local country, Geo, and International Theatre level, 
helping them to position and sell complex Analytic solutions that drive sales of Teradata software. Provide strategic pre-sales consulting to executives and senior managers in our target market. Support the delivery of PoC and PoV projects that demonstrate the viability and applicability of Analytic use-cases and the superiority of Teradata solutions and services. Work with the extended Account team and Sales Analytics Specialists to develop new Analytic propositions that are aligned with industry trends and customer requirements. What Makes You a Qualified Candidate Proven hands-on experience of complex analytics at scale, for example in the areas of IoT and sensor data. Experience with Teradata partners' analytical products, Cloud Service providers such as AzureML and SageMaker, and partner products such as Dataiku and H2O. Strong hands-on programming skills in at least one major analytic programming language and/or tool in addition to SQL. Strong understanding of data engineering and database systems. Recognised in the local country, Geo, and International Theatre as the go-to expert. What You’ll Bring Expertise in Data Science with a strong theoretical grounding in statistics, advanced analytics, and machine learning, and at least 10 years of real-world experience in the application of advanced analytics. A passion for knowledge sharing and a demonstrated commitment to continuous professional development. A belief in Teradata's Analytic solutions and services and a commitment to working with the product, engineering, and consulting teams to ensure that they continue to lead the market. An ability to turn complex technical subject matter into relatable, easy-to-digest content for senior audiences. 
A degree-level qualification (preferably a Master's or PhD) in Statistics, Data Science, the physical or biological sciences, or a related discipline. Why We Think You’ll Love Teradata We prioritize a people-first culture because we know our people are at the very heart of our success. We embrace a flexible work model because we trust our people to make decisions about how, when, and where they work. We focus on well-being because we care about our people and their ability to thrive both personally and professionally. We are an anti-racist company because our dedication to Diversity, Equity, and Inclusion is more than a statement. It is a deep commitment to doing the work to foster an equitable environment that celebrates people for all of who they are. Teradata invites all identities and backgrounds in the workplace. We work with deliberation and intent to ensure we are cultivating collaboration and inclusivity across our global organization. We are proud to be an equal opportunity and affirmative action employer. We do not discriminate based upon race, color, ancestry, religion, creed, sex (including pregnancy, childbirth, breastfeeding, or related conditions), national origin, sexual orientation, age, citizenship, marital status, disability, medical condition, genetic information, gender identity or expression, military and veteran status, or any other legally protected status.

Posted 1 week ago

Apply

10.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site


Our Company Teradata is the connected multi-cloud data platform company for enterprise analytics. Our enterprise analytics solve business challenges from start to scale. Only Teradata gives you the flexibility to handle the massive and mixed data workloads of the future, today. The Teradata Vantage architecture is cloud native, delivered as-a-service, and built on an open ecosystem. These design features make Vantage the ideal platform to optimize price performance in a multi-cloud environment. What You’ll Do The Principal Data Scientist (pre-sales) is an experienced and expert Data Scientist, able to provide industry thought leadership on Analytics and its application across industries and use-cases. The Principal Data Scientist supports the account team in framing business problems and in identifying analytic solutions that leverage Teradata technology and that are disruptive, innovative - and above all, practical. An articulate and compelling communicator, the Principal Data Scientist establishes our position as an important partner for advanced analytics with customers and prospects and is a trusted advisor to executives, senior managers, and fellow data scientists alike across a range of target accounts. They are also a hands-on practitioner who is ready, willing, and able to roll up their sleeves and deliver POC and short-term pre-sales engagements. The Principal Data Scientist has an excellent theoretical and practical understanding of statistics and machine learning and a strong track record of applying this understanding at scale to drive business benefit. They are intensely curious, a natural problem-solver, and able to effectively promote Teradata technology and solutions to our customers. Who You’ll Work With The successful candidate will work with other expert team members to: Provide pre-sales support at an executive level to the Teradata account teams at a local country, Geo, and International Theatre level, 
helping them to position and sell complex Analytic solutions that drive sales of Teradata software. Provide strategic pre-sales consulting to executives and senior managers in our target market. Support the delivery of PoC and PoV projects that demonstrate the viability and applicability of Analytic use-cases and the superiority of Teradata solutions and services. Work with the extended Account team and Sales Analytics Specialists to develop new Analytic propositions that are aligned with industry trends and customer requirements. What Makes You a Qualified Candidate Proven hands-on experience of complex analytics at scale, for example in the areas of IoT and sensor data. Experience with Teradata partners' analytical products, Cloud Service providers such as AzureML and SageMaker, and partner products such as Dataiku and H2O. Strong hands-on programming skills in at least one major analytic programming language and/or tool in addition to SQL. Strong understanding of data engineering and database systems. Recognised in the local country, Geo, and International Theatre as the go-to expert. What You’ll Bring Expertise in Data Science with a strong theoretical grounding in statistics, advanced analytics, and machine learning, and at least 10 years of real-world experience in the application of advanced analytics. A passion for knowledge sharing and a demonstrated commitment to continuous professional development. A belief in Teradata's Analytic solutions and services and a commitment to working with the product, engineering, and consulting teams to ensure that they continue to lead the market. An ability to turn complex technical subject matter into relatable, easy-to-digest content for senior audiences. 
A degree-level qualification (preferably a Master's or PhD) in Statistics, Data Science, the physical or biological sciences, or a related discipline. Why We Think You’ll Love Teradata We prioritize a people-first culture because we know our people are at the very heart of our success. We embrace a flexible work model because we trust our people to make decisions about how, when, and where they work. We focus on well-being because we care about our people and their ability to thrive both personally and professionally. We are an anti-racist company because our dedication to Diversity, Equity, and Inclusion is more than a statement. It is a deep commitment to doing the work to foster an equitable environment that celebrates people for all of who they are. Teradata invites all identities and backgrounds in the workplace. We work with deliberation and intent to ensure we are cultivating collaboration and inclusivity across our global organization. We are proud to be an equal opportunity and affirmative action employer. We do not discriminate based upon race, color, ancestry, religion, creed, sex (including pregnancy, childbirth, breastfeeding, or related conditions), national origin, sexual orientation, age, citizenship, marital status, disability, medical condition, genetic information, gender identity or expression, military and veteran status, or any other legally protected status.

Posted 1 week ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


● Minimum of 4+ years of experience in AI-based application development. ● Fine-tune pre-existing models to improve performance and accuracy. ● Experience with TensorFlow, PyTorch, Scikit-learn, or similar ML frameworks, and familiarity with APIs like OpenAI or Vertex AI. ● Experience with NLP tools and libraries (e.g., NLTK, SpaCy, GPT, BERT). ● Implement frameworks like LangChain, Anthropic's Constitutional AI, OpenAI's, Hugging Face, and Prompt Engineering techniques to build robust and scalable AI applications. ● Evaluate and analyze RAG solutions and utilise best-in-class LLMs to define customer experience solutions (fine-tune Large Language Models (LLMs)). ● Architect and develop advanced generative AI solutions leveraging state-of-the-art language models (LLMs) such as GPT, LLaMA, PaLM, BLOOM, and others. ● Strong understanding of and experience with open-source multimodal LLM models to customize and create solutions. ● Explore and implement cutting-edge techniques like Few-Shot Learning, Reinforcement Learning, Multi-Task Learning, and Transfer Learning for AI model training and fine-tuning. ● Proficiency in data preprocessing, feature engineering, and data visualization using tools like Pandas, NumPy, and Matplotlib. ● Optimize model performance through experimentation, hyperparameter tuning, and advanced optimization techniques. ● Proficiency in Python with the ability to get hands-on with coding at a deep level. ● Develop and maintain APIs using Python's FastAPI, Flask, or Django for integrating AI capabilities into various systems. ● Ability to write optimized and high-performing scripts on relational databases (e.g., MySQL, PostgreSQL) or non-relational databases (e.g., MongoDB or Cassandra). ● Enthusiasm for continuous learning and professional development in AI and related technologies. ● Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. 
● Knowledge of cloud services like AWS, Google Cloud, or Azure. ● Proficiency with version control systems, especially Git. ● Familiarity with data pre-processing techniques and pipeline development for AI model training. ● Experience with deploying models using Docker and Kubernetes. ● Experience with AWS Bedrock and SageMaker is a plus. ● Strong problem-solving skills with the ability to translate complex business problems into AI solutions.
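The Few-Shot Learning and Prompt Engineering techniques this posting names often reduce, at the application layer, to careful assembly of worked examples into the prompt. A hedged sketch of that assembly (the template format and example labels are invented for illustration; real projects tune the format per model):

```python
def few_shot_prompt(task, examples, query):
    # Assemble a few-shot prompt: task instruction first, then
    # worked input/output pairs, then the new input to complete.
    lines = [task, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Classify the sentiment as positive or negative.",
    [("Great product!", "positive"), ("Arrived broken.", "negative")],
    "Works exactly as described.",
)
```

The resulting string would be passed to an LLM API (OpenAI, Hugging Face, etc.); the few-shot pairs steer the model toward the expected output format without any fine-tuning.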

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


ProArch is seeking an experienced AWS Data Engineer to join our team. As an AWS Data Engineer, you will be responsible for designing, building, and maintaining data solutions on the AWS platform. Job Description: Must-Have Skills: AWS Data Engineer – PySpark, Glue, S3, Athena. Work in the capacity of an AWS Cloud developer. Scripting/programming in Python/PySpark. Design and develop solutions as per the specification. Able to translate functional and technical requirements into detailed designs. Work with partners for regular updates, requirement understanding, and design discussions. AWS Cloud platform services stack: S3, EC2, EMR, Lambda, RDS, DynamoDB, Kinesis, SageMaker, Athena, etc. SQL knowledge. Exposure to data warehousing concepts such as Data Warehouse, Data Lake, Dimensions, etc. Good communication skills are a must.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


We are giving preference to candidates who are available to join immediately or within the month of June. Machine Learning Engineer (Python, AWS) We are seeking an experienced Machine Learning Engineer with 5+ years of hands-on experience in developing and deploying ML solutions. The ideal candidate will have strong Python programming skills and a proven track record working with AWS services for machine learning. Responsibilities: Design, develop, and deploy scalable machine learning models. Implement and optimize ML algorithms using Python. Leverage AWS services (e.g., SageMaker, EC2, S3, Lambda) for ML model training, deployment, and monitoring. Collaborate with data scientists and other engineers to bring ML solutions to production. Ensure the performance, reliability, and scalability of ML systems. Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 5+ years of professional experience as a Machine Learning Engineer. Expertise in Python programming for machine learning. Strong experience with AWS services for ML (SageMaker, EC2, S3, Lambda, etc.). Solid understanding of machine learning algorithms and principles. Experience with MLOps practices is a plus.
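Serving a model behind AWS Lambda, one of the deployment paths this role mentions, typically means a small handler that keeps the model loaded outside the function body and scores each incoming event. A minimal sketch with a stand-in model (the handler signature follows Lambda's Python convention; the toy linear model and its weights are illustrative, not a real artifact):

```python
import json

# Stand-in for a model loaded once per container, outside the handler,
# so warm invocations skip the load cost. In practice this would be
# deserialized from S3 or a SageMaker model artifact.
MODEL_WEIGHTS = {"bias": 0.1, "x1": 0.5, "x2": -0.3}

def predict(features):
    # Toy linear model standing in for a real trained model.
    score = MODEL_WEIGHTS["bias"]
    score += MODEL_WEIGHTS["x1"] * features["x1"]
    score += MODEL_WEIGHTS["x2"] * features["x2"]
    return score

def handler(event, context):
    # Lambda-style entry point: parse the request body, score, respond.
    features = json.loads(event["body"])
    return {
        "statusCode": 200,
        "body": json.dumps({"score": predict(features)}),
    }

resp = handler({"body": json.dumps({"x1": 2.0, "x2": 1.0})}, None)
```

Behind API Gateway, the same handler shape turns the model into an HTTP endpoint; heavier models are usually hosted on SageMaker endpoints instead, with Lambda acting only as a thin proxy.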

Posted 1 week ago

Apply

4.0 years

0 Lacs

India

Remote


Job Title: Data Scientist Location: Remote Job Type: Full-Time | Permanent Experience Required: 4+ Years About the Role: We are looking for a highly motivated and analytical Data Scientist with 4+ years of industry experience to join our data team. The ideal candidate will have a strong background in Python and SQL, and experience deploying machine learning models using AWS SageMaker. You will be responsible for solving complex business problems with data-driven solutions, developing models, and helping scale machine learning systems into production environments. Key Responsibilities: Model Development: Design, develop, and validate machine learning models for classification, regression, and clustering tasks. Work with structured and unstructured data to extract actionable insights and drive business outcomes. Deployment & MLOps: Deploy machine learning models using AWS SageMaker, including model training, tuning, hosting, and monitoring. Build reusable pipelines for model deployment, automation, and performance tracking. Data Exploration & Feature Engineering: Perform data wrangling, preprocessing, and feature engineering using Python and SQL. Conduct exploratory data analysis (EDA) to identify patterns and anomalies. Collaboration: Work closely with data engineers, product managers, and business stakeholders to define data problems and deliver scalable solutions. Present model results and insights to both technical and non-technical audiences. Continuous Improvement: Stay updated on the latest advancements in machine learning, AI, and cloud technologies. Suggest and implement best practices for experimentation, model governance, and documentation. Required Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field. 4+ years of hands-on experience in data science, machine learning, or applied AI roles. Proficiency in Python for data analysis, model development, and scripting. 
Strong SQL skills for querying and manipulating large datasets. Hands-on experience with AWS SageMaker, including model training, deployment, and monitoring. Solid understanding of machine learning algorithms and techniques (supervised/unsupervised). Familiarity with libraries such as Pandas, NumPy, Scikit-learn, Matplotlib, and Seaborn. Preferred Qualifications (Nice to Have): Experience with MLOps tools (e.g., MLflow, SageMaker Pipelines). Exposure to deep learning frameworks like TensorFlow or PyTorch. Knowledge of the AWS data ecosystem (e.g., S3, Redshift, Athena). Experience in A/B testing or statistical experimentation.
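The feature-engineering work this role describes often starts with putting numeric columns on a comparable scale. A dependency-free sketch of z-score standardization (scikit-learn's StandardScaler performs the same transform in production pipelines; the sample values here are arbitrary):

```python
import math

def standardize(values):
    # Z-score scaling: subtract the mean and divide by the
    # population standard deviation, so every feature ends up
    # with mean 0 and unit variance.
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    std = math.sqrt(var)
    return [(v - mean) / std for v in values]

scaled = standardize([10.0, 20.0, 30.0])
```

Scaling matters most for distance- and gradient-based models (k-means, SVMs, neural networks), where features with larger raw ranges would otherwise dominate.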

Posted 1 week ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


About Company Our client is a trusted global innovator of IT and business services. We help clients transform through consulting, industry solutions, business process services, digital & IT modernization, and managed services. Our client enables them, as well as society, to move confidently into the digital future. We are committed to our clients’ long-term success and combine global reach with local client attention to serve them in over 50 countries around the globe. Job Title: Senior AI Cloud Operations Engineer Location: Chennai Experience: 4 to 5 yrs Job Type: Contract to hire Notice Period: Immediate joiner Offshore Profile Summary: We’re looking for a Senior AI Cloud Operations Engineer to build a new AI Cloud Operations team, starting with this strategic position. We are searching for an experienced Senior AI Cloud Operations Engineer with deep expertise in AI technologies to lead our cloud-based AI infrastructure management. This role is integral to ensuring our AI systems' scalability, reliability, and performance, enabling us to deliver cutting-edge solutions. The ideal candidate will have a robust understanding of machine learning frameworks, cloud services architecture, and operations management. Key Responsibilities: Cloud Architecture Design: Design, architect, and manage scalable cloud infrastructure tailored for AI workloads, leveraging platforms like AWS, Azure, or Google Cloud. System Monitoring and Optimization: Implement comprehensive monitoring solutions to ensure high availability and swift performance, utilizing tools like Prometheus, Grafana, or CloudWatch. Collaboration and Model Deployment: Work closely with data scientists to operationalize AI models, ensuring seamless integration with existing systems and workflows. Familiarity with tools such as MLflow or TensorFlow Serving can be beneficial.
Automation and Orchestration: Develop automated deployment pipelines using orchestration tools like Kubernetes and Terraform to streamline operations and reduce manual interventions. Security and Compliance: Ensure that all cloud operations adhere to security best practices and compliance standards, including data privacy regulations like GDPR or HIPAA. Documentation and Reporting: Create and maintain detailed documentation of cloud configurations, procedures, and operational metrics to foster transparency and continuous improvement. Performance Tuning: Conduct regular performance assessments and implement strategies to optimize cloud resource utilization and reduce costs without compromising system effectiveness. Issue Resolution: Rapidly identify, diagnose, and resolve technical issues, minimizing downtime and ensuring maximum uptime. Qualifications: Educational Background: Bachelor’s degree in Computer Science, Engineering, or a related field. Master's degree preferred. Professional Experience: 5+ years of extensive experience in cloud operations, particularly within AI environments. Demonstrated expertise in deploying and managing complex AI systems in cloud settings. Technical Expertise: Deep knowledge of cloud platforms (AWS, Azure, Google Cloud) including their AI-specific services such as AWS SageMaker or Google AI Platform. AI/ML Proficiency: In-depth understanding of AI/ML frameworks and libraries such as TensorFlow, PyTorch, Scikit-learn, along with experience in ML model lifecycle management. Infrastructure as Code: Proficiency in infrastructure-as-code tools such as Terraform and AWS CloudFormation to automate and manage cloud deployment processes. Containerization and Microservices: Expertise in managing containerized applications using Docker and orchestrating services with Kubernetes. 
Soft Skills: Strong analytical, problem-solving, and communication skills, with the ability to work effectively both independently and in collaboration with cross-functional teams. Preferred Qualifications: Advanced certifications in cloud services, such as AWS Certified Solutions Architect or Google Cloud Professional Data Engineer. Experience in advanced AI techniques such as deep learning or reinforcement learning. Knowledge of emerging AI technologies and trends to drive innovation within existing infrastructure. List of Used Tools: Cloud Provider: Azure, AWS, or Google. Performance & Monitoring: Prometheus, Grafana, or CloudWatch. Collaboration and Model Deployment: MLflow or TensorFlow Serving. Automation and Orchestration: Kubernetes and Terraform. Security and Compliance: Data privacy regulations like GDPR or HIPAA. Qualifications: Bachelor's degree in Computer Science (or related field).
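The monitoring-and-alerting duties in this posting (Prometheus, Grafana, CloudWatch) ultimately come down to evaluating threshold rules against metric samples. A toy sketch of that evaluation loop (the rule format, metric names, and thresholds are invented for illustration; real systems express rules in PromQL or CloudWatch alarm definitions):

```python
def evaluate_alerts(samples, rules):
    # Compare the latest sample of each metric against its threshold,
    # the way an alerting rule engine would on each evaluation cycle.
    fired = []
    for metric, threshold in rules.items():
        values = samples.get(metric, [])
        if values and values[-1] > threshold:
            fired.append(metric)
    return fired

# Hypothetical recent samples and alert thresholds.
samples = {"cpu_percent": [41.0, 55.0, 93.5], "p95_latency_ms": [120.0, 180.0]}
rules = {"cpu_percent": 90.0, "p95_latency_ms": 250.0}
alerts = evaluate_alerts(samples, rules)
```

Production rule engines add what this sketch omits: a sustained-duration condition (so a single spike does not page anyone), label-based routing, and deduplication of repeat notifications.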

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana

On-site


Senior Cloud Engineer - AWS Hyderabad, India; Gurgaon, India Information Technology 315801 Job Description About The Role: Grade Level (for internal use): 10 S&P Global Commodity Insights The Role: Senior Cloud Engineer The Location: Hyderabad, Gurgaon The Team: The Cloud Engineering Team is responsible for designing, implementing, and maintaining cloud infrastructure that supports various applications and services within the S&P Global Commodity Insights organization. This team collaborates closely with data science, application development, and security teams to ensure the reliability, security, and scalability of our cloud solutions. The Impact: As a Cloud Engineer, you will play a vital role in deploying and managing cloud infrastructure that supports our strategic initiatives. Your expertise in AWS and cloud technologies will help streamline operations, enhance service delivery, and ensure the security and compliance of our environments. What’s in it for you: This position offers the opportunity to work on cutting-edge cloud technologies and collaborate with various teams across the organization. You will gain exposure to multiple S&P Commodity Insights Divisions and contribute to projects that have a significant impact on the business. This role opens doors for tremendous career opportunities within S&P Global. Responsibilities: Design and deploy cloud infrastructure using core AWS services such as EC2, S3, RDS, IAM, VPC, and CloudFront, ensuring high availability and fault tolerance. Deploy, manage, and scale Kubernetes clusters using Amazon EKS, ensuring high availability, secure networking, and efficient resource utilization. Develop secure, compliant AWS environments by configuring IAM roles/policies, KMS encryption, security groups, and VPC endpoints. Configure logging, monitoring, and alerting with CloudWatch, CloudTrail, and GuardDuty to support observability and incident response. 
Enforce security and compliance controls via IAM policy audits, patching schedules, and automated backup strategies. Monitor infrastructure health, respond to incidents, and maintain SLAs through proactive alerting and runbook execution. Collaborate with data science teams to deploy machine learning models using Amazon SageMaker, managing model training, hosting, and monitoring. Automate and schedule data processing workflows using AWS Glue, Step Functions, Lambda, and EventBridge to support ML pipelines. Optimize infrastructure for cost and performance using AWS Compute Optimizer, CloudWatch metrics, auto-scaling, and Reserved Instances/Savings Plans. Write and maintain Infrastructure as Code (IaC) using Terraform or AWS CloudFormation for repeatable, automated infrastructure deployments. Implement disaster recovery, backups, and versioned deployments using S3 versioning, RDS snapshots, and CloudFormation change sets. Set up and manage CI/CD pipelines using AWS services like CodePipeline, CodeBuild, and CodeDeploy to support application and model deployments. Manage and optimize real-time inference pipelines using SageMaker Endpoints, Amazon Bedrock, and Lambda with API Gateway to ensure reliable, scalable model serving. Support containerized AI workloads using Amazon ECS or EKS, including model serving and microservices for AI-based features. Collaborate with SecOps and SRE teams to uphold security baselines, manage change control, and conduct root cause analysis for outages. Participate in code reviews, design discussions, and architectural planning to ensure scalable and maintainable cloud infrastructure. Maintain accurate and up-to-date infrastructure documentation, including architecture diagrams, access control policies, and deployment processes. Collaborate cross-functionally with application, data, and security teams to align cloud solutions with business and technical goals. 

5.0 years

0 Lacs

Hyderabad, Telangana

On-site

Indeed logo

About the Role: Grade Level (for internal use): 10 S&P Global Commodity Insights The Role: Senior Cloud Engineer The Location: Hyderabad, Gurgaon The Team: The Cloud Engineering Team is responsible for designing, implementing, and maintaining cloud infrastructure that supports various applications and services within the S&P Global Commodity Insights organization. This team collaborates closely with data science, application development, and security teams to ensure the reliability, security, and scalability of our cloud solutions. The Impact: As a Cloud Engineer, you will play a vital role in deploying and managing cloud infrastructure that supports our strategic initiatives. Your expertise in AWS and cloud technologies will help streamline operations, enhance service delivery, and ensure the security and compliance of our environments. What’s in it for you: This position offers the opportunity to work on cutting-edge cloud technologies and collaborate with various teams across the organization. You will gain exposure to multiple S&P Commodity Insights Divisions and contribute to projects that have a significant impact on the business. This role opens doors for tremendous career opportunities within S&P Global. Responsibilities: Design and deploy cloud infrastructure using core AWS services such as EC2, S3, RDS, IAM, VPC, and CloudFront, ensuring high availability and fault tolerance. Deploy, manage, and scale Kubernetes clusters using Amazon EKS, ensuring high availability, secure networking, and efficient resource utilization. Develop secure, compliant AWS environments by configuring IAM roles/policies, KMS encryption, security groups, and VPC endpoints. Configure logging, monitoring, and alerting with CloudWatch, CloudTrail, and GuardDuty to support observability and incident response. Enforce security and compliance controls via IAM policy audits, patching schedules, and automated backup strategies. 
Monitor infrastructure health, respond to incidents, and maintain SLAs through proactive alerting and runbook execution. Collaborate with data science teams to deploy machine learning models using Amazon SageMaker, managing model training, hosting, and monitoring. Automate and schedule data processing workflows using AWS Glue, Step Functions, Lambda, and EventBridge to support ML pipelines. Optimize infrastructure for cost and performance using AWS Compute Optimizer, CloudWatch metrics, auto-scaling, and Reserved Instances/Savings Plans. Write and maintain Infrastructure as Code (IaC) using Terraform or AWS CloudFormation for repeatable, automated infrastructure deployments. Implement disaster recovery, backups, and versioned deployments using S3 versioning, RDS snapshots, and CloudFormation change sets. Set up and manage CI/CD pipelines using AWS services like CodePipeline, CodeBuild, and CodeDeploy to support application and model deployments. Manage and optimize real-time inference pipelines using SageMaker Endpoints, Amazon Bedrock, and Lambda with API Gateway to ensure reliable, scalable model serving. Support containerized AI workloads using Amazon ECS or EKS, including model serving and microservices for AI-based features. Collaborate with SecOps and SRE teams to uphold security baselines, manage change control, and conduct root cause analysis for outages. Participate in code reviews, design discussions, and architectural planning to ensure scalable and maintainable cloud infrastructure. Maintain accurate and up-to-date infrastructure documentation, including architecture diagrams, access control policies, and deployment processes. Collaborate cross-functionally with application, data, and security teams to align cloud solutions with business and technical goals. Stay current with AWS and AI/ML advancements, suggesting improvements or new service adoption where applicable. 
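The real-time inference pattern named in the responsibilities (API Gateway and Lambda in front of a SageMaker endpoint) can be sketched roughly as below. This is a minimal illustration under stated assumptions, not this team's implementation: the endpoint name is hypothetical, and the boto3 client is injectable so the handler can be exercised locally without AWS credentials.

```python
# Hedged sketch: a Lambda handler that forwards an API Gateway request body
# to a SageMaker real-time endpoint. ENDPOINT_NAME is a placeholder, and the
# client parameter exists so the function can be tested with a stub.
ENDPOINT_NAME = "my-model-endpoint"  # hypothetical endpoint name

def handler(event, context, client=None):
    if client is None:  # real AWS path
        import boto3
        client = boto3.client("sagemaker-runtime")
    body = event.get("body") or "{}"
    response = client.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=body,
    )
    # invoke_endpoint returns the model output as a streaming body
    prediction = response["Body"].read().decode("utf-8")
    return {"statusCode": 200, "body": prediction}
```

In local tests, any object exposing `invoke_endpoint` can stand in for the `sagemaker-runtime` client.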
What We’re Looking For: Strong understanding of cloud infrastructure, particularly AWS services and Kubernetes. Proven experience in deploying and managing cloud solutions in a collaborative Agile environment. Ability to present technical concepts to both business and technical audiences. Excellent multi-tasking skills and the ability to manage multiple projects under tight deadlines. Basic Qualifications: BA/BS in computer science, information technology, or a related field. 5+ years of experience in cloud engineering or related roles, specifically with AWS. Experience with Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation. Knowledge of container orchestration and microservices architecture. Familiarity with security best practices in cloud environments. Preferred Qualifications: Extensive Hands-on Experience with AWS Services. Excellent problem-solving skills and the ability to work independently as well as part of a team. Strong communication skills and the ability to influence stakeholders at all levels. Experience with greenfield projects and building cloud infrastructure from scratch. About S&P Global Commodity Insights At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We’re a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating Energy Transition, S&P Global Commodity Insights’ coverage includes oil and gas, power, chemicals, metals, agriculture and shipping. S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world’s foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. 
With every one of our offerings, we help many of the world’s leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit http://www.spglobal.com/commodity-insights. What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. 
We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. 
----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 315801 Posted On: 2025-06-05 Location: Hyderabad, Telangana, India

Posted 1 week ago

Apply

Exploring SageMaker Jobs in India

Amazon SageMaker is a rapidly growing area of India's tech market, with many companies looking to hire professionals with expertise in the platform. Whether you are a seasoned professional or a newcomer to the industry, there are plenty of opportunities waiting for you in the SageMaker job market.

Top Hiring Locations in India

If you are looking to land a SageMaker job in India, here are the top five cities where companies are actively hiring for roles in this field:

  • Bangalore
  • Hyderabad
  • Pune
  • Mumbai
  • Chennai

Average Salary Range

The salary range for SageMaker professionals in India can vary based on experience and location. On average, entry-level professionals can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.

Career Path

In the SageMaker field, a typical career progression may look like this:

  • Junior SageMaker Developer
  • SageMaker Developer
  • Senior SageMaker Developer
  • SageMaker Tech Lead

Related Skills

In addition to expertise in SageMaker itself, professionals in this field are often expected to have knowledge of the following skills:

  • Machine Learning
  • Data Science
  • Python programming
  • Cloud computing (AWS)
  • Deep learning

Interview Questions

Here are some interview questions you may encounter when applying for SageMaker roles, grouped by difficulty level:

  • Basic:
  • What is Amazon SageMaker?
  • How does SageMaker differ from a traditional, self-managed machine learning workflow?
  • What is a SageMaker notebook instance?

  • Medium:
  • How do you deploy a model in SageMaker?
  • Can you explain the process of hyperparameter tuning in SageMaker?
  • What is the difference between SageMaker Ground Truth and SageMaker Processing?
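For the hyperparameter tuning question, interviewers often probe whether you know the request structure SageMaker expects. Below is a minimal sketch of the `HyperParameterTuningJobConfig` dict passed to boto3's `create_hyper_parameter_tuning_job`; the objective metric name and the learning-rate range are illustrative assumptions, not defaults.

```python
# Hedged sketch: build the tuning configuration that boto3's
# create_hyper_parameter_tuning_job expects. The metric name and the
# learning-rate range here are illustrative, not from a real project.
def make_tuning_config(max_jobs=20, max_parallel=2):
    return {
        "Strategy": "Bayesian",
        "HyperParameterTuningJobObjective": {
            "Type": "Minimize",
            "MetricName": "validation:loss",  # hypothetical metric
        },
        "ResourceLimits": {
            "MaxNumberOfTrainingJobs": max_jobs,
            "MaxParallelTrainingJobs": max_parallel,
        },
        "ParameterRanges": {
            "ContinuousParameterRanges": [
                {"Name": "learning_rate", "MinValue": "0.0001", "MaxValue": "0.1"}
            ],
        },
    }

config = make_tuning_config()
```

A strong answer would also mention warm-start tuning and that categorical and integer ranges go in their own keys under `ParameterRanges`.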

  • Advanced:
  • How would you handle model drift in a SageMaker deployment?
  • Can you compare SageMaker with other machine learning platforms in terms of scalability and flexibility?
  • How do you optimize a SageMaker model for cost efficiency?
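The model-drift question usually expects two parts: detection (for example, with SageMaker Model Monitor comparing live traffic against a training-time baseline) and a response such as alerting or retraining. The detection idea can be illustrated with a framework-free sketch; the three-standard-deviation threshold is an arbitrary choice for illustration.

```python
# Hedged sketch: a minimal data-drift check comparing a recent batch of a
# feature against its training-time baseline. Real deployments would use
# SageMaker Model Monitor; this only illustrates the underlying comparison.
def mean_shift_drift(baseline, recent, threshold=3.0):
    """Flag drift when the recent mean moves more than `threshold`
    baseline standard deviations away from the baseline mean."""
    n = len(baseline)
    mean = sum(baseline) / n
    var = sum((x - mean) ** 2 for x in baseline) / n
    std = var ** 0.5 or 1e-12  # guard against zero variance
    recent_mean = sum(recent) / len(recent)
    return abs(recent_mean - mean) / std > threshold

stable = mean_shift_drift([1.0, 1.1, 0.9, 1.0], [1.05, 0.95, 1.0])
drifted = mean_shift_drift([1.0, 1.1, 0.9, 1.0], [5.0, 5.2, 4.9])
```

In an interview, pairing a check like this with an automated retraining trigger (for example, an EventBridge rule kicking off a pipeline) rounds out the answer.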

Closing Remark

As you explore opportunities in the SageMaker job market in India, remember to hone your skills, stay current with industry trends, and approach interviews with confidence. With the right preparation and mindset, you can land your dream job in this exciting and evolving field. Good luck!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies