Jobs
Interviews

1576 Sagemaker Jobs - Page 50

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

1.0 - 3.0 years

3 - 5 Lacs

Hyderabad

Work from Office

What you will do
In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and performing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has deep technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a crucial team member that assists in the design and development of the data pipeline
- Build data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation

Basic Qualifications:
- Master's degree and 1 to 3 years of Computer Science, IT, or related field experience; OR
- Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience; OR
- Diploma and 7 to 9 years of Computer Science, IT, or related field experience

Preferred Qualifications:

Must-Have Skills:
- Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, SparkSQL), including workflow orchestration and performance tuning of big data processing
- Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools
- Excellent problem-solving skills and the ability to work with large, complex datasets
- Solid understanding of data governance frameworks, tools, and best practices
- Knowledge of data protection regulations and compliance requirements

Good-to-Have Skills:
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
- Good understanding of data modeling, data warehousing, and data integration concepts
- Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms

Professional Certifications:
- Certified Data Engineer / Data Analyst (preferably on Databricks or cloud environments)

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Good communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills
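The extract-transform-load responsibilities described above can be illustrated with a minimal, self-contained sketch. This is pure Python with no Spark or AWS dependency, and every field name (user_id, revenue) is invented for illustration:

```python
# Minimal ETL sketch: extract -> transform (with data-quality checks) -> load.
# All field names (user_id, revenue) are illustrative only.

def extract(raw_rows):
    """Pretend source: in practice this would read from S3, a database, or an API."""
    return list(raw_rows)

def transform(rows):
    """Drop rows failing basic data-quality checks and normalize types."""
    clean = []
    for row in rows:
        if row.get("user_id") is None:  # quality rule: the key must exist
            continue
        clean.append({
            "user_id": int(row["user_id"]),
            "revenue": float(row.get("revenue", 0.0)),
        })
    return clean

def load(rows, target):
    """Pretend sink keyed by user_id; a real pipeline writes to a warehouse."""
    for row in rows:
        target[row["user_id"]] = row
    return target

raw = [{"user_id": 1, "revenue": "9.5"}, {"user_id": None}, {"user_id": 2}]
warehouse = load(transform(extract(raw)), {})
print(len(warehouse))  # 2 rows survive the quality check
```

In a production pipeline these three stages would typically be Spark or Glue jobs orchestrated by a workflow tool, but the shape of the logic is the same.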

Posted 2 months ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Description
The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle.

Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You'll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As trusted advisors to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project.

The AWS Professional Services organization is a global team of experts that helps customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries.

Key job responsibilities
As an experienced technology professional, you will be responsible for:
- Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs
- Providing technical guidance and troubleshooting support throughout project delivery
- Collaborating with stakeholders to gather requirements and propose effective migration strategies
- Acting as a trusted advisor to customers on industry trends and emerging technologies
- Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts

About AWS
Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description below, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.

Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating; that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture: Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.

Mentorship & Career Growth: We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship, and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance: We value work-life harmony. Achieving success at work should never come at the expense of your life at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.

Basic Qualifications
- Experience in cloud architecture and implementation
- Bachelor's degree in Computer Science, Engineering, a related field, or equivalent experience
- Proven track record in designing and developing end-to-end Machine Learning and Generative AI solutions, from conception to deployment
- Experience in applying best practices and evaluating alternative and complementary ML and foundational models suitable for given business contexts
- Foundational knowledge of data modeling principles and statistical analysis methodologies, and a demonstrated ability to extract meaningful insights from complex, large-scale datasets

Preferred Qualifications
- AWS experience preferred, with proficiency in a wide range of AWS services (e.g., Bedrock, SageMaker, EC2, S3, Lambda, IAM, VPC, CloudFormation)
- AWS Professional-level certifications (e.g., Machine Learning Specialty, Machine Learning Engineer Associate, Solutions Architect Professional) preferred
- Experience with automation and scripting (e.g., Terraform, Python)
- Knowledge of security and compliance standards (e.g., HIPAA, GDPR)
- Strong communication skills with the ability to explain technical concepts to both technical and non-technical audiences
- Experience in developing and optimizing foundation models (LLMs), including fine-tuning, continuous training, small language model development, and implementation of Agentic AI systems
- Experience in developing and deploying end-to-end machine learning and deep learning solutions

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - AWS Proserve IN – Haryana
Job ID: A2943450

Posted 2 months ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

Years of Experience: Candidates with 4+ years of hands-on experience
Position: Senior Associate
Industry: Telecom / Network Analytics / Customer Analytics

Required Skills
Successful candidates will have demonstrated the following skills and characteristics:

Must Have
- Proven experience with telco data, including call detail records (CDRs), customer churn models, and network analytics
- Deep understanding of predictive modeling for customer lifetime value and usage behavior
- Experience working with telco clients or telco data platforms (e.g., Amdocs, Ericsson, Nokia, AT&T)
- Proficiency in machine learning techniques, including classification, regression, clustering, and time-series forecasting
- Strong command of statistical techniques (e.g., logistic regression, hypothesis testing, segmentation models)
- Strong programming skills in Python or R, and SQL with telco-focused data wrangling
- Exposure to big data technologies used in telco environments (e.g., Hadoop, Spark)
- Experience working in the telecom industry across domains such as customer churn prediction, ARPU modeling, pricing optimization, and network performance analytics
- Strong communication skills to interface with technical and business teams

Nice To Have
- Exposure to cloud platforms (Azure ML, AWS SageMaker, GCP Vertex AI)
- Experience working with telecom OSS/BSS systems or customer segmentation tools
- Familiarity with network performance analytics, anomaly detection, or real-time data processing
- Strong client communication and presentation skills

Roles And Responsibilities
- Assist analytics projects within the telecom domain, driving the design, development, and delivery of data science solutions
- Develop and execute project and analysis plans under the guidance of the Project Manager
- Interact with and advise consultants/clients in the US as a subject matter expert to formalize the data sources to be used, the datasets to be acquired, and the data and use-case clarifications needed to get a strong hold on the data and the business problem to be solved
- Drive and conduct analysis using advanced analytics tools and coach junior team members
- Implement the necessary quality control measures to ensure deliverable integrity, including data quality, model robustness, and explainability for deployments
- Validate analysis outcomes and recommendations with all stakeholders, including the client team
- Build storylines and make presentations to the client team and/or the PwC project leadership team
- Contribute to knowledge- and firm-building activities

Professional And Educational Background
BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA from a reputed institute
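Churn modeling of the kind described above often comes down to a logistic-regression score over usage features. A minimal, self-contained sketch of the inference step in pure Python; the feature names and weights are invented for illustration, not taken from any real model:

```python
import math

# Toy churn scorer: logistic-regression inference in pure Python.
# Feature names and weights are illustrative only.
WEIGHTS = {"calls_per_day": -0.8, "complaints": 1.2, "tenure_years": -0.5}
BIAS = 0.3

def churn_probability(features):
    """Sigmoid of a weighted sum of customer features."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

loyal = {"calls_per_day": 5.0, "complaints": 0.0, "tenure_years": 4.0}
at_risk = {"calls_per_day": 0.5, "complaints": 3.0, "tenure_years": 0.5}

print(round(churn_probability(loyal), 3))
print(round(churn_probability(at_risk), 3))
```

In practice the weights would come from fitting on CDR-derived features with scikit-learn or Spark MLlib; the scoring formula itself is exactly this sigmoid.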

Posted 2 months ago

Apply

2.0 - 5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Requirements
Role/Job Title: Developer
Function/Department: Information Technology

Job Purpose
As a Backend Developer, you will play a crucial role in designing, developing, and maintaining complex backend systems. You will work closely with cross-functional teams to deliver high-quality software solutions and drive the technical direction of our projects. Your experience and expertise will be vital in ensuring the performance, scalability, and reliability of our applications.

Roles and Responsibilities:
- Solid understanding of backend performance optimization and debugging
- Formal training or certification in software engineering concepts, with proficient applied experience
- Strong hands-on experience with Python
- Experience in developing microservices using Python with FastAPI
- Commercial experience in both backend and frontend engineering
- Hands-on experience with AWS cloud-based application development, including EC2, ECS, EKS, Lambda, SQS, SNS, RDS Aurora (MySQL and Postgres), DynamoDB, EMR, and Kinesis
- Strong engineering background in machine learning, deep learning, and neural networks
- Experience with a containerized stack using Kubernetes or ECS for development, deployment, and configuration
- Experience with Single Sign-On/OIDC integration and a deep understanding of OAuth and JWT/JWE/JWS
- Knowledge of AWS SageMaker and data analytics tools
- Proficiency in frameworks such as TensorFlow, PyTorch, or similar

Educational Qualification (Full-time):
Bachelor of Technology (B.Tech) / Bachelor of Science (B.Sc) / Master of Science (M.Sc) / Master of Technology (M.Tech) / Bachelor of Computer Applications (BCA) / Master of Computer Applications (MCA)

Experience: 2-5 Years
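The JWT knowledge asked for above is mostly about the token's structure: three base64url-encoded segments (header, payload, signature) joined by dots. A hedged sketch that decodes the payload segment without verifying the signature; the claims are invented, and a real service must verify the signature with a library such as PyJWT before trusting any claim:

```python
import base64
import json

# A JWT is three base64url segments: header.payload.signature.
# This decodes the payload WITHOUT verifying the signature - for
# illustration only; production code must verify first.
def decode_jwt_payload(token):
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a sample unsigned token just to demonstrate the structure.
header = base64.urlsafe_b64encode(b'{"alg":"none","typ":"JWT"}').rstrip(b"=")
payload = base64.urlsafe_b64encode(b'{"sub":"user-123","admin":false}').rstrip(b"=")
token = b".".join([header, payload, b""]).decode()

claims = decode_jwt_payload(token)
print(claims["sub"])  # user-123
```

JWE and JWS extend this same segment structure with encryption and signing metadata in the header.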

Posted 2 months ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About Ethos
Ethos was built to make it faster and easier to get life insurance for the next million families. Our approach blends industry expertise, technology, and the human touch to find you the right policy to protect your loved ones. We leverage deep technology and data science to streamline the life insurance process, making it more accessible and convenient. Using predictive analytics, we are able to transform a traditionally multi-week process into a modern digital experience for our users that can take just minutes!

We've issued billions in coverage each month and eliminated the traditional barriers, ushering the industry into the modern age. Our full-stack technology platform is the backbone of family financial health. We make getting life insurance easier, faster, and better for everyone.

Our investors include General Catalyst, Sequoia Capital, Accel Partners, Google Ventures, SoftBank, and the investment vehicles of Jay-Z, Kevin Durant, Robert Downey Jr., and others. This year, we were named on CB Insights' Global Insurtech 50 list and BuiltIn's Top 100 Midsize Companies in San Francisco. We are scaling quickly and looking for passionate people to protect the next million families!

About The Role
We are seeking a passionate data scientist for our Risk Platform team. Your role will involve harnessing the power of data to optimize our risk assessment procedures, identifying actionable insights from countless data points, and ensuring our platform remains at the forefront of automated underwriting and fraud prevention. This position offers an opportunity to make a significant impact in a fast-growing startup and to introduce innovative solutions within the life insurance sector.
Duties And Responsibilities
- Design, train, validate, and deploy models to uncover hidden insights and optimize rule-based systems
- Build predictive models for automated underwriting and fraud prevention
- Conduct thorough data analyses to identify patterns, trends, and anomalies
- Collaborate closely with the data analytics team, engineer features, leverage domain knowledge, and partner with actuarial experts
- Work closely with product and engineering teams to embed machine learning models into production
- Regularly evaluate the performance of deployed models, ensuring they remain accurate and relevant
- Refine and recalibrate models based on changing data patterns and feedback loops
- Stay updated on advancements in data science, risk modeling, AI, and NLP
- Partner with leadership and product managers to shape the direction of our risk platform and provide data-driven recommendations
- Clearly communicate intuition, concepts, and potential impact to senior leadership

Qualifications And Skills
- Master's or PhD in Computer Science, Data Science, or a related field
- 5+ years of hands-on experience in data science or machine learning; bonus if this experience is in the medical or life insurance domain
- Deep understanding of various machine learning algorithms and NLP; bonus if you have demonstrated expertise in deep learning
- Proven ability in designing, building, and productionizing machine learning models in real-world scenarios
- Strong expertise in Python and in machine learning libraries/frameworks such as TensorFlow, PyTorch, scikit-learn, pandas, etc.
- Hands-on experience with SageMaker and the ability to independently deploy a model
- Exceptional ability to grasp domain-specific nuances quickly; bonus if there is demonstrated proficiency in applying machine learning to medical or life insurance domains
- Collaborative mindset, eagerness to learn, and willingness to work with cross-functional teams
- Comfortable in a fast-paced startup environment

Don't meet every single requirement?
If you're excited about this role but your past experience doesn't align perfectly with every qualification in the job description, we encourage you to apply anyway. At Ethos we are dedicated to building a diverse, inclusive, and authentic workplace. We are an equal opportunity employer that values diversity and inclusion, and we look for applicants who understand, embrace, and thrive in a multicultural world. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. Pursuant to the SF Fair Chance Ordinance, we will consider employment for qualified applicants with arrest and conviction records. To learn more about what information we collect and how it may be used, please refer to our California Candidate Privacy Notice.
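The duty above of regularly evaluating deployed models usually reduces to computing metrics on freshly labeled data and alerting on drift. A hedged pure-Python sketch; the example flags and the alert threshold are invented for illustration:

```python
# Toy model-monitoring check: precision and recall of a deployed fraud model
# against freshly labeled outcomes. Data and alert threshold are invented.

def precision_recall(predicted, actual):
    """Both inputs are lists of 0/1 flags (1 = fraud)."""
    tp = sum(p and a for p, a in zip(predicted, actual))
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum(a and not p for p, a in zip(predicted, actual))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

predicted = [1, 1, 0, 0, 1, 0]
actual    = [1, 0, 0, 0, 1, 1]
precision, recall = precision_recall(predicted, actual)
needs_recalibration = recall < 0.8   # alert rule: recall drifted too low
print(precision, recall, needs_recalibration)
```

A production version would pull predictions and labels from a warehouse on a schedule and page the team when the alert rule fires; the metric arithmetic is the same.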

Posted 2 months ago

Apply

8.0 years

6 - 7 Lacs

Noida

On-site

Job Description:

Key Responsibilities

Hands-on Development:
- Develop and implement machine learning models and algorithms, including supervised, unsupervised, deep learning, and reinforcement learning techniques
- Implement Generative AI solutions using technologies like RAG (Retrieval-Augmented Generation), vector databases, and frameworks such as LangChain, Hugging Face, and Agentic AI
- Utilize popular AI/ML frameworks and libraries such as TensorFlow, PyTorch, and scikit-learn
- Design and deploy NLP models and techniques, including text classification, RNNs, CNNs, and Transformer-based models like BERT
- Ensure robust end-to-end AI/ML solutions, from data preprocessing and feature engineering to model deployment and monitoring

Technical Proficiency:
- Demonstrate strong programming skills in languages commonly used for data science and ML, particularly Python
- Leverage cloud platforms and services for AI/ML, especially AWS, with knowledge of AWS SageMaker, Lambda, DynamoDB, S3, and other AWS resources

Mentorship:
- Mentor and coach a team of data scientists and machine learning engineers, fostering skill development and professional growth
- Provide technical guidance and support, helping team members overcome challenges and achieve project goals
- Set technical direction and strategy for AI/ML projects, ensuring alignment with business goals and objectives
- Facilitate knowledge sharing and collaboration within the team, promoting best practices and continuous learning

Strategic Advisory:
- Collaborate with cross-functional teams to integrate AI/ML solutions into business processes and products
- Provide strategic insights and recommendations to support decision-making processes
- Communicate effectively with stakeholders at various levels, including technical and non-technical audiences

Qualifications
- Bachelor's degree in a relevant field (e.g., Computer Science) or an equivalent combination of education and experience
- Typically 8-10 years of relevant work experience in AI/ML/GenAI and 12+ years of overall work experience, with a proven ability to manage projects and activities
- Extensive experience with generative AI technologies, including RAG, vector databases, and frameworks such as LangChain, Hugging Face, and Agentic AI
- Proficiency in machine learning algorithms and techniques, including supervised and unsupervised learning, deep learning, and reinforcement learning
- Extensive experience with AI/ML frameworks and libraries such as TensorFlow, PyTorch, and scikit-learn
- Strong knowledge of natural language processing (NLP) techniques and models, including Transformer-based models like BERT
- Proficient programming skills in Python and experience with cloud platforms like AWS
- Experience with AWS cloud resources, including AWS SageMaker, Lambda, DynamoDB, S3, etc., is a plus
- Proven experience leading a team of data scientists or machine learning engineers on complex projects
- Strong project management skills, with the ability to prioritize tasks, allocate resources, and meet deadlines
- Excellent communication skills and the ability to convey complex technical concepts to diverse audiences

Preferred Qualifications
- Experience setting technical direction and strategy for AI/ML projects
- Experience in the insurance domain
- Ability to mentor and coach junior team members, fostering growth and development
- Proven track record of successfully managing AI/ML projects from conception to deployment

Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers, typically through online services such as false websites, or through unsolicited emails claiming to be from the company. These emails may ask recipients to provide personal information or to make payments as part of the illegitimate recruiting process. DXC does not make offers of employment via social media networks, and DXC never asks for any money or payments from applicants at any point in the recruitment process, nor does it ask a job seeker to purchase IT or other equipment on its behalf. More information on employment scams is available here.
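Retrieval-Augmented Generation, named in the responsibilities above, boils down to retrieving the documents most relevant to a query and prepending them to the model's prompt. A toy sketch in which bag-of-words cosine similarity stands in for a real embedding model and vector database; the corpus text is invented:

```python
import math
from collections import Counter

# Toy RAG retrieval: bag-of-words vectors + cosine similarity stand in for
# an embedding model and vector DB. Corpus text is invented.
CORPUS = [
    "The claims API requires an OAuth token for every request.",
    "Model retraining runs nightly on the latest labeled data.",
    "Refunds are processed within five business days.",
]

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, k=1):
    q = vectorize(query)
    ranked = sorted(CORPUS, key=lambda doc: cosine(q, vectorize(doc)), reverse=True)
    return ranked[:k]

context = retrieve("how are refunds processed")
prompt = f"Answer using this context:\n{context[0]}\n\nQuestion: how are refunds processed"
print(context[0])
```

A production system swaps `vectorize` for an embedding model and `CORPUS` for a vector database, then sends `prompt` to an LLM; the retrieve-then-augment shape is unchanged.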

Posted 2 months ago

Apply

7.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

About Hakkoda
Hakkoda, an IBM Company, is a modern data consultancy that empowers data-driven organizations to realize the full value of the Snowflake Data Cloud. We provide consulting and managed services in data architecture, data engineering, analytics, and data science. We are renowned for bringing our clients deep expertise, being easy to work with, and being an amazing place to work! We are looking for curious and creative individuals who want to be part of a fast-paced, dynamic environment, where everyone's input and efforts are valued. We hire outstanding individuals and give them the opportunity to thrive in a collaborative atmosphere that values learning, growth, and hard work. Our team is distributed across North America, Latin America, India, and Europe. If you have the desire to be a part of an exciting, challenging, and rapidly growing Snowflake consulting services company, and if you are passionate about making a difference in this world, we would love to talk to you!

As an AWS Managed Services Architect, you will play a pivotal role in architecting and optimizing the infrastructure and operations of a complex Data Lake environment for BOT clients. You'll leverage your strong expertise with AWS services to design, implement, and maintain scalable and secure data solutions while driving best practices. You will work collaboratively with delivery teams across the U.S., Costa Rica, Portugal, and other regions, ensuring a robust and seamless Data Lake architecture. In addition, you'll proactively engage with clients to support their evolving needs, oversee critical AWS infrastructure, and guide teams toward innovative and efficient solutions. This role demands a hands-on approach, including designing solutions, troubleshooting, optimizing performance, and maintaining operational excellence.

Role Description
- AWS Data Lake Architecture: Design, build, and support scalable, high-performance architectures for complex AWS Data Lake solutions.
- AWS Services Expertise: Deploy and manage cloud-native solutions using a wide range of AWS services, including but not limited to:
  - Amazon EMR (Elastic MapReduce): Optimize and maintain EMR clusters for large-scale big data processing.
  - AWS Batch: Design and implement efficient workflows for batch processing workloads.
  - Amazon SageMaker: Enable data science teams with scalable infrastructure for model training and deployment.
  - AWS Glue: Develop ETL/ELT pipelines using Glue to ensure efficient data ingestion and transformation.
  - AWS Lambda: Build serverless functions to automate processes and handle event-driven workloads.
  - IAM Policies: Define and enforce fine-grained access controls to secure cloud resources and maintain governance.
  - AWS IoT & Timestream: Design scalable solutions for collecting, storing, and analyzing time-series data.
  - Amazon DynamoDB: Build and optimize high-performance NoSQL database solutions.
- Data Governance & Security: Implement best practices to ensure data privacy, compliance, and governance across the data architecture.
- Performance Optimization: Monitor, analyze, and tune AWS resources for performance efficiency and cost optimization. Develop and manage Infrastructure as Code (IaC) using AWS CloudFormation, Terraform, or equivalent tools to automate infrastructure deployment.
- Client Collaboration: Work closely with stakeholders to understand business objectives and ensure solutions align with client needs.
- Team Leadership & Mentorship: Provide technical guidance to delivery teams through design reviews, troubleshooting, and strategic planning.
- Continuous Innovation: Stay current with AWS service updates, industry trends, and emerging technologies to enhance solution delivery.
- Documentation & Knowledge Sharing: Create and maintain architecture diagrams, SOPs, and internal/external documentation to support ongoing operations and collaboration.

Qualifications
- 7+ years of hands-on experience in cloud architecture and infrastructure (preferably AWS)
- 3+ years of experience specifically in architecting and managing Data Lake or big data solutions on AWS
- Bachelor's Degree in Computer Science, Information Systems, or a related field (preferred)
- AWS certifications such as Solutions Architect Professional or Big Data Specialty
- Experience with Snowflake, Matillion, or Fivetran in hybrid cloud environments
- Familiarity with Azure or GCP cloud platforms
- Understanding of machine learning pipelines and workflows

Technical Skills:
- Expertise in AWS services such as EMR, Batch, SageMaker, Glue, Lambda, IAM, IoT, Timestream, DynamoDB, and more
- Strong programming skills in Python for scripting and automation
- Proficiency in SQL and performance tuning for data pipelines and queries
- Experience with IaC tools like Terraform or CloudFormation
- Knowledge of big data frameworks such as Apache Spark, Hadoop, or similar

Data Governance & Security: Proven ability to design and implement secure solutions, with strong knowledge of IAM policies and compliance standards.
Problem-Solving: An analytical and problem-solving mindset to resolve complex technical challenges.
Collaboration: Exceptional communication skills to engage with technical and non-technical stakeholders, the ability to lead cross-functional teams, and a willingness to provide mentorship.

Benefits
- Health insurance
- Paid leave
- Technical training and certifications
- Robust learning and development opportunities
- Incentives
- Toastmasters
- Food program
- Fitness program
- Referral bonus program

Hakkoda is committed to fostering diversity, equity, and inclusion within our teams. A diverse workforce enhances our ability to serve clients and enriches our culture. We encourage candidates of all races, genders, sexual orientations, abilities, and experiences to apply, creating a workplace where everyone can succeed and thrive. Ready to take your career to the next level? Apply today and join a team that's shaping the future!
Hakkoda is an IBM subsidiary, having been acquired by IBM, and will be integrated into the IBM organization. Hakkoda will be the hiring entity. By proceeding with this application, you understand that Hakkoda will share your personal information with other IBM subsidiaries involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here.
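The Infrastructure-as-Code responsibility in the role description above can be automated by generating templates programmatically. A hedged sketch that emits a minimal CloudFormation template as JSON; the bucket name and tag are invented, and a real template would also declare encryption, versioning, and access policies:

```python
import json

# Generate a minimal CloudFormation template as a Python dict.
# Bucket name and tag values are illustrative only.
def s3_bucket_template(bucket_name):
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "DataLakeBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "BucketName": bucket_name,
                    "Tags": [{"Key": "managed-by", "Value": "iac"}],
                },
            }
        },
    }

template = s3_bucket_template("example-data-lake-raw")
print(json.dumps(template, indent=2))
```

The resulting JSON could be handed to `aws cloudformation deploy`; teams preferring Terraform express the same resource declaratively in HCL instead.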

Posted 2 months ago

Apply

2.0 years

0 Lacs

India

On-site

This role is for one of our clients Industry: Technology, Information and Media Seniority level: Associate level Min Experience: 2 years Location: India JobType: full-time About The Role We are looking for a proactive and skilled AWS Developer to join our dynamic team focused on cloud infrastructure and AI-driven solutions. In this role, you will architect, deploy, and maintain scalable and secure cloud environments on AWS, supporting the development and operationalization of machine learning models and AI applications. You will collaborate closely with data scientists, developers, and DevOps teams to ensure seamless integration and robust performance of AI workloads in the cloud. What You’ll Do Architect and build highly available, fault-tolerant, and scalable AWS infrastructure tailored for AI and machine learning workloads. Deploy, manage, and monitor AI/ML models in production using AWS services such as SageMaker, Lambda, EC2, ECS, and EKS. Partner with AI and ML teams to translate model requirements into effective cloud architectures and operational workflows. Automate infrastructure deployment and management through Infrastructure as Code (IaC) using Terraform, CloudFormation, or similar tools. Implement and optimize CI/CD pipelines to streamline model training, validation, and deployment processes. Monitor cloud environments and AI workloads proactively to identify and resolve performance bottlenecks or security vulnerabilities. Enforce best practices for data security, compliance, and governance in handling AI datasets and inference endpoints. Stay updated with AWS advancements and emerging tools to continuously enhance AI infrastructure capabilities. Support troubleshooting efforts, perform root cause analysis, and document solutions to maintain high system reliability. Who You Are 2+ years of hands-on experience working with AWS cloud services, especially in deploying and managing AI/ML workloads. 
Strong knowledge of AWS core services including S3, EC2, Lambda, SageMaker, IAM, CloudWatch, ECR, ECS, EKS, and CloudFormation. Experience deploying machine learning models into production environments and maintaining their lifecycle. Proficient in scripting and programming languages such as Python, Bash, or Node.js for automation and orchestration tasks. Skilled with containerization and orchestration tools such as Docker and Kubernetes (EKS). Familiar with monitoring and alerting solutions like AWS CloudWatch, Prometheus, or Grafana. Understanding of CI/CD methodologies and tools like Jenkins, GitHub Actions, or AWS CodePipeline. Bachelor’s degree in Computer Science, Engineering, or a related technical discipline. Bonus Points For AWS certifications such as AWS Certified Machine Learning – Specialty or AWS Solutions Architect. Hands-on experience with MLOps frameworks (Kubeflow, MLflow) and model version control. Familiarity with big data processing tools like Apache Spark, AWS Glue, or Redshift. Experience working in Agile or Scrum-based development environments.
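The deploy-and-host responsibility above can be sketched concretely. The following is a minimal, hedged illustration of the two boto3 request payloads involved in standing up a SageMaker endpoint (create_model, then create_endpoint_config); the model name, ECR image, S3 artifact path, and role ARN are hypothetical placeholders, and a real deployment would pass these dicts to a `boto3.client("sagemaker")`.

```python
# Sketch only: builds the request payloads for a SageMaker model + endpoint
# config. All resource names below are hypothetical placeholders.

def build_model_request(model_name, image_uri, model_data_url, role_arn):
    """Payload for sagemaker.create_model (boto3)."""
    return {
        "ModelName": model_name,
        "PrimaryContainer": {
            "Image": image_uri,          # serving container in ECR
            "ModelDataUrl": model_data_url,  # model.tar.gz in S3
        },
        "ExecutionRoleArn": role_arn,
    }

def build_endpoint_config_request(config_name, model_name,
                                  instance_type="ml.m5.large", count=1):
    """Payload for sagemaker.create_endpoint_config (boto3)."""
    return {
        "EndpointConfigName": config_name,
        "ProductionVariants": [{
            "VariantName": "AllTraffic",
            "ModelName": model_name,
            "InstanceType": instance_type,
            "InitialInstanceCount": count,
        }],
    }

if __name__ == "__main__":
    model = build_model_request(
        "churn-model",                                    # hypothetical
        "1234.dkr.ecr.us-east-1.amazonaws.com/xgb:1.7",   # hypothetical image
        "s3://my-bucket/models/churn/model.tar.gz",       # hypothetical artifact
        "arn:aws:iam::1234:role/SageMakerRole",           # hypothetical role
    )
    print(build_endpoint_config_request("churn-config", model["ModelName"]))
```

In practice the same payload shapes feed `create_endpoint` and the monitoring setup the listing mentions.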

Posted 2 months ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Our Company Teradata is the connected multi-cloud data platform company for enterprise analytics. Our enterprise analytics solve business challenges from start to scale. Only Teradata gives you the flexibility to handle the massive and mixed data workloads of the future, today. The Teradata Vantage architecture is cloud native, delivered as-a-service, and built on an open ecosystem. These design features make Vantage the ideal platform to optimize price performance in a multi-cloud environment. What You’ll Do The Principal Data Scientist (pre-sales) is an experienced and expert Data Scientist, able to provide industry thought-leadership on Analytics and its application across industries and use-cases. The Principal Data Scientist supports the account team in framing business problems and in identifying analytic solutions that leverage Teradata technology and that are disruptive, innovative - and above all, practical. An articulate and compelling communicator, the Principal Data Scientist establishes our position as an important partner for advanced analytics with customers and prospects, and is a trusted advisor to executives, senior managers, and fellow data scientists alike across a range of target accounts. They are also a hands-on practitioner, ready, willing, and able to roll up their sleeves to deliver POC and short-term pre-sales engagements. The Principal Data Scientist has an excellent theoretical and practical understanding of statistics and machine learning and a strong track record of applying this understanding at scale to drive business benefit. They are insanely curious, a natural problem-solver, and able to effectively promote Teradata technology and solutions to our customers. Who You’ll Work With The successful candidate will work with other expert team members to: Provide pre-sales support at an executive level to the Teradata account teams at a local country, Geo, and International Theatre level.
Help them to position and sell complex Analytic solutions that drive sales of Teradata software. Provide strategic pre-sales consulting to executives and senior managers in our target market. Support the delivery of PoC and PoV projects that demonstrate the viability and applicability of Analytic use-cases and the superiority of Teradata solutions and services. Work with the extended Account team and Sales Analytics Specialists to develop new Analytic propositions that are aligned with industry trends and customer requirements. What Makes You a Qualified Candidate Proven hands-on experience of complex analytics at scale, for example in the areas of IoT and sensor data. Experience with Teradata partners’ analytical products, cloud service providers such as Azure ML and SageMaker, and partner products such as Dataiku and H2O. Strong hands-on programming skills in at least one major analytic programming language and/or tool in addition to SQL. Strong understanding of data engineering and database systems. Recognised in the local country, geo, and International Theatre as the go-to expert. What You’ll Bring Expertise in Data Science with a strong theoretical grounding in statistics, advanced analytics, and machine learning, and at least 10 years of real-world experience in the application of advanced analytics. A passion for knowledge sharing and a demonstrated commitment to continuous professional development. A belief in Teradata's Analytic solutions and services, and a commitment to working with the product, engineering, and consulting teams to ensure that they continue to lead the market. An ability to turn complex technical subject matter into relatable, easy-to-digest content for senior audiences.
A degree-level qualification (preferably a Master's or PhD) in Statistics, Data Science, the physical or biological sciences, or a related discipline. Why We Think You’ll Love Teradata We prioritize a people-first culture because we know our people are at the very heart of our success. We embrace a flexible work model because we trust our people to make decisions about how, when, and where they work. We focus on well-being because we care about our people and their ability to thrive both personally and professionally. We are an anti-racist company because our dedication to Diversity, Equity, and Inclusion is more than a statement. It is a deep commitment to doing the work to foster an equitable environment that celebrates people for all of who they are. Teradata invites all identities and backgrounds in the workplace. We work with deliberation and intent to ensure we are cultivating collaboration and inclusivity across our global organization. We are proud to be an equal opportunity and affirmative action employer. We do not discriminate based upon race, color, ancestry, religion, creed, sex (including pregnancy, childbirth, breastfeeding, or related conditions), national origin, sexual orientation, age, citizenship, marital status, disability, medical condition, genetic information, gender identity or expression, military and veteran status, or any other legally protected status.

Posted 2 months ago

Apply

5.0 years

0 Lacs

India

Remote

Job description This is a permanent work-from-home position from anywhere in India. Notice period: less than 30 days (immediate joiners preferred). We are seeking a Generative AI Engineer with 5+ years of experience in machine learning, deep learning, and large language models (LLMs). The ideal candidate will lead the design, development, and deployment of AI-driven solutions for text, image, and speech generation using cutting-edge GenAI frameworks and cloud platforms. Key Responsibilities: Develop and fine-tune Generative AI models (LLMs, GANs, Diffusion Models, VAEs). Implement NLP, computer vision, and speech-based AI applications. Optimize model performance, scalability, and efficiency for production use. Work with transformer architectures (GPT, BERT, T5, LLaMA, etc.). Deploy AI models on AWS, Azure, or GCP using MLOps and containerization. Design LLM-based applications using LangChain, vector databases, and prompt engineering. Collaborate with cross-functional teams to integrate AI solutions into enterprise applications. Stay ahead of AI/ML trends and advancements to drive innovation. Required Skills: GenAI Frameworks – TensorFlow, PyTorch, Hugging Face, OpenAI API. LLM – Fine-tuning, RAG (Retrieval-Augmented Generation), Prompt Engineering. Cloud AI Services – AWS SageMaker, Azure OpenAI, Google Vertex AI. Programming & Data Engineering – Python, PyTorch, LangChain, SQL, NoSQL. MLOps & Deployment – Docker, Kubernetes, CI/CD, Vector Databases (FAISS, Pinecone).
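The RAG responsibility in the listing above reduces to a retrieve-then-prompt loop. Below is a toy, self-contained sketch of the retrieval step: rank stored text chunks by cosine similarity to a query embedding and splice the best hit into a prompt. The `embed()` here is a deliberately crude character-frequency stand-in for a real embedding model (in production this would come from an embedding API or a vector database such as FAISS or Pinecone).

```python
# Toy RAG retrieval sketch. embed() is a stand-in, NOT a real embedding model.
import math

def embed(text):
    # Crude stand-in embedding: letter-frequency vector.
    vocab = "abcdefghijklmnopqrstuvwxyz"
    return [text.lower().count(c) for c in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(query, chunks, k=1):
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    docs = ["SageMaker hosts models", "Pinecone stores vectors",
            "GDPR governs data"]
    context = retrieve("which service stores vectors?", docs)[0]
    prompt = f"Answer using this context:\n{context}\n\nQ: which service stores vectors?"
    print(prompt)
```

The final `prompt` string is what would be sent to the LLM; swapping in a real embedding model changes only `embed()`.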

Posted 2 months ago

Apply

10.0 years

0 Lacs

Maharashtra, India

On-site

Our Company Teradata is the connected multi-cloud data platform company for enterprise analytics. Our enterprise analytics solve business challenges from start to scale. Only Teradata gives you the flexibility to handle the massive and mixed data workloads of the future, today. The Teradata Vantage architecture is cloud native, delivered as-a-service, and built on an open ecosystem. These design features make Vantage the ideal platform to optimize price performance in a multi-cloud environment. What You’ll Do The Principal Data Scientist (pre-sales) is an experienced and expert Data Scientist, able to provide industry thought-leadership on Analytics and its application across industries and use-cases. The Principal Data Scientist supports the account team in framing business problems and in identifying analytic solutions that leverage Teradata technology and that are disruptive, innovative - and above all, practical. An articulate and compelling communicator, the Principal Data Scientist establishes our position as an important partner for advanced analytics with customers and prospects, and is a trusted advisor to executives, senior managers, and fellow data scientists alike across a range of target accounts. They are also a hands-on practitioner, ready, willing, and able to roll up their sleeves to deliver POC and short-term pre-sales engagements. The Principal Data Scientist has an excellent theoretical and practical understanding of statistics and machine learning and a strong track record of applying this understanding at scale to drive business benefit. They are insanely curious, a natural problem-solver, and able to effectively promote Teradata technology and solutions to our customers. Who You’ll Work With The successful candidate will work with other expert team members to: Provide pre-sales support at an executive level to the Teradata account teams at a local country, Geo, and International Theatre level.
Help them to position and sell complex Analytic solutions that drive sales of Teradata software. Provide strategic pre-sales consulting to executives and senior managers in our target market. Support the delivery of PoC and PoV projects that demonstrate the viability and applicability of Analytic use-cases and the superiority of Teradata solutions and services. Work with the extended Account team and Sales Analytics Specialists to develop new Analytic propositions that are aligned with industry trends and customer requirements. What Makes You a Qualified Candidate Proven hands-on experience of complex analytics at scale, for example in the areas of IoT and sensor data. Experience with Teradata partners’ analytical products, cloud service providers such as Azure ML and SageMaker, and partner products such as Dataiku and H2O. Strong hands-on programming skills in at least one major analytic programming language and/or tool in addition to SQL. Strong understanding of data engineering and database systems. Recognised in the local country, geo, and International Theatre as the go-to expert. What You’ll Bring Expertise in Data Science with a strong theoretical grounding in statistics, advanced analytics, and machine learning, and at least 10 years of real-world experience in the application of advanced analytics. A passion for knowledge sharing and a demonstrated commitment to continuous professional development. A belief in Teradata's Analytic solutions and services, and a commitment to working with the product, engineering, and consulting teams to ensure that they continue to lead the market. An ability to turn complex technical subject matter into relatable, easy-to-digest content for senior audiences.
A degree-level qualification (preferably a Master's or PhD) in Statistics, Data Science, the physical or biological sciences, or a related discipline. Why We Think You’ll Love Teradata We prioritize a people-first culture because we know our people are at the very heart of our success. We embrace a flexible work model because we trust our people to make decisions about how, when, and where they work. We focus on well-being because we care about our people and their ability to thrive both personally and professionally. We are an anti-racist company because our dedication to Diversity, Equity, and Inclusion is more than a statement. It is a deep commitment to doing the work to foster an equitable environment that celebrates people for all of who they are. Teradata invites all identities and backgrounds in the workplace. We work with deliberation and intent to ensure we are cultivating collaboration and inclusivity across our global organization. We are proud to be an equal opportunity and affirmative action employer. We do not discriminate based upon race, color, ancestry, religion, creed, sex (including pregnancy, childbirth, breastfeeding, or related conditions), national origin, sexual orientation, age, citizenship, marital status, disability, medical condition, genetic information, gender identity or expression, military and veteran status, or any other legally protected status.

Posted 2 months ago

Apply

10.0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

Our Company Teradata is the connected multi-cloud data platform company for enterprise analytics. Our enterprise analytics solve business challenges from start to scale. Only Teradata gives you the flexibility to handle the massive and mixed data workloads of the future, today. The Teradata Vantage architecture is cloud native, delivered as-a-service, and built on an open ecosystem. These design features make Vantage the ideal platform to optimize price performance in a multi-cloud environment. What You’ll Do The Principal Data Scientist (pre-sales) is an experienced and expert Data Scientist, able to provide industry thought-leadership on Analytics and its application across industries and use-cases. The Principal Data Scientist supports the account team in framing business problems and in identifying analytic solutions that leverage Teradata technology and that are disruptive, innovative - and above all, practical. An articulate and compelling communicator, the Principal Data Scientist establishes our position as an important partner for advanced analytics with customers and prospects, and is a trusted advisor to executives, senior managers, and fellow data scientists alike across a range of target accounts. They are also a hands-on practitioner, ready, willing, and able to roll up their sleeves to deliver POC and short-term pre-sales engagements. The Principal Data Scientist has an excellent theoretical and practical understanding of statistics and machine learning and a strong track record of applying this understanding at scale to drive business benefit. They are insanely curious, a natural problem-solver, and able to effectively promote Teradata technology and solutions to our customers. Who You’ll Work With The successful candidate will work with other expert team members to: Provide pre-sales support at an executive level to the Teradata account teams at a local country, Geo, and International Theatre level.
Help them to position and sell complex Analytic solutions that drive sales of Teradata software. Provide strategic pre-sales consulting to executives and senior managers in our target market. Support the delivery of PoC and PoV projects that demonstrate the viability and applicability of Analytic use-cases and the superiority of Teradata solutions and services. Work with the extended Account team and Sales Analytics Specialists to develop new Analytic propositions that are aligned with industry trends and customer requirements. What Makes You a Qualified Candidate Proven hands-on experience of complex analytics at scale, for example in the areas of IoT and sensor data. Experience with Teradata partners’ analytical products, cloud service providers such as Azure ML and SageMaker, and partner products such as Dataiku and H2O. Strong hands-on programming skills in at least one major analytic programming language and/or tool in addition to SQL. Strong understanding of data engineering and database systems. Recognised in the local country, geo, and International Theatre as the go-to expert. What You’ll Bring Expertise in Data Science with a strong theoretical grounding in statistics, advanced analytics, and machine learning, and at least 10 years of real-world experience in the application of advanced analytics. A passion for knowledge sharing and a demonstrated commitment to continuous professional development. A belief in Teradata's Analytic solutions and services, and a commitment to working with the product, engineering, and consulting teams to ensure that they continue to lead the market. An ability to turn complex technical subject matter into relatable, easy-to-digest content for senior audiences.
A degree-level qualification (preferably a Master's or PhD) in Statistics, Data Science, the physical or biological sciences, or a related discipline. Why We Think You’ll Love Teradata We prioritize a people-first culture because we know our people are at the very heart of our success. We embrace a flexible work model because we trust our people to make decisions about how, when, and where they work. We focus on well-being because we care about our people and their ability to thrive both personally and professionally. We are an anti-racist company because our dedication to Diversity, Equity, and Inclusion is more than a statement. It is a deep commitment to doing the work to foster an equitable environment that celebrates people for all of who they are. Teradata invites all identities and backgrounds in the workplace. We work with deliberation and intent to ensure we are cultivating collaboration and inclusivity across our global organization. We are proud to be an equal opportunity and affirmative action employer. We do not discriminate based upon race, color, ancestry, religion, creed, sex (including pregnancy, childbirth, breastfeeding, or related conditions), national origin, sexual orientation, age, citizenship, marital status, disability, medical condition, genetic information, gender identity or expression, military and veteran status, or any other legally protected status.

Posted 2 months ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

● Minimum of 4+ years of experience in AI-based application development. ● Fine-tune pre-existing models to improve performance and accuracy. ● Experience with TensorFlow, PyTorch, Scikit-learn, or similar ML frameworks, and familiarity with APIs like OpenAI or Vertex AI. ● Experience with NLP tools and libraries (e.g., NLTK, SpaCy, GPT, BERT). ● Implement frameworks like LangChain, Anthropic's Constitutional AI, OpenAI, Hugging Face, and Prompt Engineering techniques to build robust and scalable AI applications. ● Evaluate and analyze RAG solutions and utilize best-in-class LLMs to define customer experience solutions (fine-tune Large Language Models (LLMs)). ● Architect and develop advanced generative AI solutions leveraging state-of-the-art language models (LLMs) such as GPT, LLaMA, PaLM, BLOOM, and others. ● Strong understanding of and experience with open-source multimodal LLM models to customize and create solutions. ● Explore and implement cutting-edge techniques like Few-Shot Learning, Reinforcement Learning, Multi-Task Learning, and Transfer Learning for AI model training and fine-tuning. ● Proficiency in data preprocessing, feature engineering, and data visualization using tools like Pandas, NumPy, and Matplotlib. ● Optimize model performance through experimentation, hyperparameter tuning, and advanced optimization techniques. ● Proficiency in Python with the ability to get hands-on with coding at a deep level. ● Develop and maintain APIs using Python's FastAPI, Flask, or Django for integrating AI capabilities into various systems. ● Ability to write optimized and high-performing scripts on relational databases (e.g., MySQL, PostgreSQL) or non-relational databases (e.g., MongoDB or Cassandra). ● Enthusiasm for continuous learning and professional development in AI and related technologies. ● Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
● Knowledge of cloud services like AWS, Google Cloud, or Azure. ● Proficiency with version control systems, especially Git. ● Familiarity with data pre-processing techniques and pipeline development for AI model training. ● Experience with deploying models using Docker and Kubernetes. ● Experience with AWS Bedrock and SageMaker is a plus. ● Strong problem-solving skills with the ability to translate complex business problems into AI solutions.
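The few-shot learning and prompt engineering bullets above amount to assembling a structured message list before calling a provider API. Here is a minimal, hedged sketch of a few-shot prompt builder in the chat-message format used by OpenAI- and Hugging Face-style chat APIs; the system text and example pairs are hypothetical, and a real integration would pass the result to the provider's client library.

```python
# Sketch: few-shot chat prompt builder. Example content is hypothetical.

def build_few_shot_prompt(system, examples, query):
    """Return a chat-style message list: one system message, alternating
    user/assistant few-shot pairs, then the live user query."""
    messages = [{"role": "system", "content": system}]
    for user_text, assistant_text in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": query})
    return messages

if __name__ == "__main__":
    msgs = build_few_shot_prompt(
        "Classify the sentiment as positive or negative.",
        [("Great product!", "positive"), ("Terrible support.", "negative")],
        "The delivery was fast and painless.",
    )
    print(len(msgs))  # 1 system + 2*2 example messages + 1 query = 6
```

Keeping the builder as a pure function makes the prompt easy to unit-test and version, which is most of what "prompt engineering" means in an application codebase.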

Posted 2 months ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

ProArch is seeking an experienced AWS Data Engineer to join our team. As an AWS Data Engineer, you will be responsible for designing, building, and maintaining data solutions on the AWS platform. Job Description: Must-Have Skills: AWS Data Engineer - PySpark, Glue, S3, Athena. Work in the capacity of an AWS Cloud developer. Scripting/programming in Python/PySpark. Design/develop solutions as per the specification. Able to translate functional and technical requirements into detailed design. Work with partners for regular updates, requirement understanding, and design discussions. AWS Cloud platform services stack - S3, EC2, EMR, Lambda, RDS, DynamoDB, Kinesis, SageMaker, Athena, etc. SQL knowledge. Exposure to data warehousing concepts like Data Warehouse, Data Lake, Dimensions, etc. Good communication skills are a must.
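Two small helpers illustrate the Glue/Athena side of the stack above: a Hive-style date-partitioned S3 key layout (the layout Glue crawlers and Athena partition pruning expect) and an Athena CTAS statement that materializes query results as Parquet. This is a hedged sketch; the bucket, database, and table names are hypothetical placeholders.

```python
# Sketch: partitioned S3 layout + Athena CTAS SQL. Names are hypothetical.
from datetime import date

def partitioned_key(prefix, dt, filename):
    """Hive-style partition path: .../year=YYYY/month=MM/day=DD/file."""
    return f"{prefix}/year={dt.year}/month={dt.month:02d}/day={dt.day:02d}/{filename}"

def ctas_to_parquet(new_table, source_table, where, output_location):
    """Athena CREATE TABLE AS SELECT that writes results as Parquet."""
    return (
        f"CREATE TABLE {new_table} "
        f"WITH (format = 'PARQUET', external_location = '{output_location}') "
        f"AS SELECT * FROM {source_table} WHERE {where}"
    )

if __name__ == "__main__":
    key = partitioned_key("s3://my-lake/events", date(2024, 6, 3),
                          "part-0000.parquet")
    print(key)  # s3://my-lake/events/year=2024/month=06/day=03/part-0000.parquet
    print(ctas_to_parquet("analytics.events_q2", "raw.events",
                          "event_date >= DATE '2024-04-01'",
                          "s3://my-lake/curated/events_q2/"))
```

The partition columns in the key path are what let Athena skip irrelevant data when a query filters on year/month/day.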

Posted 2 months ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We are giving preference to candidates who are available to join immediately or within the month of June. Machine Learning Engineer (Python, AWS) We are seeking an experienced Machine Learning Engineer with 5+ years of hands-on experience in developing and deploying ML solutions. The ideal candidate will have strong Python programming skills and a proven track record working with AWS services for machine learning. Responsibilities: Design, develop, and deploy scalable machine learning models. Implement and optimize ML algorithms using Python. Leverage AWS services (e.g., SageMaker, EC2, S3, Lambda) for ML model training, deployment, and monitoring. Collaborate with data scientists and other engineers to bring ML solutions to production. Ensure the performance, reliability, and scalability of ML systems. Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 5+ years of professional experience as a Machine Learning Engineer. Expertise in Python programming for machine learning. Strong experience with AWS services for ML (SageMaker, EC2, S3, Lambda, etc.). Solid understanding of machine learning algorithms and principles. Experience with MLOps practices is a plus.
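As a toy illustration of the "implement and optimize ML algorithms using Python" bullet, here is a 1-D linear regression trained by gradient descent in plain Python. This is purely illustrative; production work of the kind the role describes would use scikit-learn or SageMaker built-in algorithms rather than a hand-rolled loop.

```python
# Toy sketch: fit y = w*x + b by gradient descent on mean squared error.

def fit_line(xs, ys, lr=0.01, epochs=2000):
    """Minimize MSE over (w, b) with fixed-step gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of mean((w*x + b - y)^2) w.r.t. w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

if __name__ == "__main__":
    xs = [0.0, 1.0, 2.0, 3.0]
    ys = [1.0, 3.0, 5.0, 7.0]   # generated by y = 2x + 1
    w, b = fit_line(xs, ys)
    print(w, b)  # converges close to 2.0 and 1.0
```

The same loss-gradient-update structure underlies the large-scale training jobs the listing mentions; only the model, data volume, and optimizer change.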

Posted 2 months ago

Apply

4.0 years

0 Lacs

India

Remote

Job Title: Data Scientist Location: Remote Job Type: Full-Time | Permanent Experience Required: 4+ Years About the Role: We are looking for a highly motivated and analytical Data Scientist with 4+ years of industry experience to join our data team. The ideal candidate will have a strong background in Python and SQL, and experience deploying machine learning models using AWS SageMaker. You will be responsible for solving complex business problems with data-driven solutions, developing models, and helping scale machine learning systems into production environments. Key Responsibilities: Model Development: Design, develop, and validate machine learning models for classification, regression, and clustering tasks. Work with structured and unstructured data to extract actionable insights and drive business outcomes. Deployment & MLOps: Deploy machine learning models using AWS SageMaker, including model training, tuning, hosting, and monitoring. Build reusable pipelines for model deployment, automation, and performance tracking. Data Exploration & Feature Engineering: Perform data wrangling, preprocessing, and feature engineering using Python and SQL. Conduct EDA (exploratory data analysis) to identify patterns and anomalies. Collaboration: Work closely with data engineers, product managers, and business stakeholders to define data problems and deliver scalable solutions. Present model results and insights to both technical and non-technical audiences. Continuous Improvement: Stay updated on the latest advancements in machine learning, AI, and cloud technologies. Suggest and implement best practices for experimentation, model governance, and documentation. Required Skills & Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, or related field. 4+ years of hands-on experience in data science, machine learning, or applied AI roles. Proficiency in Python for data analysis, model development, and scripting.
Strong SQL skills for querying and manipulating large datasets. Hands-on experience with AWS SageMaker, including model training, deployment, and monitoring. Solid understanding of machine learning algorithms and techniques (supervised/unsupervised). Familiarity with libraries such as Pandas, NumPy, Scikit-learn, Matplotlib, and Seaborn. Preferred Qualifications (Nice to Have): Experience with MLOps tools (e.g., MLflow, SageMaker Pipelines). Exposure to deep learning frameworks like TensorFlow or PyTorch. Knowledge of AWS data ecosystem (e.g., S3, Redshift, Athena). Experience in A/B testing or statistical experimentation.
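The feature engineering step the listing describes can be sketched with two of its most common transforms: z-scoring a numeric column and one-hot encoding a categorical one. Written in plain Python here for illustration; the role itself would use Pandas/NumPy/Scikit-learn for the same transforms.

```python
# Sketch: z-score normalization and one-hot encoding, plain Python.
import math

def zscore(values):
    """Center to mean 0 and scale to unit (population) standard deviation."""
    mean = sum(values) / len(values)
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    return [(v - mean) / std for v in values]

def one_hot(labels):
    """One column per category, sorted for a stable column order."""
    categories = sorted(set(labels))
    return [[1 if label == c else 0 for c in categories] for label in labels]

if __name__ == "__main__":
    ages = [20.0, 30.0, 40.0]
    plans = ["basic", "pro", "basic"]
    print(zscore(ages))    # mean 0, unit variance
    print(one_hot(plans))  # columns in sorted order: basic, pro
```

Sorting the category set gives a deterministic column order, which matters when the same encoder must be reapplied at inference time.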

Posted 2 months ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About Company Our client is a trusted global innovator of IT and business services. We help clients transform through consulting, industry solutions, business process services, digital & IT modernization, and managed services. Our client enables them, as well as society, to move confidently into the digital future. We are committed to our clients’ long-term success and combine global reach with local client attention to serve them in over 50 countries around the globe. Job Title: Senior AI Cloud Operations Engineer Location: Chennai Experience: 4 to 5 yrs Job Type: Contract to hire Notice Period: Immediate joiner Offshore Profile Summary: We’re looking for a Senior AI Cloud Operations Engineer to start building a new AI Cloud Operations team, beginning with this strategic position. We are searching for an experienced Senior AI Cloud Operations Engineer with deep expertise in AI technologies to lead our cloud-based AI infrastructure management. This role is integral to ensuring our AI systems' scalability, reliability, and performance, enabling us to deliver cutting-edge solutions. The ideal candidate will have a robust understanding of machine learning frameworks, cloud services architecture, and operations management. Key Responsibilities: Cloud Architecture Design: Design, architect, and manage scalable cloud infrastructure tailored for AI workloads, leveraging platforms like AWS, Azure, or Google Cloud. System Monitoring and Optimization: Implement comprehensive monitoring solutions to ensure high availability and swift performance, utilizing tools like Prometheus, Grafana, or CloudWatch. Collaboration and Model Deployment: Work closely with data scientists to operationalize AI models, ensuring seamless integration with existing systems and workflows. Familiarity with tools such as MLflow or TensorFlow Serving can be beneficial.
Automation and Orchestration: Develop automated deployment pipelines using orchestration tools like Kubernetes and Terraform to streamline operations and reduce manual interventions. Security and Compliance: Ensure that all cloud operations adhere to security best practices and compliance standards, including data privacy regulations like GDPR or HIPAA. Documentation and Reporting: Create and maintain detailed documentation of cloud configurations, procedures, and operational metrics to foster transparency and continuous improvement. Performance Tuning: Conduct regular performance assessments and implement strategies to optimize cloud resource utilization and reduce costs without compromising system effectiveness. Issue Resolution: Rapidly identify, diagnose, and resolve technical issues, minimizing downtime and ensuring maximum uptime. Qualifications: Educational Background: Bachelor’s degree in Computer Science, Engineering, or a related field. Master's degree preferred. Professional Experience: 5+ years of extensive experience in cloud operations, particularly within AI environments. Demonstrated expertise in deploying and managing complex AI systems in cloud settings. Technical Expertise: Deep knowledge of cloud platforms (AWS, Azure, Google Cloud) including their AI-specific services such as AWS SageMaker or Google AI Platform. AI/ML Proficiency: In-depth understanding of AI/ML frameworks and libraries such as TensorFlow, PyTorch, Scikit-learn, along with experience in ML model lifecycle management. Infrastructure as Code: Proficiency in infrastructure-as-code tools such as Terraform and AWS CloudFormation to automate and manage cloud deployment processes. Containerization and Microservices: Expertise in managing containerized applications using Docker and orchestrating services with Kubernetes. 
Soft Skills: Strong analytical, problem-solving, and communication skills, with the ability to work effectively both independently and in collaboration with cross-functional teams. Preferred Qualifications: Advanced certifications in cloud services, such as AWS Certified Solutions Architect or Google Cloud Professional Data Engineer. Experience in advanced AI techniques such as deep learning or reinforcement learning. Knowledge of emerging AI technologies and trends to drive innovation within existing infrastructure. List of Used Tools: Cloud Provider: Azure, AWS or Google. Performance & monitor: Prometheus, Grafana, or CloudWatch. Collaboration and Model Deployment: MLflow or TensorFlow Serving Automation and Orchestration: Kubernetes and Terraform Security and Compliance: Data privacy regulations like GDPR or HIPAA. Qualifications Bachelor's degree in Computer Science (or related field)
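The monitoring-and-alerting responsibility above has a concrete AWS form: a CloudWatch alarm on SageMaker endpoint latency. Below is a hedged sketch that only builds the parameter dict for `cloudwatch.put_metric_alarm` (boto3); the alarm name, endpoint name, and SNS topic are hypothetical placeholders, and the threshold conversion assumes SageMaker's `ModelLatency` metric, which is reported in microseconds.

```python
# Sketch: parameters for cloudwatch.put_metric_alarm (boto3).
# Resource names are hypothetical placeholders.

def latency_alarm(alarm_name, endpoint_name, threshold_ms, sns_topic_arn):
    """Alarm when average endpoint latency breaches threshold_ms for
    three consecutive one-minute periods."""
    return {
        "AlarmName": alarm_name,
        "Namespace": "AWS/SageMaker",
        "MetricName": "ModelLatency",
        "Dimensions": [
            {"Name": "EndpointName", "Value": endpoint_name},
            {"Name": "VariantName", "Value": "AllTraffic"},
        ],
        "Statistic": "Average",
        "Period": 60,                      # seconds per datapoint
        "EvaluationPeriods": 3,            # 3 consecutive breaches to alarm
        "Threshold": threshold_ms * 1000,  # ModelLatency is in microseconds
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [sns_topic_arn],   # page via SNS
    }

if __name__ == "__main__":
    print(latency_alarm("prod-endpoint-latency", "churn-endpoint",
                        250, "arn:aws:sns:us-east-1:1234:oncall"))
```

Requiring several consecutive breaches before alarming is the usual way to avoid paging on a single noisy datapoint.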

Posted 2 months ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana

On-site

Senior Cloud Engineer - AWS
Hyderabad, India; Gurgaon, India | Information Technology | 315801

Job Description

About The Role: Grade Level (for internal use): 10

S&P Global Commodity Insights

The Role: Senior Cloud Engineer
The Location: Hyderabad, Gurgaon

The Team: The Cloud Engineering Team is responsible for designing, implementing, and maintaining cloud infrastructure that supports various applications and services within the S&P Global Commodity Insights organization. This team collaborates closely with data science, application development, and security teams to ensure the reliability, security, and scalability of our cloud solutions.

The Impact: As a Cloud Engineer, you will play a vital role in deploying and managing cloud infrastructure that supports our strategic initiatives. Your expertise in AWS and cloud technologies will help streamline operations, enhance service delivery, and ensure the security and compliance of our environments.

What's in it for you: This position offers the opportunity to work on cutting-edge cloud technologies and collaborate with various teams across the organization. You will gain exposure to multiple S&P Commodity Insights Divisions and contribute to projects that have a significant impact on the business. This role opens doors for tremendous career opportunities within S&P Global.

Responsibilities:
- Design and deploy cloud infrastructure using core AWS services such as EC2, S3, RDS, IAM, VPC, and CloudFront, ensuring high availability and fault tolerance.
- Deploy, manage, and scale Kubernetes clusters using Amazon EKS, ensuring high availability, secure networking, and efficient resource utilization.
- Develop secure, compliant AWS environments by configuring IAM roles/policies, KMS encryption, security groups, and VPC endpoints.
- Configure logging, monitoring, and alerting with CloudWatch, CloudTrail, and GuardDuty to support observability and incident response.
- Enforce security and compliance controls via IAM policy audits, patching schedules, and automated backup strategies.
- Monitor infrastructure health, respond to incidents, and maintain SLAs through proactive alerting and runbook execution.
- Collaborate with data science teams to deploy machine learning models using Amazon SageMaker, managing model training, hosting, and monitoring.
- Automate and schedule data processing workflows using AWS Glue, Step Functions, Lambda, and EventBridge to support ML pipelines.
- Optimize infrastructure for cost and performance using AWS Compute Optimizer, CloudWatch metrics, auto-scaling, and Reserved Instances/Savings Plans.
- Write and maintain Infrastructure as Code (IaC) using Terraform or AWS CloudFormation for repeatable, automated infrastructure deployments.
- Implement disaster recovery, backups, and versioned deployments using S3 versioning, RDS snapshots, and CloudFormation change sets.
- Set up and manage CI/CD pipelines using AWS services like CodePipeline, CodeBuild, and CodeDeploy to support application and model deployments.
- Manage and optimize real-time inference pipelines using SageMaker Endpoints, Amazon Bedrock, and Lambda with API Gateway to ensure reliable, scalable model serving.
- Support containerized AI workloads using Amazon ECS or EKS, including model serving and microservices for AI-based features.
- Collaborate with SecOps and SRE teams to uphold security baselines, manage change control, and conduct root cause analysis for outages.
- Participate in code reviews, design discussions, and architectural planning to ensure scalable and maintainable cloud infrastructure.
- Maintain accurate and up-to-date infrastructure documentation, including architecture diagrams, access control policies, and deployment processes.
- Collaborate cross-functionally with application, data, and security teams to align cloud solutions with business and technical goals.
- Stay current with AWS and AI/ML advancements, suggesting improvements or new service adoption where applicable.

What We're Looking For:
- Strong understanding of cloud infrastructure, particularly AWS services and Kubernetes.
- Proven experience in deploying and managing cloud solutions in a collaborative Agile environment.
- Ability to present technical concepts to both business and technical audiences.
- Excellent multi-tasking skills and the ability to manage multiple projects under tight deadlines.

Basic Qualifications:
- BA/BS in computer science, information technology, or a related field.
- 5+ years of experience in cloud engineering or related roles, specifically with AWS.
- Experience with Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
- Knowledge of container orchestration and microservices architecture.
- Familiarity with security best practices in cloud environments.

Preferred Qualifications:
- Extensive hands-on experience with AWS services.
- Excellent problem-solving skills and the ability to work independently as well as part of a team.
- Strong communication skills and the ability to influence stakeholders at all levels.
- Experience with greenfield projects and building cloud infrastructure from scratch.

About S&P Global Commodity Insights
At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We're a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating Energy Transition, S&P Global Commodity Insights' coverage includes oil and gas, power, chemicals, metals, agriculture and shipping. S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI).
S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit http://www.spglobal.com/commodity-insights.

What's In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership

At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.

For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Equal Opportunity Employer

S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.

If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)

Job ID: 315801
Posted On: 2025-06-05
Location: Hyderabad, Telangana, India
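The posting above asks for Infrastructure as Code with Terraform or AWS CloudFormation, and for backups via S3 versioning. As a minimal sketch of what the smallest such artifact looks like (the logical ID `DataBucket` is invented for this example, not taken from the posting), a CloudFormation template declaring one versioned S3 bucket can be built as a plain Python dict and serialized to JSON:

```python
import json

def minimal_s3_template(bucket_logical_id="DataBucket"):
    """Build a minimal CloudFormation template (as a dict) declaring one
    S3 bucket with versioning enabled -- versioning is what backs the
    backup/disaster-recovery responsibility mentioned in the posting."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            bucket_logical_id: {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    "VersioningConfiguration": {"Status": "Enabled"}
                },
            }
        },
    }

# Serialize for deployment with `aws cloudformation deploy` or similar.
template_json = json.dumps(minimal_s3_template(), indent=2)
```

In practice a team following the posting's responsibilities would keep such templates (or the Terraform equivalent) in version control and roll them out through a CodePipeline-style CI/CD flow rather than hand-editing them.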

Posted 2 months ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Category: Technology
Experience: Sr. Manager
Primary Address: Bangalore, Karnataka
Overview: Voyager (94001), India, Bangalore, Karnataka

Senior Manager - Technical Program Management

At Capital One India, we work in a fast-paced and intellectually rigorous environment to solve fundamental business problems at scale. Using advanced analytics, data science and machine learning, we derive valuable insights about product and process design, consumer behavior, regulatory and credit risk, and more from large volumes of data, and use them to build cutting-edge patentable products that drive the business forward.

We're looking for a Senior Manager - Technical Program Management (TPM) to join the Machine Learning Experience (MLX) team! The MLX team is at the forefront of how Capital One builds and deploys responsible ML models and features. We onboard and educate associates on the ML platforms and products that the whole company uses. We drive new innovation and research, and we're working to seamlessly infuse ML into the fabric of the company. The full ML experience we're creating will enable our lines of business to focus their time and resources on advancing their specific machine learning objectives, all while continuing to deliver next-generation machine learning-driven products and services for our customers.

As a Senior Manager, Technical Program Management (TPM) in the MLX team, you will execute on high-priority enterprise-level initiatives and influence across our organization. Specifically, you will partner closely with product, engineering, data science, and other cross-functional teams to create roadmaps, scope programs aligning them with business priorities, define milestones and success metrics, and build scalable, secure, reliable, efficient ML products and platforms. This role will be responsible for big-picture thinking, presenting to executive stakeholders, and holding engineering teams accountable for overarching delivery goals.

Our Senior Manager TPMs have:
- Strong technical backgrounds (ideally building highly scalable platforms, products, or services) with the ability to proactively identify and mitigate technical risks throughout the delivery life-cycle
- Exceptional communication and collaboration skills
- Excellent problem-solving and influencing skills
- A quantitative approach to problem solving and a collaborative approach to implementing holistic solutions; a systems thinker
- Ability to simplify the technically complex and drive well-educated decisions across product, engineering, design, and data science representatives
- Deep focus on execution, follow-through, accountability, and results
- Exceptional cross-team collaboration; able to work across different functions, organizations, and reporting boundaries to get the job done
- Highly tuned emotional intelligence, good listening skills, and deep-seated empathy for teams and partners
- Ability to lead a program team focused on building enterprise Machine Learning capabilities
- Previous experience with machine learning (building models, deploying models, setting up cloud infrastructure and/or data pipelines) and familiarity with major ML frameworks and platforms such as XGBoost, PyTorch, and AWS SageMaker
- Ability to manage program communications with key stakeholders at all levels across the company to enable transparency and timely information sharing
- Ability to serve as the connective tissue across functions and business units, bringing teams together to foster collaboration, improve decision-making, and deliver value for customers, end to end

Basic Qualifications:
- Bachelor's degree
- At least 5 years of experience managing technical programs
- At least 5 years of experience designing and building data-intensive solutions using distributed computing
- At least 3 years of experience building highly scalable products & platforms

Preferred Qualifications:
- 3+ years of experience in building distributed systems & highly available services using cloud computing services/architecture, preferably using AWS
- 3+ years of experience with Agile delivery
- 3+ years of experience delivering large and complex programs, where you own the business or technical vision, collaborate with large cross-functional teams, secure commitments on deliverables, and unblock teams to land business impact
- 2+ years of Machine Learning experience
- Experience in building systems and solutions within a highly regulated environment
- Bachelor's degree in a related technical field (Computer Science, Software Engineering)
- MBA or Master's Degree in a related technical field (Computer Science, Software Engineering) or equivalent experience
- PMP, Lean, Agile, or Six Sigma certification

No agencies please.

Capital One is an equal opportunity employer (EOE, including disability/vet) committed to non-discrimination in compliance with applicable federal, state, and local laws. Capital One promotes a drug-free workplace.
Capital One will consider for employment qualified applicants with a criminal history in a manner consistent with the requirements of applicable laws regarding criminal background inquiries, including, to the extent applicable, Article 23-A of the New York Correction Law; San Francisco, California Police Code Article 49, Sections 4901-4920; New York City's Fair Chance Act; Philadelphia's Fair Criminal Records Screening Act; and other applicable federal, state, and local laws and regulations regarding criminal background inquiries. If you have visited our website in search of information on employment opportunities or to apply for a position, and you require an accommodation, please contact Capital One Recruiting at 1-800-304-9102 or via email at RecruitingAccommodation@capitalone.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodations. For technical support or questions about Capital One's recruiting process, please send an email to Careers@capitalone.com. Capital One does not provide, endorse nor guarantee and is not liable for third-party products, services, educational tools or other information available through this site. Capital One Financial is made up of several different entities. Please note that any position posted in Canada is for Capital One Canada, any position posted in the United Kingdom is for Capital One Europe and any position posted in the Philippines is for Capital One Philippines Service Corp. (COPSSC).

How We Hire: We take finding great coworkers pretty seriously.
Step 1 - Apply: It only takes a few minutes to complete our application and assessment.
Step 2 - Screen and Schedule: If your application is a good match you'll hear from one of our recruiters to set up a screening interview.
Step 3 - Interview(s): Now's your chance to learn about the job, show us who you are, share why you would be a great addition to the team and determine if Capital One is the place for you.
Step 4 - Decision: The team will discuss, and if it's a good fit for us and you, we'll make it official!

How to Pick the Perfect Career Opportunity: Overwhelmed by a tough career choice? Read these tips from Devon Rollins, Senior Director of Cyber Intelligence, to help you accept the right offer with confidence.

Your wellbeing is our priority: Our benefits and total compensation package are designed for the whole person, caring for both you and your family.
- Healthy Body, Healthy Mind: You have options and we have the tools to help you decide which health plans best fit your needs.
- Save Money, Make Money: Secure your present, plan for your future and reduce expenses along the way.
- Time, Family and Advice: Options for your time, opportunities for your family, and advice along the way. It's time to BeWell.

Career Journey: Here's how the team fits together. We're big on growth and knowing who and how coworkers can best support you.
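The MLX role above involves deploying ML models to production. As a loose, illustrative sketch only (not Capital One's or SageMaker's actual mechanism, and with variant names and weights invented for the example), the idea behind a canary rollout on a model-serving endpoint, splitting live traffic across model variants in proportion to assigned weights, fits in a few lines of Python:

```python
import random

def pick_variant(weights, rng=random.random):
    """Pick a serving variant in proportion to its weight.

    Loosely mimics how an endpoint can split live traffic across
    production variants during a canary rollout. `weights` maps
    variant name -> fraction of traffic; values are normalized by
    their total, so they need not sum to exactly 1.
    """
    total = sum(weights.values())
    r = rng() * total
    cumulative = 0.0
    for name, weight in weights.items():
        cumulative += weight
        if r < cumulative:
            return name
    return name  # guard against a floating-point edge at r == total

# Hypothetical rollout: 90% of traffic to the incumbent, 10% to the canary.
weights = {"model-v1": 0.9, "model-v2": 0.1}
```

A real rollout would additionally watch error and latency metrics on the canary and shift weight only when they stay within bounds.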

Posted 2 months ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Details As an AI-First AI/ML Engineer, you'll be architecting and deploying intelligent systems that leverage cutting-edge AI technologies including LangChain orchestration, autonomous AI agents, and robust AWS cloud infrastructure. We are seeking expertise in modern AI/ML frameworks, agentic systems, and scalable backend development using Node.js and Python. Your AI-powered engineering approach will create sophisticated machine learning solutions that drive autonomous decision-making and solve complex business challenges at enterprise scale. About You You are an AI/ML specialist who has fully embraced AI-first development methodologies, using advanced AI tools (e.g., Copilot, ChatGPT, Claude, CodeLlama) to accelerate your machine learning workflows. You're equally comfortable building LangChain orchestration pipelines, deploying Hugging Face models, developing autonomous AI agents, and architecting scalable AWS backend systems using Node.js and Python. You move FAST - capable of shipping complete, production-ready features within 1 week cycles. You are a proactive person and a go-getter, willing to go the extra mile. You understand that modern AI engineering means creating intelligent systems that can reason, learn, and act autonomously while maintaining reliability and performance. You thrive using TDD methods, MLOps practices, and Agile methodologies while focusing on finding elegant solutions to complex AI challenges. 
This is a hybrid role where you'll be spending your time across 4 core functions: Internal Projects (25%) - Building and maintaining OneSeven's internal AI tools and platforms Sales Engineering (25%) - Supporting sales team with technical demos, proof-of-concepts, and client presentations AI-First Engineering and Innovation Sprints (25%) - Rapid prototyping and innovation on cutting-edge AI technologies Forward Deployed Engineering (25%) - Working directly with clients on-site or embedded in their teams to deliver solutions Qualifications Technical Requirements Core AI/ML Skills 4+ years AI/ML development experience with production deployment Fluent English required - strong written and verbal communication skills for direct client interaction Reliable workspace/internet - willing to work extra hours FAST execution mindset - must be able to ship complete features within 1 week Strong system architecture experience - designing scalable, distributed AI/ML systems Expert-level LangChain experience for AI orchestration and workflow management Hugging Face experience - transformers, model integration, and deployment Extensive AI Agent development with LangChain or Google Vertex AI Heavy AWS cloud experience, particularly with Bedrock, SageMaker, and AI/ML services Backend generalist comfortable with Node.js and Python for AI service development Agile methodologies experience, startup environment passion Independent problem-solver, team player willing to work extra hours AI Agent & LangChain Expertise (Required) LangChain framework mastery for complex AI workflow orchestration Hugging Face integration - transformers, model deployment, and API integration AI Agent architecture design with LangChain or Google Vertex AI Prompt engineering and chain-of-thought optimization Vector databases and embedding systems (Pinecone, Pgvector, Chroma) RAG pipeline development and optimization LLM integration across multiple providers (OpenAI, Anthropic, AWS Bedrock, Hugging Face) 
Agentic system design with memory, planning, and execution capabilities Backend & Cloud Infrastructure Heavy AWS Cloud services experience (Lambda, API Gateway, S3, RDS, SageMaker, Bedrock) System architecture design for high-scale, distributed AI/ML applications Microservices architecture and design patterns for AI systems at scale Node.js and Python backend development for AI service APIs RESTful API design and GraphQL for AI service integration Database design and management for AI data workflows Modern JavaScript/TypeScript and Python async programming MLOps & Integration CI/CD pipelines and GitHub Actions for ML model deployment Model versioning, monitoring, and automated retraining workflows Container orchestration (Docker, Kubernetes) for AI services Performance optimization for high-throughput AI systems Modern authentication and secure API design for AI endpoints API security implementation (XSS, CSRF protection) Bonus Qualifications Advanced Hugging Face experience (fine-tuning, custom models, optimization) Multi-modal AI experience (vision, audio, text processing) Advanced prompt engineering and fine-tuning experience DevOps and infrastructure as code (Terraform, CloudFormation) Database optimization for vector search and AI workloads Additional cloud platforms (Azure AI, Google Vertex AI) Knowledge graph integration and semantic reasoning Project Deliverables You'll be working on building a comprehensive AI-powered business intelligence system with autonomous agent capabilities. 
Key deliverables include:

Core AI Agent Platform
Multi-agent orchestration system with LangChain workflow management
Autonomous reasoning agents with tool integration and decision-making capabilities
Intelligent document processing pipeline with advanced OCR and classification
Real-time AI analysis dashboard with predictive insights and recommendations

Advanced AI Workflows
RAG-powered knowledge synthesis with multi-source data integration
Automated business process agents with approval workflows and notifications
AI-driven anomaly detection with proactive alerting and response systems
Intelligent API orchestration with dynamic routing and load balancing
Comprehensive agent performance monitoring with usage analytics and optimization insights

Integration & Deployment Systems
Scalable AWS backend infrastructure with auto-scaling AI services
Production MLOps pipeline with automated model deployment and monitoring
Multi-tenant AI service architecture with usage tracking and billing integration
Real-time AI API gateway with rate limiting and authentication

Benefits/Compensation
Fully remote, contract-based with a U.S. company
$4,000/mo - $8,000/mo depending on experience and project duration
Company-paid PTO plan; international team of 15+

To Apply
SEND YOUR RESUME IN ENGLISH, please. Include the URL of your LinkedIn profile. Include website references, GitHub repositories, and any other online references that highlight your prior work relevant to the qualifications described in this role.
⚠️ AUTOMATIC DISQUALIFICATION: You will be automatically disqualified if your resume is not in English or you don't include your LinkedIn profile URL.

About OneSeven Tech
OneSeven Tech is a premier digital product studio serving both high-growth startups and established enterprises. We've partnered with startup clients who have collectively raised over $100M in venture capital, while our enterprise portfolio includes 2,000+ person hospitality groups and publicly traded NASDAQ companies.
Our passion lies in crafting exceptional AI-powered digital products that drive real business success. Joining OneSeven means working alongside a skilled team of consultants where you'll sharpen your AI/ML expertise, expand your capabilities, and contribute to cutting-edge solutions for industry-leading clients. OST's headquarters is in Miami, Florida, but our employees work remotely worldwide. Our 3 main locations are Miami; Mexico City, Mexico; and Buenos Aires, Argentina.
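The RAG pipeline work this role calls for centers on one core step: retrieving the documents most relevant to a query and packing them into the LLM prompt. Below is a minimal, library-free sketch of that retrieval step; the bag-of-words "embeddings" and all document strings are illustrative stand-ins for a real model-based embedder backed by a vector store such as Pinecone or pgvector.

```python
# Sketch of RAG retrieval: embed docs and query, rank by cosine similarity,
# return the top-k documents as context for the LLM prompt.
# Toy embeddings only; a production pipeline would use a learned embedder.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lowercase bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

# Hypothetical knowledge-base snippets.
docs = [
    "Invoices are processed nightly by the billing agent.",
    "The sales dashboard shows quarterly revenue trends.",
    "Billing disputes are escalated to the finance team.",
]
context = retrieve("how are billing invoices processed", docs, k=2)
prompt = "Answer using this context:\n" + "\n".join(context)
```

The same shape scales up directly: swap `embed` for a model call and `retrieve` for a vector-store query, and the prompt-assembly step is unchanged.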

Posted 2 months ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Description
The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle.
With a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to the specific needs of each customer. You'll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As a trusted advisor to our customers, you will provide guidance on industry trends, emerging technologies, and innovative solutions, and you will lead the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project.
The AWS Professional Services organization is a global team of experts that helps customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries.
Key job responsibilities
As an experienced technology professional, you will be responsible for:
Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs
Providing technical guidance and troubleshooting support throughout project delivery
Collaborating with stakeholders to gather requirements and propose effective migration strategies
Acting as a trusted advisor to customers on industry trends and emerging technologies
Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts

About The Team
Diverse Experiences - AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job below, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.
Why AWS? - Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
Inclusive Team Culture - Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.
Mentorship & Career Growth - We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship, and other career-advancing resources here to help you develop into a better-rounded professional.
Work/Life Balance - We value work-life harmony.
Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.

Basic Qualifications
Experience in cloud architecture and implementation
Bachelor's degree in Computer Science, Engineering, a related field, or equivalent experience
Proven track record in designing and developing end-to-end Machine Learning and Generative AI solutions, from conception to deployment
Experience in applying best practices and evaluating alternative and complementary ML and foundation models suitable for given business contexts
Foundational knowledge of data modeling principles and statistical analysis methodologies, and a demonstrated ability to extract meaningful insights from complex, large-scale datasets
Experience in mentoring junior team members and guiding them on machine learning and data modeling applications

Preferred Qualifications
AWS experience preferred, with proficiency in a wide range of AWS services (e.g., Bedrock, SageMaker, EC2, S3, Lambda, IAM, VPC, CloudFormation)
AWS Professional-level certifications (e.g., Machine Learning Specialty, Machine Learning Engineer Associate, Solutions Architect Professional) preferred
Experience with automation and scripting (e.g., Terraform, Python)
Knowledge of security and compliance standards (e.g., HIPAA, GDPR)
Strong communication skills with the ability to explain technical concepts to both technical and non-technical audiences
Experience in developing and optimizing foundation models (LLMs), including fine-tuning, continuous training, small language model development, and implementation of Agentic AI systems
Experience in developing and deploying end-to-end machine learning and deep learning solutions

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Company - AWS ProServe IN - Karnataka
Job ID: A2941027

Posted 2 months ago

Apply


4.0 - 6.0 years

0 Lacs

India

On-site

The proliferation of machine log data has the potential to give organizations unprecedented real-time visibility into their infrastructure and operations. With this opportunity come tremendous technical challenges around ingesting, managing, and understanding high-volume streams of heterogeneous data. As a Machine Learning Engineer at Sumo Logic, you will actively contribute to the design and development of innovative ML-powered product capabilities that help our customers make sense of their huge amounts of log data. This involves working through the entire feature lifecycle, including ideation, dataset construction, experimental validation, prototyping, production implementation, deployment, and operations.

Responsibilities
Identifying and validating opportunities for the application of ML or data-driven techniques
Assessing requirements and approaches for large-scale data and ML platform components
Driving technical delivery through the full feature lifecycle, from idea to production and operations
Helping the team design and implement extremely high-volume, fault-tolerant, scalable backend systems that process and manage petabytes of customer data
Collaborating within and beyond the team to identify problems and deliver solutions
Working as a member of a team, helping the team respond quickly and effectively to business needs

Requirements
B.Tech, M.Tech, or Ph.D. in Computer Science or a related discipline
4-6 years of industry experience with a proven track record of ownership and delivery
Experience formulating use cases as ML problems and putting ML models into production
Solid grounding in core ML concepts and basic statistics
Experience with software engineering of production-grade services in cloud environments handling data at large scale

Desirable
Cloud-based application and infrastructure deployment and management
Common ML libraries (e.g., scikit-learn, PyTorch) and components (e.g., Airflow, MLflow)
Relevant cloud provider services (e.g., AWS SageMaker)
LLM core concepts, libraries, and application design patterns
Experience in multi-threaded programming and distributed systems
Agile software development experience (test-driven development, iterative and incremental development) is a plus

About Us
Sumo Logic, Inc. empowers the people who power modern, digital business. Sumo Logic enables customers to deliver reliable and secure cloud-native applications through its SaaS analytics platform. The Sumo Logic Continuous Intelligence Platform™ helps practitioners and developers ensure application reliability, secure and protect against modern security threats, and gain insights into their cloud infrastructures. Customers worldwide rely on Sumo Logic to get powerful real-time analytics and insights across observability and security solutions for their cloud-native applications. For more information, visit www.sumologic.com.
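"Formulating use cases as ML problems" often starts with a simple statistical baseline that a learned model must later beat. As an illustrative sketch only (not Sumo Logic's implementation), here is a z-score baseline for log-volume anomaly detection: flag any minute whose log count deviates sharply from the mean. All data values are made up.

```python
# Baseline framing of log anomaly detection: a minute is anomalous if its
# log-line count is more than `threshold` population standard deviations
# from the mean. A trained model would be evaluated against this baseline.
import statistics

def anomalous_minutes(counts: list[int], threshold: float = 3.0) -> list[int]:
    """Return indices of minutes whose log-volume z-score exceeds threshold."""
    mean = statistics.mean(counts)
    sd = statistics.pstdev(counts)
    if sd == 0:
        return []  # perfectly flat traffic: nothing to flag
    return [i for i, c in enumerate(counts) if abs(c - mean) / sd > threshold]

# 23 ordinary minutes of ~100 log lines, then one burst of 990.
volume = [100, 102, 98, 101, 99, 100, 103, 97, 100, 101,
          99, 98, 102, 100, 101, 99, 100, 102, 98, 100,
          101, 99, 100, 990]
spikes = anomalous_minutes(volume)
```

At petabyte scale the same framing survives, but the mean and deviation would be maintained incrementally per time window rather than recomputed over a list.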

Posted 2 months ago

Apply

0 years

0 Lacs

Hyderābād

On-site

Role – AIML Data Scientist
Location: Coimbatore
Mode of Interview: In Person

Job Description:
1. Be a hands-on problem solver with a consultative approach, who can apply Machine Learning & Deep Learning algorithms to solve business challenges
a. Use knowledge of a wide variety of AI/ML techniques and algorithms to find which combinations of these techniques can best solve the problem
b. Improve model accuracy to deliver greater business impact
c. Estimate the business impact of deploying the model
2. Work with the domain/customer teams to understand the business context and data dictionaries, and apply the relevant Deep Learning solution for the given business challenge
3. Work with tools and scripts for sufficiently pre-processing the data and feature engineering for model development - Python / R / SQL / cloud data pipelines
4. Design, develop, and deploy Deep Learning models using TensorFlow / PyTorch
5. Experience in using Deep Learning models with text, speech, image, and video data
a. Design and develop NLP models for text classification, custom entity recognition, relationship extraction, text summarization, topic modeling, reasoning over knowledge graphs, and semantic search, using NLP tools like spaCy and open-source frameworks such as TensorFlow and PyTorch
b. Design and develop image recognition and video analysis models using Deep Learning algorithms and open-source tools like OpenCV
c. Knowledge of state-of-the-art Deep Learning algorithms
6. Optimize and tune Deep Learning models for the best possible accuracy
7. Use visualization tools/modules to explore and analyze outcomes and for model validation, e.g., using Power BI / Tableau
8. Work with application teams to deploy models on the cloud as a service or on-premises
a. Deploy models in a test/control framework for tracking
b. Build CI/CD pipelines for ML model deployment
9. Integrate AI & ML models with other applications using REST APIs and other connector technologies
10. Constantly upskill and stay current with the latest techniques and best practices. Write white papers and create demonstrable assets to summarize the AIML work and its impact.

Technology/Subject Matter Expertise
Sufficient expertise in machine learning and mathematical and statistical sciences
Use of versioning and collaboration tools like Git / GitHub
Good understanding of the landscape of AI solutions - cloud, GPU-based compute, data security and privacy, API gateways, microservices-based architecture, big data ingestion, storage and processing, CUDA programming
Ability to develop prototype-level ideas into solutions that can scale to industrial-grade strength
Ability to quantify and estimate the impact of ML models

Soft Skills Profile
Curiosity to think in fresh and unique ways with the intent of breaking new ground
Ability to share, explain, and "sell" their thoughts, processes, ideas, and opinions, even outside their own span of control
Ability to think ahead and anticipate the needs of solving the problem
Ability to communicate key messages effectively and articulate strong opinions in large forums

Desirable Experience:
Keen contributor to open-source communities, and communities like Kaggle
Ability to process huge amounts of data using PySpark/Hadoop
Development and application of Reinforcement Learning
Knowledge of optimization/genetic algorithms
Operationalizing Deep Learning models for a customer and understanding the nuances of scaling such models in real scenarios
Understanding of stream data processing, RPA, edge computing, AR/VR, etc.
Appreciation of digital ethics and data privacy
Experience working with AI & cognitive services platforms like Azure ML, IBM Watson, AWS SageMaker, and Google Cloud is a big plus
Experience with platforms like DataRobot, CognitiveScale, H2O.ai, etc. is a big plus
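The text-classification task in item 5a is, at its core, a probabilistic labeling problem. As a minimal illustration of that framing (not the deep-learning stack the role actually uses), here is a bag-of-words Naive Bayes classifier in plain Python; all labels and training sentences are hypothetical.

```python
# Minimal Naive Bayes text classifier: score each label by its log prior
# plus the smoothed log likelihood of the words, and pick the best label.
# Illustrative only; production NLP would use spaCy/TensorFlow/PyTorch.
import math
from collections import Counter, defaultdict

def train(samples: list[tuple[str, str]]):
    word_counts = defaultdict(Counter)  # label -> word frequencies
    label_counts = Counter()            # label -> number of samples
    for text, label in samples:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text: str, word_counts, label_counts) -> str:
    vocab = {w for c in word_counts.values() for w in c}
    scores = {}
    for label, n in label_counts.items():
        total = sum(word_counts[label].values())
        score = math.log(n / sum(label_counts.values()))  # log prior
        for w in text.lower().split():
            # Laplace smoothing so unseen words don't zero out a class.
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

data = [("refund my payment", "billing"), ("invoice is wrong", "billing"),
        ("app crashes on login", "bug"), ("error when saving file", "bug")]
wc, lc = train(data)
label = classify("payment invoice issue", wc, lc)
```

A deep-learning model replaces the word counts with learned representations, but the train/score/argmax structure carries over unchanged.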

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies