
661 SageMaker Jobs - Page 10

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

2.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


Working with Us
Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible. Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more at careers.bms.com/working-with-us.

Job Description
At Bristol Myers Squibb, we are inspired by a single vision: transforming patients' lives through science. In oncology, hematology, immunology and cardiovascular disease, with one of the most diverse and promising pipelines in the industry, each of our passionate colleagues contributes to innovations that drive meaningful change. We bring a human touch to every treatment we pioneer. Join us and make a difference.

Data Scientists will work with senior DS leaders to extract insight from complex clinical, translational and real-world data. They will have the opportunity to work on complex data science problems, including modeling techniques to interpret, infer and recommend based on insights from data.

Roles and Responsibilities
- Formulates, implements, tests, and validates predictive models, and implements efficient automated processes for producing modeling results at scale.
- Creates robust models based on statistical and data mining techniques to provide insights and recommendations from large, complex data sets.
- Presents the stories told by data in a visually appealing and easy-to-understand manner.
- Collaborates with cross-functional teams, including but not limited to clinicians, data scientists, translational medicine scientists, statisticians, and IT professionals.
- Proactively builds partnerships with specialist functions and global counterparts to maximize knowledge and available resources.

Requirements
- Ph.D. in a quantitative science (computer science, math, statistics, engineering); 2+ years of experience in healthcare or pharmaceuticals is preferred but not required.
- Strong knowledge of programming languages and ML tooling (R, Python, SageMaker, TensorFlow).
- Ability to summarize technically and analytically complex information for a non-technical audience.
- Demonstrated ability to work in a team environment, with good interpersonal, communication, writing and organizational skills.
- Outstanding technical and analytic skills; proficient at understanding and conceptualizing business problems and implementing analytic or decision-support solutions.
- Experience with popular deep learning and transformer frameworks; ability to code in TensorFlow or PyTorch.
- Experience fine-tuning LLMs, developing RAG systems, and training SLMs.
- Expertise in supervised and unsupervised machine learning algorithms.
- Experience working with clinical and translational data (omics, flow cytometry and images).

Around the world, we are passionate about making an impact on the lives of patients with serious diseases. Empowered to apply our individual talents and diverse perspectives in an inclusive culture, our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues.

Physical presence at the BMS worksite, or physical presence in the field, is an essential job function of this role, which the Company deems critical to collaboration, innovation, productivity, employee well-being and engagement, and Company culture. To protect the safety of our workforce, customers, patients and communities, Company policy requires all employees and workers in the U.S. and Puerto Rico to be fully vaccinated against COVID-19, unless they have received an exception based on an approved request for a medical or religious reasonable accommodation. Therefore, all BMS applicants seeking a role located in the U.S. or Puerto Rico must confirm that they have already received, or are willing to receive, the full COVID-19 vaccination by their start date as a qualification of the role and condition of employment. This requirement is subject to state and local law restrictions and may not be applicable to employees working in certain jurisdictions such as Montana. This requirement is also subject to discussions with collective bargaining representatives in the U.S.

Our company is committed to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request an accommodation prior to accepting a job offer. If you require reasonable accommodation in completing this application or in any part of the recruitment process, or if you are applying to a role based in the U.S. or Puerto Rico and believe that you are unable to receive a COVID-19 vaccine due to a medical condition or sincerely held religious belief, please direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement. Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.

If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career.

Uniquely Interesting Work, Life-changing Careers
With a single vision as inspiring as "Transforming patients' lives through science™", every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues.

On-site Protocol
BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type you are assigned is determined by the nature and responsibilities of your role. Site-essential roles require 100% of shifts onsite at your assigned facility. Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility; for these roles, onsite presence is considered an essential job function, critical to collaboration, innovation, productivity, and a positive Company culture. For field-based and remote-by-design roles, the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function.

BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. The Company strongly recommends that all employees be fully vaccinated for COVID-19 and keep up to date with COVID-19 boosters. BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. If you live in or expect to work from Los Angeles County if hired for this position, please visit https://careers.bms.com/california-residents/ for important additional information.
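
The requirements above mention developing RAG systems. As a minimal, hedged illustration of the retrieval step behind RAG, here is a pure-Python bag-of-words ranker; production systems use dense embeddings and a vector store, and the passages below are invented examples, not BMS data.

```python
# Rank passages by cosine similarity of bag-of-words vectors (toy retrieval).
import math
from collections import Counter

def bow(text):
    """Lower-cased bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, passages, k=1):
    """Return the k passages most similar to the query."""
    q = bow(query)
    ranked = sorted(passages, key=lambda p: cosine(q, bow(p)), reverse=True)
    return ranked[:k]

passages = [
    "flow cytometry measures cell populations",
    "transformer models power modern NLP",
    "clinical trial enrollment criteria",
]
top = retrieve("which models are used in NLP", passages)
```

In a real RAG pipeline the retrieved passages would then be placed into the LLM prompt; only the ranking step is sketched here.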

Posted 1 week ago


5.0 - 8.0 years

10 - 15 Lacs

Kochi

Remote


We are looking for a skilled AWS Cloud Engineer with a minimum of 5 years of hands-on experience managing and implementing cloud-based solutions on AWS. The ideal candidate will have expertise in AWS core services such as S3, EC2, MSK, Glue, DMS, and SageMaker, along with strong programming and containerization skills using Python and Docker.

Responsibilities:
- Design, implement, and manage scalable AWS cloud infrastructure solutions.
- Work hands-on with AWS services: S3, EC2, MSK, Glue, DMS, and SageMaker.
- Develop, deploy, and maintain Python-based applications in cloud environments.
- Containerize applications using Docker and manage deployment pipelines.
- Troubleshoot infrastructure and application issues, review designs, and code solutions.
- Ensure high availability, performance, and security of cloud resources.
- Collaborate with cross-functional teams to deliver reliable and scalable solutions.
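
The deployment-pipeline responsibility above boils down to running stages in dependency order. As a small, hedged sketch (the stage names are hypothetical; a real pipeline would be defined in a tool such as CodePipeline or Step Functions), here is dependency ordering with Kahn's algorithm:

```python
# Order pipeline stages so every prerequisite runs before its dependents.
from collections import deque

def stage_order(deps):
    """deps maps stage -> list of prerequisite stages; returns a valid run order."""
    indeg = {s: len(reqs) for s, reqs in deps.items()}
    dependents = {s: [] for s in deps}
    for s, reqs in deps.items():
        for r in reqs:
            dependents[r].append(s)
    queue = deque(sorted(s for s, d in indeg.items() if d == 0))
    order = []
    while queue:
        s = queue.popleft()
        order.append(s)
        for t in dependents[s]:
            indeg[t] -= 1
            if indeg[t] == 0:
                queue.append(t)
    if len(order) != len(deps):
        raise ValueError("cycle in pipeline definition")
    return order

deps = {
    "build_image": [],
    "unit_tests": ["build_image"],
    "push_ecr": ["unit_tests"],
    "deploy_sagemaker": ["push_ecr"],
}
order = stage_order(deps)
```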

Posted 1 week ago


0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Our Company
Changing the world through digital experiences is what Adobe's all about. We give everyone, from emerging artists to global brands, everything they need to design and deliver exceptional digital experiences! We're passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We're on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

Key Responsibilities
- Platform Development and Evangelism: Build scalable, customer-facing AI platforms. Evangelize the platform with customers and internal stakeholders. Ensure platform scalability, reliability, and performance to meet business needs.
- Machine Learning Pipeline Design: Design ML pipelines for experiment management, model management, feature management, and model retraining. Implement A/B testing of models. Design APIs for model inferencing at scale. Proven expertise with MLflow, SageMaker, Vertex AI, and Azure AI.
- LLM Serving and GPU Architecture: Serve as an SME in LLM serving paradigms, with deep knowledge of GPU architectures. Expertise in distributed training and serving of large language models; proficient in model- and data-parallel training using frameworks like DeepSpeed and serving frameworks like vLLM.
- Model Fine-Tuning and Optimization: Proven expertise in model fine-tuning and optimization techniques to achieve better latencies and accuracies in model results, and to reduce training and resource requirements for fine-tuning LLM and LVM models.
- LLM Models and Use Cases: Extensive knowledge of different LLM models, with insight into the applicability of each model based on use cases. Proven experience delivering end-to-end solutions from engineering to production for specific customer use cases.
- DevOps and LLMOps Proficiency: Proven expertise in DevOps and LLMOps practices. Knowledgeable in Kubernetes, Docker, and container orchestration. Deep understanding of LLM orchestration frameworks like Flowise, Langflow, and LangGraph.

Skill Matrix
- LLM: Hugging Face OSS LLMs, GPT, Gemini, Claude, Mixtral, Llama
- LLM Ops: MLflow, LangChain, LangGraph, LangFlow, Flowise, LlamaIndex, SageMaker, AWS Bedrock, Vertex AI, Azure AI
- Databases/Data warehouses: DynamoDB, Cosmos DB, MongoDB, RDS, MySQL, PostgreSQL, Aurora, Spanner, Google BigQuery
- Cloud: AWS/Azure/GCP
- DevOps (knowledge): Kubernetes, Docker, Fluentd, Kibana, Grafana, Prometheus
- Cloud certifications (bonus): AWS Professional Solutions Architect, AWS Machine Learning Specialty, Azure Solutions Architect Expert
- Languages: Proficient in Python, SQL, JavaScript

Adobe is proud to be an Equal Employment Opportunity employer. We do not discriminate based on gender, race or color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. Adobe aims to make Adobe.com accessible to any and all users. If you have a disability or special need that requires accommodation to navigate our website or complete the application process, email accommodations@adobe.com or call (408) 536-3015.
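
The responsibilities above include A/B testing of models. One common approach is deterministic bucketing: hash a stable user id so the same user always hits the same model variant. A minimal sketch, assuming a 50/50 split and hypothetical variant names:

```python
# Deterministic A/B assignment of users to model variants via hashing.
import hashlib

def assign_variant(user_id, variants=("model_a", "model_b")):
    """Hash the id so repeated requests from one user get the same variant."""
    h = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return variants[h % len(variants)]

# Stable: repeated calls with the same id always agree.
v1 = assign_variant("user-42")
v2 = assign_variant("user-42")
```

Because assignment depends only on the id, no per-user state needs to be stored, and traffic splits can be changed by adjusting the variant tuple.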

Posted 1 week ago


12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

Years of Experience: Candidates with 12+ years of hands-on experience
Position: Senior Manager

Required Skills
Must have:
- Deep expertise in AI/ML solution design, including supervised and unsupervised learning, deep learning, NLP, and optimization.
- Strong hands-on experience with ML/DL frameworks such as TensorFlow, PyTorch, scikit-learn, H2O, and XGBoost.
- Solid programming skills in Python, PySpark, and SQL, with a strong foundation in software engineering principles; programming skills in either Scala or R, with experience using Flask and FastAPI.
- Proven track record of building end-to-end AI pipelines, including data ingestion, model training, testing, and production deployment.
- Experience with MLOps tools such as MLflow, Airflow, DVC, and Kubeflow for model tracking, versioning, and monitoring; knowledgeable in integrating DevOps, MLOps, and DataOps practices to enhance operational efficiency and model deployment.
- Understanding of big data technologies such as Apache Spark (including PySpark and Databricks), Hive, and Delta Lake for scalable model development.
- Expertise in AI solution deployment across cloud platforms (GCP, AWS, Azure) using services such as Vertex AI, SageMaker, and Azure ML.
- Experience in REST API development, NoSQL database design, and RDBMS design and optimization.
- Familiarity with API-based AI integration and containerization technologies such as Docker and Kubernetes.
- Proficiency in data storytelling and visualization tools such as Tableau, Power BI, Looker, and Streamlit.
- Experience with software engineering practices, including GitHub, CI/CD, code testing, and analysis.
- Strong understanding of foundational data science concepts, including statistics, linear algebra, and machine learning principles.
- Familiarity with observability and monitoring tools such as Prometheus and the ELK stack, adhering to SRE principles and techniques.
- Cloud or data engineering certifications or specializations (e.g., Google Professional Machine Learning Engineer; Microsoft Certified: Azure AI Engineer Associate, Exam AI-102; AWS Certified Machine Learning - Specialty, MLS-C01; Databricks Certified Machine Learning).

Nice to have:
- Experience implementing generative AI, LLMs, or advanced NLP use cases.
- Exposure to real-time AI systems, edge deployment, or federated learning.
- Strong executive presence and experience communicating with senior leadership or CXO-level clients.

Roles and Responsibilities
- Lead and oversee complex AI/ML programs, ensuring alignment with business strategy and delivering measurable outcomes.
- Serve as a strategic advisor to clients on AI adoption, architecture decisions, and responsible AI practices.
- Design and review scalable AI architectures, ensuring performance, security, and compliance.
- Supervise the development of machine learning pipelines, enabling model training, retraining, monitoring, and automation.
- Present technical solutions and business value to executive stakeholders through impactful storytelling and data visualization.
- Build, mentor, and lead high-performing teams of data scientists, ML engineers, and analysts.
- Drive innovation and capability development in areas such as generative AI, optimization, and real-time analytics.
- Contribute to business development efforts, including proposal creation, thought leadership, and client engagements.
- Partner effectively with cross-functional teams to develop, operationalize, integrate, and scale new algorithmic products.
- Develop code, CI/CD, and MLOps pipelines, including automated tests, and deploy models to cloud compute endpoints.
- Manage cloud resources and build accelerators to enable other engineers, with experience working across two hyperscale clouds.
- Demonstrate effective communication, coaching, and leadership of junior engineers, with a successful track record of building production-grade AI products for large organizations.

Professional and Educational Background
BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA from a reputed institute
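
The MLOps monitoring duties above usually involve a drift metric comparing training-time and live score distributions. One widely used choice is the Population Stability Index (PSI); a minimal pure-Python sketch follows (the bin edges and the common 0.2 alert threshold are conventions, not prescriptions, and the sample data is invented):

```python
# Population Stability Index between an expected and an actual distribution.
import math

def psi(expected, actual, edges):
    """PSI over half-open bins [edges[i], edges[i+1]); 0 means no drift."""
    def frac(xs, lo, hi):
        n = sum(1 for x in xs if lo <= x < hi)
        return max(n / len(xs), 1e-6)  # floor avoids log(0) for empty bins
    total = 0.0
    for lo, hi in zip(edges, edges[1:]):
        e, a = frac(expected, lo, hi), frac(actual, lo, hi)
        total += (a - e) * math.log(a / e)
    return total

train = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7]   # hypothetical training scores
drift = psi(train, list(train), edges=[0.0, 0.5, 1.01])  # identical -> ~0
```

In practice the PSI would be computed on a schedule against recent inference logs, with values above roughly 0.2 triggering an investigation or retraining.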

Posted 1 week ago


4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

Years of Experience: Candidates with 4+ years of hands-on experience
Position: Senior Associate
Industry: Supply Chain/Forecasting/Financial Analytics

Required Skills
Must have:
- Strong supply chain domain knowledge (inventory planning, demand forecasting, logistics).
- Well-versed, hands-on experience with optimization methods such as linear programming, mixed integer programming, and scheduling optimization.
- Familiarity with third-party optimization solvers such as Gurobi is an added advantage.
- Proficiency in forecasting techniques (e.g., Holt-Winters, ARIMA, ARIMAX, SARIMA, SARIMAX, FBProphet, NBeats) and machine learning techniques (supervised and unsupervised).
- Experience using at least one major cloud platform (AWS, Azure, GCP), such as: AWS: SageMaker, Redshift, Glue, Lambda, QuickSight; Azure: ML Studio, Synapse Analytics, Data Factory, Power BI; GCP: BigQuery, Vertex AI, Dataflow, Cloud Composer, Looker.
- Experience developing, deploying, and monitoring ML models on cloud infrastructure.
- Expertise in Python, SQL, data orchestration, and cloud-native data tools.
- Hands-on experience with cloud-native data lakes and lakehouses (e.g., Delta Lake, BigLake).
- Familiarity with infrastructure-as-code (Terraform/CDK) for cloud provisioning.
- Knowledge of visualization tools (Power BI, Tableau, Looker) integrated with cloud backends.
- Strong command of statistical modeling, testing, and inference.
- Advanced capabilities in data wrangling, transformation, and feature engineering.
- Familiarity with MLOps, containerization (Docker, Kubernetes), and orchestration tools (e.g., Airflow).
- Strong communication and stakeholder engagement skills at the executive level.

Roles and Responsibilities
- Assist analytics projects within the supply chain domain, driving design, development, and delivery of data science solutions.
- Develop and execute project and analysis plans under the guidance of the Project Manager.
- Interact with and advise consultants/clients in the US as a subject matter expert to formalize the data sources to be used, the datasets to be acquired, and the data and use-case clarifications needed to get a strong hold on the data and the business problem to be solved.
- Drive and conduct analysis using advanced analytics tools and coach junior team members.
- Implement the necessary quality control measures to ensure deliverable integrity, including data quality, model robustness, and explainability for deployments.
- Validate analysis outcomes and recommendations with all stakeholders, including the client team.
- Build storylines and make presentations to the client team and/or PwC project leadership team.
- Contribute to knowledge- and firm-building activities.

Professional and Educational Background
BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA from a reputed institute
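
The forecasting techniques listed above (Holt-Winters, ARIMA, and so on) would normally come from libraries such as statsmodels or Prophet. As a minimal illustration, here is simple exponential smoothing, the building block that Holt-Winters extends with trend and seasonality terms; the series and alpha = 0.5 are arbitrary example values:

```python
# One-step-ahead forecast via simple exponential smoothing.
def ses_forecast(series, alpha=0.5):
    """Update the level as a weighted average of each new observation."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

# level: 10 -> 11 -> 11 -> 12, so the next-step forecast is 12.0
f = ses_forecast([10.0, 12.0, 11.0, 13.0], alpha=0.5)
```

Higher alpha weights recent demand more heavily; choosing it (and the trend/seasonal analogues in Holt-Winters) is typically done by minimizing in-sample forecast error.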

Posted 1 week ago


5.0 - 10.0 years

0 Lacs

India

On-site


About Oportun
Oportun (Nasdaq: OPRT) is a mission-driven fintech that puts its 2.0 million members' financial goals within reach. With intelligent borrowing, savings, and budgeting capabilities, Oportun empowers members with the confidence to build a better financial future. Since inception, Oportun has provided more than $16.6 billion in responsible and affordable credit, saved its members more than $2.4 billion in interest and fees, and helped its members save an average of more than $1,800 annually. Oportun has been certified as a Community Development Financial Institution (CDFI) since 2009.

Working at Oportun
Working at Oportun means enjoying a differentiated experience of being part of a team that fosters a diverse, equitable and inclusive culture where we all feel a sense of belonging and are encouraged to share our perspectives. This inclusive culture is directly connected to our organization's performance and ability to fulfill our mission of delivering affordable credit to those left out of the financial mainstream. We celebrate and nurture our inclusive culture through our employee resource groups.

Company Overview
At Oportun, we are on a mission to foster financial inclusion for all by providing affordable and responsible lending solutions to underserved communities. As a purpose-driven financial technology company, we believe in empowering our customers with access to responsible credit that can positively transform their lives. Our relentless commitment to innovation and data-driven practices has positioned us as a leader in the industry, and we are actively seeking exceptional individuals to join our team as Senior Software Engineer to play a critical role in driving positive change.

Position Overview
We are seeking a highly skilled Platform Engineer with expertise in building self-serve platforms that combine real-time ML deployment and advanced data engineering capabilities. This role requires a blend of cloud-native platform engineering, data pipeline development, and deployment expertise. The ideal candidate will have a strong background in implementing data workflows and building self-serve platforms for ML pipelines, while enabling seamless deployments.

Responsibilities
- Platform Engineering: Design and build self-serve platforms that support real-time ML deployment and robust data engineering workflows. Create APIs and backend services using Python and FastAPI to manage and monitor ML workflows and data pipelines.
- Real-Time ML Deployment: Implement platforms for real-time ML inference using tools like AWS SageMaker and Databricks. Enable model versioning, monitoring, and lifecycle management with observability tools such as New Relic.
- Data Engineering: Build and optimise ETL/ELT pipelines for data preprocessing, transformation, and storage using PySpark and Pandas. Develop and manage feature stores to ensure consistent, high-quality data for ML model training and deployment. Design scalable, distributed data pipelines on platforms like AWS, integrating tools such as DynamoDB, PostgreSQL, MongoDB, and MariaDB.
- CI/CD and Automation: Build CI/CD pipelines using Jenkins, GitHub Actions, and other tools for automated deployments and testing. Automate data validation and monitoring processes to ensure high-quality and consistent data workflows.
- Documentation and Collaboration: Create and maintain detailed technical documentation, including high-level and low-level architecture designs. Collaborate with cross-functional teams to gather requirements and deliver solutions that align with business goals. Participate in Agile processes such as sprint planning, daily standups, and retrospectives using tools like Jira.

Required Qualifications
- 5-10 years of experience in IT.
- 5-8 years of experience in platform backend engineering.
- 1+ year of experience in DevOps and data engineering roles.
- Hands-on experience with real-time ML model deployment and data engineering workflows.

Technical Skills
- Strong expertise in Python and experience with Pandas, PySpark, and FastAPI.
- Proficiency in container orchestration tools such as Kubernetes (K8s) and Docker.
- Advanced knowledge of AWS services like SageMaker, Lambda, DynamoDB, EC2, and S3.
- Proven experience building and optimizing distributed data pipelines using Databricks and PySpark.
- Solid understanding of databases such as MongoDB, DynamoDB, MariaDB, and PostgreSQL.
- Proficiency with CI/CD tools like Jenkins, GitHub Actions, and related automation frameworks.
- Hands-on experience with observability tools like New Relic for monitoring and troubleshooting.

We are proud to be an Equal Opportunity Employer and consider all qualified applicants for employment opportunities without regard to race, age, color, religion, gender, national origin, disability, sexual orientation, veteran status or any other category protected by the laws or regulations in the locations where we operate. California applicants can find a copy of Oportun's CCPA Notice here: https://oportun.com/privacy/california-privacy-notice/. We will never request personal identifiable information (bank, credit card, etc.) before you are hired. We do not charge you for pre-employment fees such as background checks, training, or equipment. If you think you have been a victim of fraud by someone posing as us, please report your experience to the FBI's Internet Crime Complaint Center (IC3).
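
The role above includes developing feature stores. The core idea is point-in-time lookup: training and serving must both see the latest feature value as of a given timestamp, never a future one. A toy in-memory sketch (real deployments use SageMaker Feature Store, Feast, or similar; this API and the member/feature names are hypothetical):

```python
# Toy feature store with point-in-time ("as of") retrieval.
class FeatureStore:
    def __init__(self):
        self._rows = {}  # (entity_id, feature) -> list of (ts, value)

    def put(self, entity_id, feature, ts, value):
        self._rows.setdefault((entity_id, feature), []).append((ts, value))

    def get_as_of(self, entity_id, feature, ts):
        """Latest value at or before ts, avoiding training-time leakage."""
        rows = self._rows.get((entity_id, feature), [])
        past = [(t, v) for t, v in rows if t <= ts]
        return max(past)[1] if past else None

fs = FeatureStore()
fs.put("member-1", "avg_balance", ts=100, value=250.0)
fs.put("member-1", "avg_balance", ts=200, value=300.0)
val = fs.get_as_of("member-1", "avg_balance", ts=150)  # sees only ts<=150
```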

Posted 1 week ago


10.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Solutions Architect / Technical Lead - AI & Automation

Key Responsibilities
- Solution Architecture & Development: Design end-to-end solutions using Node.js (backend) and Vue.js (frontend) for custom portals and administration interfaces. Integrate Azure AI services, Google OCR, and Azure OCR into client workflows.
- AI/ML Engineering: Develop and optimize vision-based AI models (Layout Parsing/LP, Layout Inference/LI, Layout Transformation/LT) using Python. Implement NLP pipelines for document extraction, classification, and data enrichment.
- Cloud & Database Management: Architect and optimize MongoDB databases hosted on Azure for scalability, security, and performance. Manage cloud infrastructure (Azure) for AI workloads, including containerization and serverless deployments.
- Technical Leadership: Lead cross-functional teams (AI engineers, DevOps, BAs) in solution delivery. Troubleshoot complex technical issues in OCR accuracy, AI model drift, or system integration.
- Client Enablement: Advise clients on technical best practices for scaling AI solutions. Document architectures, conduct knowledge transfers, and mentor junior engineers.

Required Technical Expertise
- Frontend/Portal: Vue.js (advanced components, state management), Node.js (Express, REST/GraphQL APIs).
- AI/ML Stack: Python (PyTorch/TensorFlow), Azure AI (Cognitive Services, Computer Vision), NLP techniques (NER, summarization).
- Layout Engineering: LP/LI/LT for complex documents (invoices, contracts).
- OCR Technologies: Production experience with Google Vision OCR and Azure Form Recognizer.
- Database & Cloud: MongoDB (sharding, aggregation, indexing) hosted on Azure (Cosmos DB, Blob Storage, AKS); Infrastructure-as-Code (Terraform/Bicep); CI/CD pipelines (Azure DevOps).
- Experience: 10+ years in software development, including 5+ years specializing in AI/ML, OCR, or document automation. Proven track record deploying enterprise-scale solutions in cloud environments (Azure preferred).

Preferred Qualifications
- Certifications: Azure Solutions Architect Expert, MongoDB Certified Developer, or Google Cloud AI/ML.
- Experience with alternative OCR tools (ABBYY, Tesseract) or AI platforms (GCP Vertex AI, AWS SageMaker).
- Knowledge of DocuSign CLM, Coupa, or SAP Ariba integrations.
- Familiarity with Kubernetes, Docker, and MLOps practices.
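
The document-extraction pipelines described above often finish with a field-extraction step over OCR output. As a minimal, hedged sketch (the field patterns and sample text are invented; production systems combine layout models like LP/LI/LT with learned extractors, not plain regexes):

```python
# Pull invoice fields out of OCR'd text with regular expressions.
import re

PATTERNS = {
    "invoice_no": re.compile(r"Invoice\s*(?:No\.?|#)\s*:?\s*(\w+)", re.I),
    "total": re.compile(r"Total\s*:?\s*\$?([\d,]+\.\d{2})", re.I),
}

def extract_fields(text):
    """Return the first match for each configured field, if present."""
    out = {}
    for name, pat in PATTERNS.items():
        m = pat.search(text)
        if m:
            out[name] = m.group(1)
    return out

ocr_text = "ACME Corp\nInvoice No: INV123\nTotal: $1,042.50\n"
fields = extract_fields(ocr_text)
```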

Posted 1 week ago


10.0 years

2 - 7 Lacs

Hyderābād

On-site

Key Responsibilities Solution Architecture & Development: Design end-to-end solutions using Node.JS (backend) and Vue.JS (frontend) for custom portals and administration interfaces. Integrate Azure AI services , Google OCR , and Azure OCR into client workflows. AI/ML Engineering: Develop and optimize vision-based AI models ( Layout Parsing/LP, Layout Inference/LI, Layout Transformation/LT ) using Python . Implement NLP pipelines for document extraction, classification, and data enrichment. Cloud & Database Management: Architect and optimize MongoDB databases hosted on Azure for scalability, security, and performance. Manage cloud infrastructure (Azure) for AI workloads, including containerization and serverless deployments. Technical Leadership: Lead cross-functional teams (AI engineers, DevOps, BAs) in solution delivery. Troubleshoot complex technical issues in OCR accuracy, AI model drift, or system integration. Client Enablement: Advise clients on technical best practices for scaling AI solutions. Document architectures, conduct knowledge transfers, and mentor junior engineers. Required Technical Expertise Frontend/Portal: Vue.JS (advanced components, state management), Node.JS (Express, REST/GraphQL APIs). AI/ML Stack: Python (PyTorch/TensorFlow), Azure AI (Cognitive Services, Computer Vision), NLP techniques (NER, summarization). Layout Engineering : LP/LI/LT for complex documents (invoices, contracts). OCR Technologies: Production experience with Google Vision OCR and Azure Form Recognizer . Database & Cloud: MongoDB (sharding, aggregation, indexing) hosted on Azure (Cosmos DB, Blob Storage, AKS). Infrastructure-as-Code (Terraform/Bicep), CI/CD pipelines (Azure DevOps). Experience: 10+ years in software development, including 5+ years specializing in AI/ML, OCR, or document automation . Proven track record deploying enterprise-scale solutions in cloud environments (Azure preferred). 
Preferred Qualifications Certifications: Azure Solutions Architect Expert, MongoDB Certified Developer, or Google Cloud AI/ML. Experience with alternative OCR tools (ABBYY, Tesseract) or AI platforms (GCP Vertex AI, AWS SageMaker). Knowledge of DocuSign CLM, Coupa, or SAP Ariba integrations. Familiarity with Kubernetes, Docker, and MLOps practices.
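The document-extraction work this role describes ultimately reduces to pulling structured fields out of OCR output. Below is a minimal, illustrative Python sketch of that post-OCR step, assuming plain text already returned by an OCR engine; the field names and regex patterns are hypothetical, and a production pipeline would consume structured output from Azure Form Recognizer or Google Vision OCR instead:

```python
import re

# Toy post-OCR field extraction. The field names and patterns are
# hypothetical; real pipelines rely on the OCR engine's structured output.
FIELD_PATTERNS = {
    "invoice_number": re.compile(r"Invoice\s*#?\s*[:\-]?\s*(\w[\w\-]*)", re.I),
    "total": re.compile(r"Total\s*[:\-]?\s*\$?([\d,]+\.\d{2})", re.I),
    "date": re.compile(r"Date\s*[:\-]?\s*(\d{4}-\d{2}-\d{2})", re.I),
}

def extract_fields(ocr_text: str) -> dict:
    """Return the first match per known field, or None when absent."""
    out = {}
    for name, pattern in FIELD_PATTERNS.items():
        m = pattern.search(ocr_text)
        out[name] = m.group(1) if m else None
    return out

sample = "Invoice # INV-2041\nDate: 2025-06-01\nTotal: $1,284.50"
print(extract_fields(sample))
```

The dictionary-of-patterns shape extends naturally to additional fields until accuracy demands hand the matching over to the OCR service itself.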

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

We are seeking a highly skilled AWS MLOps Engineer with 5 years of overall experience, including 3 years as an ML Engineer, particularly in building and managing ML pipelines in AWS. The ideal candidate has successfully built and deployed at least two MLOps projects using Amazon SageMaker or similar services, with a strong foundation in infrastructure as code and a keen understanding of MLOps best practices. Key Responsibilities: Maintain and enhance existing ML pipelines in AWS with a focus on Infrastructure as Code using CloudFormation. Implement minimal but essential pipeline extensions to support ongoing data science workstreams. Document infrastructure usage, architecture, and design using tools like Confluence, GitHub Wikis, and system diagrams. Act as the internal infrastructure expert, collaborating with data scientists to guide and support model deployments. Research and implement optimization strategies for ML workflows and infrastructure. Work independently and collaboratively with cross-functional teams to support ML product deployment and re-platforming initiatives. Qualifications 5+ years of hands-on DevOps experience with AWS Cloud. Proven experience with at least two MLOps projects deployed using SageMaker or similar AWS services. Strong proficiency in AWS services: SageMaker, ECR, S3, Lambda, Step Functions. Expertise in Infrastructure as Code using CloudFormation for dev/test/prod environments. Solid understanding of MLOps best practices and Data Science principles. Proficient in Python for scripting and automation. Experience building and managing Docker images. Hands-on experience with Git-based version control systems such as AWS CodeCommit or GitHub, including GitHub Actions for CI/CD pipelines. Job Type: Full-time Schedule: Day shift Experience: MLOps: 3 years (Required) SageMaker: 2 years (Required) Work Location: In person
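As a sketch of the Infrastructure-as-Code practice this role centres on, the snippet below assembles a minimal CloudFormation template in Python. The resource names and bucket suffix are hypothetical; this illustrates templating infrastructure in code, not a hardened production stack:

```python
import json

def make_ml_bucket_template(bucket_suffix: str) -> dict:
    """Assemble a minimal CloudFormation template for an S3 artifact
    bucket used by an ML pipeline. Names are illustrative only."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "Artifact bucket for an ML pipeline (sketch)",
        "Resources": {
            "ModelArtifactBucket": {
                "Type": "AWS::S3::Bucket",
                "Properties": {
                    # Fn::Sub lets the deployed name track the stack name.
                    "BucketName": {"Fn::Sub": f"${{AWS::StackName}}-{bucket_suffix}"},
                    "VersioningConfiguration": {"Status": "Enabled"},
                },
            }
        },
        "Outputs": {
            "ArtifactBucketName": {"Value": {"Ref": "ModelArtifactBucket"}}
        },
    }

template = make_ml_bucket_template("model-artifacts")
print(json.dumps(template, indent=2))
```

Generating templates this way keeps dev/test/prod variants as function arguments rather than hand-edited copies, which is the point of the IaC discipline the listing asks for.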

Posted 1 week ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

About the Role: Grade Level (for internal use): 10 S&P Global Commodity Insights The Role: Senior Cloud Engineer The Location: Hyderabad, Gurgaon The Team: The Cloud Engineering Team is responsible for designing, implementing, and maintaining cloud infrastructure that supports various applications and services within the S&P Global Commodity Insights organization. This team collaborates closely with data science, application development, and security teams to ensure the reliability, security, and scalability of our cloud solutions. The Impact: As a Cloud Engineer, you will play a vital role in deploying and managing cloud infrastructure that supports our strategic initiatives. Your expertise in AWS and cloud technologies will help streamline operations, enhance service delivery, and ensure the security and compliance of our environments. What’s in it for you: This position offers the opportunity to work on cutting-edge cloud technologies and collaborate with various teams across the organization. You will gain exposure to multiple S&P Commodity Insights Divisions and contribute to projects that have a significant impact on the business. This role opens doors for tremendous career opportunities within S&P Global. Responsibilities: Design and deploy cloud infrastructure using core AWS services such as EC2, S3, RDS, IAM, VPC, and CloudFront, ensuring high availability and fault tolerance. Deploy, manage, and scale Kubernetes clusters using Amazon EKS, ensuring high availability, secure networking, and efficient resource utilization. Develop secure, compliant AWS environments by configuring IAM roles/policies, KMS encryption, security groups, and VPC endpoints. Configure logging, monitoring, and alerting with CloudWatch, CloudTrail, and GuardDuty to support observability and incident response. Enforce security and compliance controls via IAM policy audits, patching schedules, and automated backup strategies. 
Monitor infrastructure health, respond to incidents, and maintain SLAs through proactive alerting and runbook execution. Collaborate with data science teams to deploy machine learning models using Amazon SageMaker, managing model training, hosting, and monitoring. Automate and schedule data processing workflows using AWS Glue, Step Functions, Lambda, and EventBridge to support ML pipelines. Optimize infrastructure for cost and performance using AWS Compute Optimizer, CloudWatch metrics, auto-scaling, and Reserved Instances/Savings Plans. Write and maintain Infrastructure as Code (IaC) using Terraform or AWS CloudFormation for repeatable, automated infrastructure deployments. Implement disaster recovery, backups, and versioned deployments using S3 versioning, RDS snapshots, and CloudFormation change sets. Set up and manage CI/CD pipelines using AWS services like CodePipeline, CodeBuild, and CodeDeploy to support application and model deployments. Manage and optimize real-time inference pipelines using SageMaker Endpoints, Amazon Bedrock, and Lambda with API Gateway to ensure reliable, scalable model serving. Support containerized AI workloads using Amazon ECS or EKS, including model serving and microservices for AI-based features. Collaborate with SecOps and SRE teams to uphold security baselines, manage change control, and conduct root cause analysis for outages. Participate in code reviews, design discussions, and architectural planning to ensure scalable and maintainable cloud infrastructure. Maintain accurate and up-to-date infrastructure documentation, including architecture diagrams, access control policies, and deployment processes. Collaborate cross-functionally with application, data, and security teams to align cloud solutions with business and technical goals. Stay current with AWS and AI/ML advancements, suggesting improvements or new service adoption where applicable. 
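One responsibility above, serving real-time inference through Lambda and API Gateway in front of a SageMaker endpoint, can be sketched as follows. The endpoint call is injected as a function so the handler runs without AWS credentials; in production that argument would wrap boto3.client("sagemaker-runtime").invoke_endpoint. The payload shape and field names are hypothetical:

```python
import json

def lambda_handler(event, context, invoke=None):
    """Sketch of a Lambda fronting a SageMaker endpoint behind API
    Gateway. `invoke` is injected for local testability; field names
    are illustrative."""
    body = json.loads(event.get("body") or "{}")
    features = body.get("features")
    if features is None:
        return {"statusCode": 400,
                "body": json.dumps({"error": "missing 'features'"})}
    prediction = invoke(json.dumps({"instances": [features]}))
    return {"statusCode": 200,
            "body": json.dumps({"prediction": prediction})}

# Stub standing in for the real endpoint call during local testing.
def fake_invoke(payload: str):
    return sum(json.loads(payload)["instances"][0])  # dummy "model"

resp = lambda_handler({"body": json.dumps({"features": [1, 2, 3]})},
                      None, invoke=fake_invoke)
print(resp)
```

Injecting the client is one way to unit-test the request/response plumbing separately from the model serving itself.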
What We’re Looking For: Strong understanding of cloud infrastructure, particularly AWS services and Kubernetes. Proven experience in deploying and managing cloud solutions in a collaborative Agile environment. Ability to present technical concepts to both business and technical audiences. Excellent multi-tasking skills and the ability to manage multiple projects under tight deadlines. Basic Qualifications: BA/BS in computer science, information technology, or a related field. 5+ years of experience in cloud engineering or related roles, specifically with AWS. Experience with Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation. Knowledge of container orchestration and microservices architecture. Familiarity with security best practices in cloud environments. Preferred Qualifications: Extensive Hands-on Experience with AWS Services. Excellent problem-solving skills and the ability to work independently as well as part of a team. Strong communication skills and the ability to influence stakeholders at all levels. Experience with greenfield projects and building cloud infrastructure from scratch. About S&P Global Commodity Insights At S&P Global Commodity Insights, our complete view of global energy and commodities markets enables our customers to make decisions with conviction and create long-term, sustainable value. We’re a trusted connector that brings together thought leaders, market participants, governments, and regulators to co-create solutions that lead to progress. Vital to navigating Energy Transition, S&P Global Commodity Insights’ coverage includes oil and gas, power, chemicals, metals, agriculture and shipping. S&P Global Commodity Insights is a division of S&P Global (NYSE: SPGI). S&P Global is the world’s foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. 
With every one of our offerings, we help many of the world’s leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit http://www.spglobal.com/commodity-insights . What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. 
We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. 
----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 315801 Posted On: 2025-06-05 Location: Hyderabad, Telangana, India

Posted 1 week ago

Apply

1.0 years

0 - 0 Lacs

India

On-site

Responsibilities: Design and implement scalable, maintainable, and high-performance software systems. Define architecture for AI-enabled applications and services. Develop and deploy machine learning models using Python libraries (e.g., TensorFlow, PyTorch, Scikit-learn). Evaluate model performance and fine-tune algorithms based on business requirements. Integrate AI models into production environments. Write clean, modular, and efficient Python code. Use Python for backend APIs, data processing pipelines, and scripting automation. Utilize frameworks like Django, Flask, or FastAPI when needed. Design and manage data pipelines and ETL processes for AI applications. Work with databases (SQL and NoSQL), APIs, and cloud storage systems. Conduct code reviews to ensure adherence to best practices and coding standards. Maintain and improve existing codebases. Build and train custom models for NLP, computer vision, recommendation systems, etc. Perform exploratory data analysis and feature engineering. Stay updated with the latest AI research and implement new techniques. Propose innovative AI-driven solutions to business problems. Mentor junior developers and guide them in AI and software development. Lead technical discussions and peer-learning sessions. Collaborate with product managers, designers, and other stakeholders to define requirements and timelines. Break down complex tasks into manageable deliverables and delegate accordingly. Participate in agile ceremonies (daily standups, sprint planning, retrospectives). Ensure timely delivery of project milestones. Deploy AI services using cloud platforms (AWS, Azure, GCP). Use Docker, Kubernetes, CI/CD tools for automated deployment and scalability.
Python Libraries: NumPy, Pandas, TensorFlow, PyTorch, Scikit-learn, OpenCV, NLTK, SpaCy Web: Flask, FastAPI, Django Data/Storage: PostgreSQL, MongoDB, Redis, S3 DevOps: Docker, Git, Jenkins, Kubernetes Cloud: AWS SageMaker, Azure ML, Google AI Platform Job Types: Full-time, Permanent Pay: ₹35,000.00 - ₹40,000.00 per month Benefits: Paid sick time Schedule: Day shift Weekend availability Supplemental Pay: Yearly bonus Experience: total work: 1 year (Required) Work Location: In person Application Deadline: 22/06/2025 Expected Start Date: 25/06/2025
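A toy end-to-end illustration of the train/evaluate loop such a role automates, in pure Python so it stays self-contained. A real implementation would use the scikit-learn, TensorFlow, or PyTorch libraries named above; the nearest-centroid "model" here is a deliberate stand-in:

```python
import math
from collections import defaultdict

def train_nearest_centroid(X, y):
    """Store the mean feature vector per class - a deliberately tiny
    stand-in for a real model so the pipeline shape stays visible."""
    sums = {}
    counts = defaultdict(int)
    for xi, yi in zip(X, y):
        if yi not in sums:
            sums[yi] = list(xi)
        else:
            sums[yi] = [a + b for a, b in zip(sums[yi], xi)]
        counts[yi] += 1
    return {c: [v / counts[c] for v in s] for c, s in sums.items()}

def predict(centroids, x):
    """Label of the nearest class centroid (Euclidean distance)."""
    return min(centroids, key=lambda c: math.dist(centroids[c], x))

# Toy training data: two well-separated classes.
X = [[0.0, 0.1], [0.2, 0.0], [1.0, 1.1], [0.9, 1.0]]
y = ["neg", "neg", "pos", "pos"]
model = train_nearest_centroid(X, y)
print(predict(model, [0.95, 1.05]))
```

Swapping the stand-in for a real estimator leaves the train/predict seam unchanged, which is what makes the surrounding pipeline automatable.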

Posted 1 week ago

Apply


1.0 years

3 - 10 Lacs

Bengaluru

On-site

This role is for one of our clients Industry: Technology, Information and Media Seniority level: Associate level Min Experience: 1 year Location: Bengaluru JobType: full-time We’re looking for a motivated and innovative AI Developer with 1 to 3 years of experience to join our expanding AI team. This role is perfect for those eager to apply AI and machine learning techniques to build impactful, scalable solutions that drive real business value. You will collaborate closely with data scientists, developers, and product owners to bring intelligent capabilities into our products and services. What You’ll Be Doing Develop and deploy AI and machine learning models tailored to solve practical, complex challenges across various business domains. Partner with cross-functional teams to understand project goals, translate requirements into technical solutions, and deliver AI-enhanced features. Utilize popular ML frameworks like TensorFlow, PyTorch, and scikit-learn to build models for prediction, classification, recommendation, and more. Design and optimize deep learning architectures, leveraging large and diverse datasets for maximum performance. Build automated model training, validation, and deployment pipelines to ensure reliability and scalability in production environments. Continuously monitor model outputs, analyze performance metrics, and fine-tune algorithms to improve accuracy and efficiency. Keep abreast of emerging AI trends and research, experimenting with new tools and techniques to enhance current workflows. Collaborate on cloud-based AI infrastructure using AWS, Azure, or Google Cloud to deploy and scale models. Write clear, maintainable code with proper documentation and adhere to software engineering best practices. Who You Are A Bachelor’s or Master’s graduate in Computer Science, AI, Data Science, or a related technical field. 1–3 years of practical experience developing machine learning or AI solutions in a professional setting.
Skilled in Python programming with hands-on experience in ML libraries such as TensorFlow, PyTorch, Keras, or scikit-learn. Solid grasp of core concepts in algorithms, data structures, and statistics as they apply to AI and ML. Exposure to specialized domains such as natural language processing (NLP), computer vision, or recommendation systems is highly desirable. Experience in data preprocessing, feature engineering, and robust model evaluation methodologies. Familiarity with software development lifecycle including version control (Git), automated testing, and CI/CD pipelines. Experience working with cloud ML platforms such as Amazon SageMaker, Google AI Platform, or Azure Machine Learning is a plus. Strong analytical mindset and problem-solving skills, with meticulous attention to detail. Excellent communication skills, comfortable collaborating across technical and non-technical teams.
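The "robust model evaluation methodologies" mentioned above usually start with precision, recall, and F1. A from-scratch sketch of those metrics (in practice sklearn.metrics provides them):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Binary-classification precision, recall, and F1 computed from
    scratch; in practice sklearn.metrics covers this."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f = precision_recall_f1([1, 0, 1, 1, 0], [1, 1, 1, 0, 0])
print(round(p, 3), round(r, 3), round(f, 3))
```

Guarding each division keeps the degenerate no-positives case well-defined, a detail interviewers for roles like this tend to probe.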

Posted 1 week ago

Apply

3.0 - 6.0 years

0 - 0 Lacs

Salem

On-site

Job description About The Role: As a Subject Matter Expert (SME) in Data Annotation, you will play a critical role in ensuring the highest quality of data labelling across various projects. As a technical and domain expert, you will mentor annotation teams, establish annotation guidelines, conduct quality audits, and support client and internal teams with domain-specific insights. Tools Experience Expected: CVAT, Amazon SageMaker, BasicAI, LabelStudio, SuperAnnotate, Loft, Cogito, Roboflow, Slicer3D, Mindkosh, Kognic, Praat Annotation Expertise Areas: Image, Video: Bounding Box, Polygon, Semantic Segmentation, Keypoints 3D Point Cloud: LiDAR Annotation, 3D Cuboids, Semantic Segmentation Audio Annotation: Speech, Noise Labelling, Transcription Text Annotation: NER, Sentiment Analysis, Intent Detection, NLP tasks Exposure to LLMs and Generative AI data annotation tasks (prompt generation, evaluation) Key Responsibilities: Act as a Subject Matter Expert to guide annotation standards, processes, and best practices. Create, refine, and maintain detailed annotation guidelines and ensure adherence across teams. Conduct quality audits and reviews to maintain high annotation accuracy and consistency. Provide domain-specific training to Data Annotators and Team Leads. Collaborate closely with Project Managers, Data Scientists, and Engineering teams for dataset quality assurance. Resolve complex annotation issues and edge cases with data-centric solutions. Stay current with advancements in AI/ML and annotation technologies and apply innovative methods. Support pre-sales and client discussions as an annotation domain expert, when required.
Key Performance Indicators (KPIs): Annotation quality and consistency across projects Successful training and upskilling of annotation teams Timely resolution of annotation queries and technical challenges Documentation of guidelines, standards Client satisfaction on annotation quality benchmarks Qualifications: Bachelor's or master's degree in a relevant field (Computer Science, AI/ML, Data Science, Linguistics, Engineering, etc.) 3–6 years of hands-on experience in data annotation, with exposure to multiple domains (vision, audio, text, 3D). Deep understanding of annotation processes, tool expertise, and quality standards. Prior experience in quality control, QA audits, or SME role in annotation projects. Strong communication skills to deliver training, documentation, and client presentations. Familiarity with AI/ML workflows, data preprocessing, and dataset management concepts is highly desirable. Work Location: In-person (Salem, Tamil Nadu) Schedule: Day Shift Monday to Saturday Weekend availability required Supplemental Pay: Overtime pay Performance bonus Shift allowance Yearly bonus Languages Required: Tamil (oral communication required), English, Hindi (preferred) Contact: 9489979523 (HR) Job Type: Full-time Pay: ₹25,000.00 - ₹30,000.00 per month Schedule: Day shift Experience: data annotation: 2 years (Preferred) Work Location: In person Apply Now
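Quality audits of bounding-box annotation, one of the expertise areas listed above, typically score annotator output against a gold set with Intersection-over-Union (IoU). A minimal sketch with illustrative coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-Union for axis-aligned boxes given as
    (x_min, y_min, x_max, y_max); the coordinates below are
    illustrative."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (empty if the boxes do not intersect).
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    if inter == 0:
        return 0.0
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175
```

An audit would then flag annotations whose IoU against the gold box falls below an agreed threshold (0.5 and 0.75 are common choices in detection benchmarks).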

Posted 1 week ago

Apply

5.0 - 8.0 years

7 - 8 Lacs

Ahmedabad

On-site

Senior Full Stack Developer (Python, JavaScript, AWS, Cloud Services, Azure) Ahmedabad, India; Hyderabad, India Information Technology 315432 Job Description About The Role: Grade Level (for internal use): 10 The Team: S&P Global is a global market leader in providing information, analytics and solutions for industries and markets that drive economies worldwide. The Market Intelligence (MI) division is the largest division within the company. This is an opportunity to join the MI Data and Research’s Data Science Team which is dedicated to developing cutting-edge Data Science and Generative AI solutions. We are a dynamic group that thrives on innovation and collaboration, working together to push the boundaries of technology and deliver impactful solutions. Our team values inclusivity, continuous learning, and the sharing of knowledge to enhance our collective expertise. Responsibilities and Impact: Develop and productionize cloud-based services and full-stack applications utilizing NLP solutions, including GenAI models. Implement and manage CI/CD pipelines to ensure efficient and reliable software delivery. Automate cloud infrastructure using Terraform. Write unit tests, integration tests and performance tests Work in a team environment using agile practices Support administration of Data Science experimentation environment including AWS SageMaker and Nvidia GPU servers Monitor and optimize application performance and infrastructure costs. Collaborate with data scientists and other developers to integrate and deploy data science models into production environments Educate others to improve coding standards, code quality, test coverage, and documentation Work closely with cross-functional teams to ensure seamless integration and operation of services. What We’re Looking For : Basic Required Qualifications : 5-8 years of experience in software engineering Proficiency in Python and JavaScript for full-stack development.
Experience in writing and maintaining high-quality code, utilizing techniques like unit testing and code reviews. Strong understanding of object-oriented design and programming concepts. Strong experience with AWS cloud services, including EKS, Lambda, and S3. Knowledge of Docker containers and orchestration tools including Kubernetes. Experience with monitoring, logging, and tracing tools (e.g., Datadog, Kibana, Grafana). Knowledge of message queues and event-driven architectures (e.g., AWS SQS, Kafka). Experience with CI/CD pipelines in Azure DevOps and GitHub Actions. Additional Preferred Qualifications: Experience writing front-end web applications using JavaScript and React. Familiarity with infrastructure as code (IaC) using Terraform. Experience in Azure or GCP cloud services. Proficiency in C# or Java. Experience with SQL and NoSQL databases. Knowledge of Machine Learning concepts. Experience with Large Language Models. About S&P Global Market Intelligence At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence. What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you.
S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. - Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf - 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning) Job ID: 315432 Posted On: 2025-06-02 Location: Ahmedabad, Gujarat, India
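The posting above stresses unit testing alongside event-driven AWS patterns (Lambda, SQS). As a minimal sketch, here is a hypothetical Lambda-style handler that processes an SQS batch, with the kind of unit test the role calls for; the event shape follows the standard SQS record format, but the handler name, `amount` field, and totals logic are invented for illustration:

```python
import json

def handler(event, context=None):
    """Hypothetical Lambda handler: sums 'amount' fields from SQS records."""
    total = 0.0
    for record in event.get("Records", []):
        body = json.loads(record["body"])
        total += float(body.get("amount", 0))
    return {"statusCode": 200, "body": json.dumps({"total": total})}

# A minimal unit test: build a fake SQS event and assert on the response.
event = {"Records": [{"body": json.dumps({"amount": 10.5})},
                     {"body": json.dumps({"amount": 4.5})}]}
result = handler(event)
assert json.loads(result["body"])["total"] == 15.0
```

Because the handler is a pure function of its event, it can be tested locally with no AWS dependencies, which is what makes the CI/CD pipelines the role describes cheap to run.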

Posted 1 week ago

Apply

7.0 years

5 - 8 Lacs

Jaipur

On-site

ABOUT HAKKODA Hakkoda, an IBM Company, is a modern data consultancy that empowers data-driven organizations to realize the full value of the Snowflake Data Cloud. We provide consulting and managed services in data architecture, data engineering, analytics and data science. We are renowned for bringing our clients deep expertise, being easy to work with, and being an amazing place to work! We are looking for curious and creative individuals who want to be part of a fast-paced, dynamic environment, where everyone’s input and efforts are valued. We hire outstanding individuals and give them the opportunity to thrive in a collaborative atmosphere that values learning, growth, and hard work. Our team is distributed across North America, Latin America, India and Europe. If you have the desire to be a part of an exciting, challenging, and rapidly-growing Snowflake consulting services company, and if you are passionate about making a difference in this world, we would love to talk to you! As an AWS Managed Services Architect, you will play a pivotal role in architecting and optimizing the infrastructure and operations of a complex Data Lake environment for BOT clients. You’ll leverage your strong expertise with AWS services to design, implement, and maintain scalable and secure data solutions while driving best practices. You will work collaboratively with delivery teams across the U.S., Costa Rica, Portugal, and other regions, ensuring a robust and seamless Data Lake architecture. In addition, you’ll proactively engage with clients to support their evolving needs, oversee critical AWS infrastructure, and guide teams toward innovative and efficient solutions. This role demands a hands-on approach, including designing solutions, troubleshooting, optimizing performance, and maintaining operational excellence. Role Description AWS Data Lake Architecture: Design, build, and support scalable, high-performance architectures for complex AWS Data Lake solutions.
AWS Services Expertise: Deploy and manage cloud-native solutions using a wide range of AWS services, including but not limited to: Amazon EMR (Elastic MapReduce): Optimize and maintain EMR clusters for large-scale big data processing. AWS Batch: Design and implement efficient workflows for batch processing workloads. Amazon SageMaker: Enable data science teams with scalable infrastructure for model training and deployment. AWS Glue: Develop ETL/ELT pipelines using Glue to ensure efficient data ingestion and transformation. AWS Lambda: Build serverless functions to automate processes and handle event-driven workloads. IAM Policies: Define and enforce fine-grained access controls to secure cloud resources and maintain governance. AWS IoT & Timestream: Design scalable solutions for collecting, storing, and analyzing time-series data. Amazon DynamoDB: Build and optimize high-performance NoSQL database solutions. Data Governance & Security: Implement best practices to ensure data privacy, compliance, and governance across the data architecture. Performance Optimization: Monitor, analyze, and tune AWS resources for performance efficiency and cost optimization. Develop and manage Infrastructure as Code (IaC) using AWS CloudFormation, Terraform, or equivalent tools to automate infrastructure deployment. Client Collaboration: Work closely with stakeholders to understand business objectives and ensure solutions align with client needs. Team Leadership & Mentorship: Provide technical guidance to delivery teams through design reviews, troubleshooting, and strategic planning. Continuous Innovation: Stay current with AWS service updates, industry trends, and emerging technologies to enhance solution delivery. Documentation & Knowledge Sharing: Create and maintain architecture diagrams, SOPs, and internal/external documentation to support ongoing operations and collaboration. Qualifications 7+ years of hands-on experience in cloud architecture and infrastructure (preferably AWS).
3+ years of experience specifically in architecting and managing Data Lake or big data solutions on AWS. Bachelor’s Degree in Computer Science, Information Systems, or a related field (preferred). AWS Certifications such as Solutions Architect Professional or Big Data Specialty. Experience with Snowflake, Matillion, or Fivetran in hybrid cloud environments. Familiarity with Azure or GCP cloud platforms. Understanding of machine learning pipelines and workflows. Technical Skills: Expertise in AWS services such as EMR, Batch, SageMaker, Glue, Lambda, IAM, IoT Timestream, DynamoDB, and more. Strong programming skills in Python for scripting and automation. Proficiency in SQL and performance tuning for data pipelines and queries. Experience with IaC tools like Terraform or CloudFormation. Knowledge of big data frameworks such as Apache Spark, Hadoop, or similar. Data Governance & Security: Proven ability to design and implement secure solutions, with strong knowledge of IAM policies and compliance standards. Problem-Solving: Analytical and problem-solving mindset to resolve complex technical challenges. Collaboration: Exceptional communication skills to engage with technical and non-technical stakeholders. Ability to lead cross-functional teams and provide mentorship. Benefits: Health Insurance, Paid leave, Technical training and certifications, Robust learning and development opportunities, Incentives, Toastmasters, Food Program, Fitness Program, Referral Bonus Program. Hakkoda is committed to fostering diversity, equity, and inclusion within our teams. A diverse workforce enhances our ability to serve clients and enriches our culture. We encourage candidates of all races, genders, sexual orientations, abilities, and experiences to apply, creating a workplace where everyone can succeed and thrive. Ready to take your career to the next level? 🚀 💻 Apply today 👇 and join a team that’s shaping the future!
Hakkoda has been acquired by IBM and will be integrated into the IBM organization; Hakkoda will be the hiring entity. By proceeding with this application, you understand that Hakkoda will share your personal information with other IBM subsidiaries involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here.
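To make the Glue-style ETL responsibilities above concrete, here is a small, hypothetical pure-Python transform of the kind a Glue job step might apply during data ingestion; the field names (`device_id`, `ts`, `value`) and the drop-malformed-records rule are invented for the sketch, not taken from the posting:

```python
from datetime import datetime

def transform(raw_events):
    """Normalize raw events into analytics-ready rows, dropping bad records."""
    rows = []
    for e in raw_events:
        try:
            rows.append({
                "device_id": e["device_id"].strip().lower(),  # canonicalize keys
                "ts": datetime.fromisoformat(e["ts"]).isoformat(),  # validate timestamps
                "value": float(e["value"]),  # enforce numeric type
            })
        except (KeyError, ValueError):
            continue  # in a real pipeline, quarantine instead of silently skipping
    return rows

raw = [
    {"device_id": " Sensor-1 ", "ts": "2024-05-01T10:00:00", "value": "3.5"},
    {"device_id": "sensor-2", "ts": "not-a-date", "value": "1.0"},  # dropped
]
clean = transform(raw)
assert len(clean) == 1 and clean[0]["device_id"] == "sensor-1"
```

In an actual Glue job this logic would run over DynamicFrames or Spark DataFrames at scale, but the normalize-validate-quarantine shape is the same.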

Posted 1 week ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About this Opportunity It will be practically impossible for human brains to understand how to run and optimize the next generation of wireless networks, i.e., 5G networks with distributed edge compute, that will drive economic and social transformation for all aspects of society. Machine Learning (ML) and other Artificial Intelligence (AI) technologies will be vital for us to handle this opportunity. We are setting up an AI Accelerator Hub in India to fast-track our strategy execution. Machine Intelligence, the combination of Machine Learning and other Artificial Intelligence technologies, is what Ericsson uses to drive thought leadership to automate and transform Ericsson offerings and operations. MI is also a key competence to enable new and emerging business. This includes development of models, frameworks and infrastructure where our advancements push the technology frontiers. We engage in both academic and industry collaborations and drive the digitalization of Ericsson and the industry by developing state-of-the-art solutions that simplify and automate processes in our products and services and build new value through data insights. Ericsson is now looking for Senior Data Scientists to significantly expand its global team for AI acceleration for our group in Bangalore. Do you have an in-depth understanding of Machine Learning and AI technologies? Do you want to apply and extend those skills to solve real, complex problems with high societal impact, going beyond ML/AI for consumption and advertising? Then you do want to join Ericsson’s global team of Engineers/Scientists pushing the technology frontiers to automate, simplify and add new value through large and complex data. What you will do As a Senior Data Scientist, you will need to have deep knowledge of data science and Machine Learning tools and technologies backed with strong programming skills to implement them.
Your knowledge and experience in Data Science methodologies will be applied to solve challenging real-world problems as part of a highly dynamic and global team. You will work in a highly collaborative environment where you communicate and plan tasks and ideas. You will be working on high impact initiatives with other Data Scientists in Machine Intelligence to drive growth and economic profitability for Ericsson and its customers by accelerating current Ericsson offerings. Your contribution will also help to create new offerings in the areas of MI-driven 4G and 5G networks, distributed cloud, IoT and other emerging businesses. Key Responsibilities: Lead AI/ML feature/capability in a certain functional area of product/business. Define the business metrics of success for AI/ML projects and translate them into model metrics. Lead end-to-end development and deployment of Generative AI solutions (e.g., LLMs, diffusion models, RAG pipelines) for enterprise-scale use cases using MLOps platforms like Amazon SageMaker and Bedrock. Design and implement architectures for vector search, embedding models, and Retrieval-Augmented Generation (RAG) systems using modern frameworks and tools (e.g., FAISS, Pinecone, LangChain). Fine-tune and evaluate large language models (LLMs) for domain-specific tasks such as text summarization, Q&A, content generation, and chat interfaces. Collaborate with product, engineering, and business stakeholders to translate vague problems into concrete Generative AI use cases with measurable impact. Develop secure, scalable, and production-grade AI pipelines for both batch and real-time generative use cases. Ensure ethical and responsible AI practices, including bias detection, explainability, and content safety in generative outputs. Mentor junior team members in GenAI frameworks, best practices, and model performance tuning.
Stay current with research and industry trends in Generative AI, LLMs, prompt engineering, and multimodal AI, applying cutting-edge techniques to business problems. Contribute to internal AI governance, tooling frameworks, and reusable components for accelerating GenAI development across teams. Work with huge datasets including petabytes of 4G/5G-networks, IoT and exogenous data Propose/select/test predictive models, recommendation engines, anomaly detection systems, statistical models, deep learning, computer vision, text mining, reinforcement learning and other ML systems Define the visualization and dashboarding requirements working closely with the business stakeholders Build proof-of-concepts for business opportunities using AI/ML and present them to business Lead functional and technical analysis within Ericsson businesses to define AI/ML driven business opportunities and build appropriate solutions Work with multiple data sources (internal/external as well as structured/unstructured) and apply the right feature engineering to AI based models Lead studies and creative usage of new and/or existing data sources. Work with Data Architects to leverage existing data models and build new ones as needed. What you will bring Minimum Experience Required - 7 years Job Location - Bangalore Bachelors/Masters/Ph.D. in Computer Science, Data Science, Artificial Intelligence, Machine Learning, Electrical Engineering or related disciplines from any of the reputed institutes. First Class, preferably with Distinction. 
Applied experience: 3+ years of ML and/or AI production-level experience. Strong programming skills (R/Python), with proficiency in at least one. Proven ability to lead AI/ML projects end-to-end with complete ownership. Able to map the business metrics of success for AI/ML projects to model metrics. Strong grounding in the mathematics, probability and statistics needed for data analysis and experiments. Strong hands-on experience with exploratory data analysis and visualization techniques. Strong hands-on experience with current Machine Learning frameworks such as Python, R, H2O, Keras, TensorFlow, Spark ML, etc. Ability to implement new algorithms and methodologies from leading open source initiatives and research papers. Ability to work with semi-structured and unstructured data sets for AI/ML models. Strong understanding of building AI models using Deep Neural Networks. Strong hands-on experience in implementing a variety of Machine Learning techniques including model selection, validation, regularization and hyperparameter tuning. Good knowledge of using ensembles and stacking techniques to solve complex ML problems. Hands-on experience with Big Data technologies such as Hadoop, Cassandra, etc. Able to source data from multiple data sources and combine them appropriately to build ML models. Preferred Qualifications: Good communication skills in written and spoken English. Creativity and ability to formulate problems and solve them independently. MI MOOC certifications, a plus. Applications/domain knowledge in Telecommunication and/or IoT, a plus. Experience with data visualization and dashboard creation is a plus. Knowledge of Cognitive models is a plus. Ability to work independently with high energy, enthusiasm and persistence. Experience in partnering and collaborative co-creation, i.e., working with complex multiple-stakeholder business units, global customers, technology and other ecosystem partners in a multi-culture, global matrix organization with sensitivity and persistence.
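The RAG responsibilities described above reduce, at their core, to retrieving the documents most similar to a query embedding. A minimal sketch, using a bag-of-words counter as a stand-in for a real embedding model (production systems would use learned embeddings plus a vector index such as FAISS or Pinecone, and pass the hits to an LLM, all of which is omitted here):

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": term counts. A stand-in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    # Rank documents by similarity to the query; return the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = ["5G network slicing overview", "cafeteria menu for friday",
        "anomaly detection in radio networks"]
assert retrieve("network anomaly detection", docs)[0] == "anomaly detection in radio networks"
```

Swapping `embed` for a real model and the sort for an approximate-nearest-neighbour index is what turns this toy into the vector-search architecture the role describes.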

Posted 1 week ago

Apply

8.0 years

0 Lacs

Andhra Pradesh, India

On-site


Looking for an experienced Full Stack engineer who is passionate about application development. Primary responsibilities will include hands-on development of Medidata's software applications with Java, Clojure, Python, R, React/TypeScript, and Amazon Web Services. Responsibilities 8+ years of hands-on experience in Java, ready to shift gears to Clojure (a must) and Python as needed. Working experience with AWS (CloudWatch, Lambda, SageMaker; familiar with container orchestration on ECS/EKS). Proficient in React, including both class components and hooks. Strong understanding of Redux for state management (actions, reducers, middleware). Experienced with TypeScript in large-scale front-end applications. Skilled in building reusable components and managing component lifecycles. Familiar with async data flows and handling side effects (e.g., redux-thunk). Comfortable with type-safe patterns and strict typing in UI logic. Capable of writing unit and integration tests (Jest, React Testing Library). Knowledgeable in performance optimization and render lifecycle debugging. Develop, test, document, deploy and maintain applications in a production environment. Apply good technical practices such as continuous integration, test automation, and GitHub Pull Request reviews. Apply your experience with enterprise design patterns and principles, and distributed system architecture to implement product features. Work with Agile team members, collaborating with testers to ensure quality and with product managers to turn great ideas into detailed requirements. Promote and ensure the high quality, performance, and security of our solutions. Promote and implement DevOps, CI/CD and automation strategies.
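The Redux knowledge the posting asks for (actions, reducers, middleware) boils down to one idea: a reducer is a pure function from (state, action) to a new state, never a mutation. Sketched here in Python rather than JavaScript to keep this document's examples in one language; the action names and state shape are invented:

```python
def reducer(state, action):
    """Redux-style pure reducer: returns a NEW state dict, never mutates."""
    if action["type"] == "ADD_ITEM":
        return {**state, "items": state["items"] + [action["payload"]]}
    if action["type"] == "CLEAR":
        return {**state, "items": []}
    return state  # unknown actions leave state unchanged

state = {"items": []}
state = reducer(state, {"type": "ADD_ITEM", "payload": "x"})
assert state["items"] == ["x"]
assert reducer(state, {"type": "CLEAR"})["items"] == []
```

Because the reducer is pure, the same sequence of actions always replays to the same state, which is what makes Redux stores predictable and unit-testable with tools like Jest.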

Posted 1 week ago

Apply

12.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site


We are seeking an experienced DevOps/AIOps Architect to design, architect, and implement an AI-driven operations solution that integrates various cloud-native services across AWS, Azure, and cloud-agnostic environments. The AIOps platform will be used for end-to-end machine learning lifecycle management, automated incident detection, and root cause analysis (RCA). The architect will lead efforts in developing a scalable solution utilizing data lakes, event streaming pipelines, ChatOps integration, and model deployment services. This platform will enable real-time intelligent operations in hybrid cloud and multi-cloud setups. Responsibilities Assist in the implementation and maintenance of cloud infrastructure and services. Contribute to the development and deployment of automation tools for cloud operations. Participate in monitoring and optimizing cloud resources using AIOps and MLOps techniques. Collaborate with cross-functional teams to troubleshoot and resolve cloud infrastructure issues. Support the design and implementation of scalable and reliable cloud architectures. Conduct research and evaluation of new cloud technologies and tools. Work on continuous improvement initiatives to enhance cloud operations efficiency and performance. Document cloud infrastructure configurations, processes, and procedures. Adhere to security best practices and compliance requirements in cloud operations. Requirements Bachelor’s Degree in Computer Science, Engineering, or a related field. 12+ years of experience in DevOps, AIOps, or Cloud Architecture roles. Hands-on experience with AWS services such as SageMaker, S3, Glue, Kinesis, ECS, EKS. Strong experience with Azure services such as Azure Machine Learning, Blob Storage, Azure Event Hubs, Azure AKS. Strong experience with Infrastructure as Code (IaC) using Terraform/CloudFormation. Proficiency in container orchestration (e.g., Kubernetes) and experience with multi-cloud environments. Experience with machine learning model training,
deployment, and data management across cloud-native and cloud-agnostic environments. Expertise in implementing ChatOps solutions using platforms like Microsoft Teams and Slack, and integrating them with AIOps automation. Familiarity with data lake architectures, data pipelines, and inference pipelines using event-driven architectures. Strong programming skills in Python for rule management, automation, and integration with cloud services. Nice to have: Any certifications in the AI/ML/GenAI space.
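The automated incident detection at the heart of an AIOps platform is often, at its simplest, a statistical test over a metric stream: flag a point that deviates too far from its recent history. A hedged sketch using a rolling z-score (window size and threshold are illustrative defaults, not a recommendation):

```python
import statistics

def detect_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose value is > threshold stdevs from the rolling mean."""
    anomalies = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu = statistics.mean(hist)
        sigma = statistics.stdev(hist)
        if sigma and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A latency spike at index 6 stands out against the preceding window.
latency = [100, 102, 99, 101, 100, 98, 500, 101]
assert detect_anomalies(latency) == [6]
```

In a real platform the series would arrive via an event stream (Kinesis, Event Hubs) and the flagged index would open an incident or post to a ChatOps channel; the detection core stays this small.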

Posted 1 week ago

Apply

7.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Role Description Key Responsibilities: Cloud-Based Development: Design, develop, and deploy scalable solutions using AWS services such as S3, Kinesis, Lambda, Redshift, DynamoDB, Glue, and SageMaker. Data Processing & Pipelines: Implement efficient data pipelines and optimize data processing using pandas, Spark, and PySpark. Machine Learning Operations (MLOps): Work with model training, model registry, model deployment, and monitoring using AWS SageMaker and related services. Infrastructure-as-Code (IaC): Develop and manage AWS infrastructure using AWS CDK and CloudFormation to enable automated deployments. CI/CD Automation: Set up and maintain CI/CD pipelines using GitHub, AWS CodePipeline, and CodeBuild for streamlined development workflows. Logging & Monitoring: Implement robust monitoring and logging solutions using Splunk, DataDog, and AWS CloudWatch to ensure system performance and reliability. Code Optimization & Best Practices: Write high-quality, scalable, and maintainable Python code while adhering to software engineering best practices. Collaboration & Mentorship: Work closely with cross-functional teams, providing technical guidance and mentorship to junior developers. Qualifications & Requirements 7+ years of experience in software development with a strong focus on Python. Expertise in AWS services, including S3, Kinesis, Lambda, Redshift, DynamoDB, Glue, and SageMaker. Proficiency in Infrastructure-as-Code (IaC) tools like AWS CDK and CloudFormation. Experience with data processing frameworks such as pandas, Spark, and PySpark. Understanding of machine learning concepts, including model training, deployment, and monitoring. Hands-on experience with CI/CD tools such as GitHub, CodePipeline, and CodeBuild. Proficiency in monitoring and logging tools like Splunk and DataDog. Strong problem-solving skills, analytical thinking, and the ability to work in a fast-paced, collaborative environment. 
Preferred Skills & Certifications AWS Certifications (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer, AWS Certified Machine Learning). Experience with containerization (Docker, Kubernetes) and serverless architectures. Familiarity with big data technologies such as Apache Kafka, Hadoop, or AWS EMR. Strong understanding of distributed computing and scalable architectures. Skills: Python, MLOps, AWS
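Model monitoring, one of the MLOps duties listed above, frequently means checking whether live input distributions have drifted from the training baseline. One common tool is the Population Stability Index (PSI); this sketch uses fixed bin edges and the conventional rule-of-thumb thresholds (under 0.1 means stable, over 0.25 means significant drift), with all data invented for illustration:

```python
import math

def psi(expected, actual, bins):
    """Population Stability Index between two samples over given bin edges."""
    def frac(sample):
        counts = [0] * (len(bins) - 1)
        for x in sample:
            for i in range(len(bins) - 1):
                if bins[i] <= x < bins[i + 1]:
                    counts[i] += 1
                    break
        n = len(sample)
        return [max(c / n, 1e-6) for c in counts]  # floor avoids log(0)
    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]   # training-time scores
shifted = [0.7, 0.8, 0.9, 0.95, 0.85, 0.75]  # drifted live scores
bins = [0.0, 0.5, 1.0]
assert psi(baseline, baseline, bins) == 0.0   # identical distributions
assert psi(baseline, shifted, bins) > 0.25    # drift alarm threshold
```

Wired into CloudWatch, DataDog, or Splunk, a scheduled PSI check like this is what turns "monitor the model" into an actual alert.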

Posted 1 week ago

Apply

2.0 - 7.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Summary JOB DESCRIPTION The Software Developer codes software applications for pipeline operations and planning that provide critical operations support for client companies. The software developer should be a self-starter willing to delve into existing code as well as write new applications. This position includes fundamental product analysis and design, source code development, and project-driven development. The Liquids Management System (LMS) Suite of products is a total software solution for liquid hydrocarbon logistics. Synthesis, one of the products within LMS, is an enterprise web-based system that tracks liquid hydrocarbons from customer contracts, orders, and inventory all the way to charges, billing and invoicing (Order to Cash). It is a highly configurable and extensible system, which allows it to cater to the unique business processes of energy companies world-wide. In this Role, Your Responsibilities Will Be: Determine coding design requirements from functional and detailed specifications. Analyze software bugs and effect code repairs. Design, develop, and deliver specified software features. Produce usable documentation and test procedures. Deal directly with the end clients to assist in software validation and deployment. Explore and evaluate opportunities to integrate AI/ML capabilities into the LMS suite, particularly for predictive analytics, optimization, and automation. Who You Are: You quickly and decisively act in constantly evolving, unexpected situations. You adjust communication content and style to meet the needs of diverse partners. You always keep the end in sight and put in extra effort to meet deadlines. You analyze multiple and diverse sources of information to define problems accurately before moving to solutions. You observe situational and group dynamics and select the best-fit approach. For This Role, You Will Need: BS in Computer Science, Engineering, Mathematics or technical equivalent. 2 to 7 years of experience required.
Strong problem-solving skills. Strong programming skills (.NET stack, C#, ASP.NET, web development technologies, HTML5, JavaScript, WCF, MS SQL Server Transact-SQL). Strong communication skills (client facing). Flexibility to work harmoniously with a small development team. Familiarity with AI/ML concepts and techniques, including traditional machine learning algorithms (e.g., regression, classification, clustering) and modern Large Language Models (LLMs). Experience with machine learning libraries and frameworks (e.g., TensorFlow, PyTorch, scikit-learn). Experience in developing and deploying machine learning models. Understanding of data preprocessing, feature engineering, and model evaluation techniques. Preferred Qualifications that Set You Apart: Experience with liquid pipeline operations or volumetric accounting is a plus. Knowledge of the oil and gas pipeline industry is also a plus. Experience with cloud-based AI/ML services (e.g., Azure Machine Learning, AWS SageMaker, Google Cloud AI Platform) is a plus. Our Culture & Commitment to You At Emerson, we prioritize a workplace where every employee is valued, respected, and empowered to grow. We foster an environment that encourages innovation, collaboration, and diverse perspectives—because we know that great ideas come from great teams. Our commitment to ongoing career development and growing an inclusive culture ensures you have the support to thrive. Whether through mentorship, training, or leadership opportunities, we invest in your success so you can make a lasting impact. We believe diverse teams, working together, are key to driving growth and delivering business results. We recognize the importance of employee wellbeing. We prioritize providing competitive benefits plans, a variety of medical insurance plans, an Employee Assistance Program, employee resource groups, recognition, and much more.
Our culture offers flexible time off plans, including paid parental leave (maternal and paternal), vacation and holiday leave. About Us WHY EMERSON Our Commitment to Our People At Emerson, we are motivated by a spirit of collaboration that helps our diverse, multicultural teams across the world drive innovation that makes the world healthier, safer, smarter, and more sustainable. And we want you to join us in our bold aspiration. We have built an engaged community of inquisitive, dedicated people who thrive knowing they are welcomed, trusted, celebrated, and empowered to solve the world’s most complex problems — for our customers, our communities, and the planet. You’ll contribute to this vital work while further developing your skills through our award-winning employee development programs. We are a proud corporate citizen in every city where we operate and are committed to our people, our communities, and the world at large. We take this responsibility seriously and strive to make a positive impact through every endeavor. At Emerson, you’ll see firsthand that our people are at the center of everything we do. So, let’s go. Let’s think differently. Learn, collaborate, and grow. Seek opportunity. Push boundaries. Be empowered to make things better. Speed up to break through. Let’s go, together. Accessibility Assistance or Accommodation If you have a disability and are having difficulty accessing or using this website to apply for a position, please contact: idisability.administrator@emerson.com . About Emerson Emerson is a global leader in automation technology and software. Through our deep domain expertise and legacy of flawless execution, Emerson helps customers in critical industries like life sciences, energy, power and renewables, chemical and advanced factory automation operate more sustainably while improving productivity, energy security and reliability. 
With global operations and a comprehensive portfolio of software and technology, we are helping companies implement digital transformation to measurably improve their operations, conserve valuable resources and enhance their safety. We offer equitable opportunities, celebrate diversity, and embrace challenges with confidence that, together, we can make an impact across a broad spectrum of countries and industries. Whether you’re an established professional looking for a career change, an undergraduate student exploring possibilities, or a recent graduate with an advanced degree, you’ll find your chance to make a difference with Emerson. Join our team – let’s go! No calls or agencies please.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


We’re Eagerminds—an AI-first product studio that helps startups ship cloud-native features in days. If you love turning AWS knobs, automating everything, and squeezing every rupee out of a bill, we’d love to meet you.

What you’ll own

  • Build & run production workloads on AWS (VPC, ECS/EKS, Lambda, RDS, S3, CloudFront, IAM, etc.).
  • Write clean Infrastructure-as-Code (Terraform/CDK) and wire up CI/CD with GitHub Actions.
  • Keep us safe & compliant—GuardDuty, Config, KMS, Security Hub, HIPAA controls.
  • Slash costs via right-sizing, RI/SP planning, and Graviton migrations.
  • Monitor, alert, and debug with CloudWatch, Prometheus/Grafana, and runbooks.

What you bring

  • 2–3 years of solid AWS production experience.
  • Strong Linux & networking fundamentals plus scripting (Bash/Python or similar).
  • Hands-on with Docker and either ECS or EKS.
  • Clear communication in English/Hindi/Gujarati and a bias for action.

Bonus points

  • Serverless chops (EventBridge, Step Functions, SAM).
  • Exposure to compliance (HIPAA, SOC 2) or FinOps dashboards.
  • Familiarity with AI/ML stacks on AWS (Bedrock, SageMaker).

Why Eagerminds

  • Direct impact—your work ships the same week.
  • Founder-level visibility in a small, no-bureaucracy team.
  • Competitive salary + performance bonus.
  • MacBook, 27″ monitor, flexible hours (on-site), Friday demo days, learning stipend.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job description

Job Role: AI Engineer
Experience: 3 to 5 Years
Location: Client Office – Pune, India
Job Type: Full-Time
Department: Artificial Intelligence / Engineering
Work Mode: On-site at client location

About the Role

We are seeking a highly skilled and versatile Senior AI Engineer with 3 to 5 years of hands-on experience to join our client’s team in Pune. This role focuses on designing, developing, and deploying cutting-edge AI and machine learning solutions for high-scale, high-concurrency applications where security, scalability, and performance are paramount. You will work closely with cross-functional teams, including data scientists, DevOps engineers, security specialists, and business stakeholders, to deliver robust AI solutions that drive measurable business impact in dynamic, large-scale environments.

Key Responsibilities

  • Architect, develop, and deploy advanced machine learning and deep learning models across domains like NLP, computer vision, predictive analytics, or reinforcement learning, ensuring scalability and performance under high-traffic conditions.
  • Preprocess, clean, and analyze large-scale structured and unstructured datasets using advanced statistical, ML, and big data techniques.
  • Collaborate with data engineering and DevOps teams to integrate AI/ML models into production-grade pipelines, ensuring seamless operation under high concurrency.
  • Optimize models for latency, throughput, accuracy, and resource efficiency, leveraging distributed computing and parallel processing where necessary.
  • Implement robust security measures, including data encryption, secure model deployment, and adherence to compliance standards (e.g., GDPR, CCPA).
  • Partner with client-side technical teams to translate complex business requirements into scalable, secure AI-driven solutions.
  • Stay at the forefront of AI/ML advancements, experimenting with emerging tools, frameworks, and techniques (e.g., generative AI, federated learning, or AutoML).
  • Write clean, modular, and maintainable code, along with comprehensive documentation and reports for model explainability, reproducibility, and auditability.
  • Proactively monitor and maintain deployed models, ensuring reliability and performance in production environments with millions of concurrent users.

Required Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Machine Learning, Data Science, or a related technical field.
  • 5+ years of experience building and deploying AI/ML models in production environments with high-scale traffic and concurrency.
  • Advanced proficiency in Python and modern AI/ML frameworks, including TensorFlow, PyTorch, Scikit-learn, and JAX.
  • Hands-on expertise in at least two of the following domains: NLP, computer vision, time-series forecasting, or generative AI.
  • Deep understanding of the end-to-end ML lifecycle, including data preprocessing, feature engineering, hyperparameter tuning, model evaluation, and deployment.
  • Proven experience with cloud platforms (AWS, GCP, or Azure) and their AI/ML services (e.g., SageMaker, Vertex AI, or Azure ML).
  • Strong knowledge of containerization (Docker, Kubernetes) and RESTful API development for secure and scalable model deployment.
  • Familiarity with secure coding practices, data privacy regulations, and techniques for safeguarding AI systems against adversarial attacks.
Preferred Skills

  • Expertise in MLOps frameworks and tools such as MLflow, Kubeflow, or SageMaker for streamlined model lifecycle management.
  • Hands-on experience with large language models (LLMs) or generative AI frameworks (e.g., Hugging Face Transformers, LangChain, or Llama).
  • Proficiency in big data technologies and orchestration tools (e.g., Apache Spark, Airflow, or Kafka) for handling massive datasets and real-time pipelines.
  • Experience with distributed training techniques (e.g., Horovod, Ray, or TensorFlow Distributed) for large-scale model development.
  • Knowledge of CI/CD pipelines and infrastructure-as-code tools (e.g., Terraform, Ansible) for scalable and automated deployments.
  • Familiarity with security frameworks and tools for AI systems, such as model hardening, differential privacy, or encrypted computation.
  • Proven ability to work in global, client-facing roles, with strong communication skills to bridge technical and business teams.

Share your CV at hr.mobilefirst@gmail.com or call 6355560672.
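The hyperparameter tuning step in the ML lifecycle described by this posting can be sketched with scikit-learn's GridSearchCV. The dataset, model, and parameter grid below are illustrative stand-ins, not part of any employer's actual stack:

```python
# Illustrative sketch: hyperparameter tuning via cross-validated grid search.
# Dataset, model, and grid are arbitrary choices for demonstration only.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)

# Search a small grid of regularization strengths with 5-fold cross-validation.
grid = {"C": [0.01, 0.1, 1.0, 10.0]}
search = GridSearchCV(LogisticRegression(max_iter=1000), grid, cv=5)
search.fit(X_train, y_train)

best_c = search.best_params_["C"]          # winning hyperparameter value
test_accuracy = search.score(X_test, y_test)  # held-out evaluation
```

The same pattern scales up in managed services: SageMaker's hyperparameter tuning jobs, for example, run an analogous search across training jobs rather than in-process.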

Posted 1 week ago

Apply

4.0 years

0 Lacs

India

Remote


Job Title: Data Scientist
Location: Remote
Job Type: Full-Time | Permanent
Experience Required: 4+ Years

About the Role

We are looking for a highly motivated and analytical Data Scientist with 4+ years of industry experience to join our data team. The ideal candidate will have a strong background in Python and SQL, plus experience deploying machine learning models using AWS SageMaker. You will be responsible for solving complex business problems with data-driven solutions, developing models, and helping scale machine learning systems into production environments.

Key Responsibilities

Model Development:
  • Design, develop, and validate machine learning models for classification, regression, and clustering tasks.
  • Work with structured and unstructured data to extract actionable insights and drive business outcomes.

Deployment & MLOps:
  • Deploy machine learning models using AWS SageMaker, including model training, tuning, hosting, and monitoring.
  • Build reusable pipelines for model deployment, automation, and performance tracking.

Data Exploration & Feature Engineering:
  • Perform data wrangling, preprocessing, and feature engineering using Python and SQL.
  • Conduct exploratory data analysis (EDA) to identify patterns and anomalies.

Collaboration:
  • Work closely with data engineers, product managers, and business stakeholders to define data problems and deliver scalable solutions.
  • Present model results and insights to both technical and non-technical audiences.

Continuous Improvement:
  • Stay updated on the latest advancements in machine learning, AI, and cloud technologies.
  • Suggest and implement best practices for experimentation, model governance, and documentation.

Required Skills & Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, or a related field.
  • 4+ years of hands-on experience in data science, machine learning, or applied AI roles.
  • Proficiency in Python for data analysis, model development, and scripting.
  • Strong SQL skills for querying and manipulating large datasets.
  • Hands-on experience with AWS SageMaker, including model training, deployment, and monitoring.
  • Solid understanding of machine learning algorithms and techniques (supervised/unsupervised).
  • Familiarity with libraries such as Pandas, NumPy, Scikit-learn, Matplotlib, and Seaborn.
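The model-development loop this role describes (train a supervised model, then validate it on held-out data) can be sketched locally with scikit-learn. The dataset and model choice here are illustrative; a SageMaker deployment would wrap a similar estimator in a training job and hosted endpoint rather than running it in-process:

```python
# Minimal local sketch of the train/validate loop: fit a classifier on a
# training split, then score it on held-out data. Dataset and model are
# illustrative examples, not a prescribed stack.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Held-out accuracy is the simplest of the evaluation metrics the role lists.
accuracy = accuracy_score(y_test, model.predict(X_test))
```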

Posted 1 week ago

Apply

8.0 years

0 Lacs

Kerala, India

On-site


Job Requirements

We are seeking an experienced and visionary AI Architect / Subject Matter Expert (SME) to lead the design, development, and deployment of scalable artificial intelligence and machine learning (AI/ML) solutions. The ideal candidate will have deep expertise in AI technologies, hands-on experience in model development, and the ability to translate business challenges into technical strategies. As an AI Architect, you will serve as a strategic advisor, technology leader, and innovation driver, ensuring our AI solutions are impactful, ethical, and aligned with enterprise goals.

Roles & Responsibilities

  • Define the AI vision and roadmap aligned with organizational objectives.
  • Design enterprise-grade AI/ML architectures integrating with data pipelines, APIs, cloud services, and business applications.
  • Evaluate and recommend AI platforms, tools, and frameworks.
  • Lead the end-to-end lifecycle of ML models: from data exploration, feature engineering, and training to deployment and monitoring.
  • Work on supervised, unsupervised, and reinforcement learning models, as well as generative AI (LLMs, diffusion models, etc.).
  • Apply deep learning (CNNs, RNNs, Transformers) for use cases in NLP, computer vision, or forecasting.
  • Collaborate with data engineers to build robust data pipelines and preprocessing workflows.
  • Implement MLOps practices for continuous integration, model versioning, retraining, and deployment.
  • Integrate models with cloud services (AWS SageMaker, Azure ML, GCP Vertex AI, etc.).
  • Ensure AI solutions are transparent, explainable, fair, and compliant with data privacy laws (e.g., GDPR, HIPAA).
  • Establish AI governance policies, including risk assessments, audits, and bias detection mechanisms.
  • Act as a technical SME across AI initiatives, mentoring data scientists, engineers, and analysts.
  • Lead cross-functional teams in developing POCs and MVPs for high-impact AI projects.
  • Collaborate with executives, product managers, and stakeholders to prioritize use cases.
  • Stay abreast of cutting-edge AI research, tools, and industry trends.
  • Drive innovation through rapid prototyping, collaboration with academia/startups, and participation in AI communities.
  • Publish internal white papers or contribute to patents and publications.

Work Experience

Required Skills

  • Bachelor's or Master’s in Computer Science, AI, Data Science, or a related field (PhD preferred).
  • 8+ years in software or data engineering, with 4+ years in AI/ML roles.
  • Proficient in Python and frameworks like TensorFlow, PyTorch, Scikit-learn, or Hugging Face Transformers.
  • Strong grasp of algorithms, statistics, and probability.
  • Experience deploying models in production via REST APIs, batch scoring, or real-time inference.
  • Hands-on with cloud AI services (AWS, Azure, GCP).
  • Experience with data tools: Spark, Kafka, Airflow, Snowflake, etc.

Preferred Qualifications

  • Certifications in AI/ML (e.g., AWS Certified Machine Learning, Google Cloud ML Engineer).
  • Experience with LLMs, RAG pipelines, or enterprise chatbot systems.
  • Knowledge of vector databases (e.g., Pinecone, Weaviate, FAISS).
  • Contributions to open-source AI projects or academic publications.
  • Experience in regulated industries (healthcare, finance, etc.) applying ethical AI principles.

Posted 1 week ago

Apply

Exploring Sagemaker Jobs in India

Amazon SageMaker expertise is in rapidly growing demand in India, with many companies looking to hire professionals skilled in this area. Whether you are a seasoned professional or a newcomer to the tech industry, there are plenty of opportunities waiting for you in the SageMaker job market.

Top Hiring Locations in India

If you are looking to land a SageMaker job in India, here are the top 5 cities where companies are actively hiring for roles in this field:

  • Bangalore
  • Hyderabad
  • Pune
  • Mumbai
  • Chennai

Average Salary Range

The salary range for SageMaker professionals in India can vary based on experience and location. On average, entry-level professionals can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.

Career Path

In the SageMaker field, a typical career progression may look like this:

  • Junior SageMaker Developer
  • SageMaker Developer
  • Senior SageMaker Developer
  • SageMaker Tech Lead

Related Skills

In addition to expertise in SageMaker, professionals in this field are often expected to have knowledge of the following skills:

  • Machine Learning
  • Data Science
  • Python programming
  • Cloud computing (AWS)
  • Deep learning

Interview Questions

Here are sample interview questions that you may encounter when applying for SageMaker roles, categorized by difficulty level:

  • Basic:
  • What is Amazon SageMaker?
  • How does SageMaker differ from running machine learning workloads on self-managed infrastructure?
  • What is a SageMaker notebook instance?

  • Medium:
  • How do you deploy a model in SageMaker?
  • Can you explain the process of hyperparameter tuning in SageMaker?
  • What is the difference between SageMaker Ground Truth and SageMaker Processing?

  • Advanced:
  • How would you handle model drift in a SageMaker deployment?
  • Can you compare SageMaker with other machine learning platforms in terms of scalability and flexibility?
  • How do you optimize a SageMaker model for cost efficiency?
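The model-drift question above is often answered with a distribution-shift check on incoming features. A minimal, library-free sketch using the Population Stability Index (PSI), a common drift metric, might look like the following. The bin count, threshold, and synthetic data are illustrative; in a real SageMaker deployment a check like this would typically run as a scheduled monitoring job (e.g., via SageMaker Model Monitor) rather than inline:

```python
# Illustrative drift check using the Population Stability Index (PSI).
# Baseline = data the model was trained on; current = recent live traffic.
# A common rule of thumb: PSI < 0.1 is stable, PSI > 0.25 signals drift.
import math
import random


def psi(baseline, current, n_bins=10):
    """PSI between two 1-D samples, binned on baseline quantiles."""
    sorted_base = sorted(baseline)
    # Quantile-based bin edges taken from the baseline distribution.
    edges = [sorted_base[int(len(sorted_base) * i / n_bins)]
             for i in range(1, n_bins)]

    def proportions(sample):
        counts = [0] * n_bins
        for x in sample:
            idx = sum(1 for e in edges if x >= e)  # bin index for x
            counts[idx] += 1
        # Tiny epsilon avoids log(0) for empty bins.
        return [(c + 1e-6) / (len(sample) + 1e-6 * n_bins) for c in counts]

    p = proportions(baseline)
    q = proportions(current)
    return sum((qi - pi) * math.log(qi / pi) for pi, qi in zip(p, q))


random.seed(0)
baseline = [random.gauss(0.0, 1.0) for _ in range(5000)]
same = [random.gauss(0.0, 1.0) for _ in range(5000)]     # no drift
shifted = [random.gauss(1.0, 1.0) for _ in range(5000)]  # mean shift

psi_same = psi(baseline, same)        # small: distributions match
psi_shifted = psi(baseline, shifted)  # large: clear drift
```

When drift is detected, typical responses are retraining on fresh data, rolling back to an earlier model version, or alerting an on-call owner.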

Closing Remark

As you explore opportunities in the SageMaker job market in India, remember to hone your skills, stay updated with industry trends, and approach interviews with confidence. With the right preparation and mindset, you can land your dream job in this exciting and evolving field. Good luck!
