
1558 Sagemaker Jobs - Page 26

Set up a Job Alert
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

4.0 - 6.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Title: Sr. Data Engineer
Location: Office-Based (Ahmedabad, India)

About Hitech
Hitech is a leading provider of Data, Engineering Services, and Business Process Solutions. With robust delivery centers in India and global sales offices in the USA, UK, and the Netherlands, we enable digital transformation for clients across industries including Manufacturing, Real Estate, and e-Commerce. Our Data Solutions practice integrates automation, digitalization, and outsourcing to deliver measurable business outcomes. We are expanding our engineering team and looking for an experienced Sr. Data Engineer to design scalable data pipelines, support ML model deployment, and enable insight-driven decisions.

Position Summary
We are seeking a Data Engineer / Lead Data Engineer with deep experience in data architecture, ETL pipelines, and advanced analytics support. This role is crucial for designing robust pipelines that process structured and unstructured data, integrate ML models, and ensure data reliability. The ideal candidate will be proficient in Python, R, SQL, and cloud-based tools, with hands-on experience creating end-to-end data engineering solutions that support data science and analytics teams.

Key Responsibilities
- Design and optimize data pipelines to ingest, transform, and load data from diverse sources.
- Build programmatic ETL pipelines using SQL and related platforms.
- Understand complex data structures and perform data transformation effectively.
- Develop and support ML models such as Random Forest, SVM, clustering, and regression.
- Create and manage scalable, secure data warehouses and data lakes.
- Collaborate with data scientists to structure data for analysis and modeling.
- Define solution architecture for layered data stacks, ensuring high data quality.
- Develop design artifacts including data flow diagrams, models, and functional documents.
- Work with technologies such as Python, R, SQL, MS Office, and SageMaker.
- Conduct data profiling, sampling, and testing to ensure reliability.
- Collaborate with business stakeholders to identify and address data use cases.

Qualifications & Experience
- 4 to 6 years of experience in data engineering, ETL development, or database administration.
- Bachelor's degree in Mathematics, Computer Science, or Engineering (B.Tech/B.E.); a postgraduate qualification in Data Science or a related discipline is preferred.
- Strong proficiency in Python, SQL, advanced MS Office tools, and R.
- Familiarity with ML concepts and integrating models into pipelines.
- Experience with NoSQL systems such as MongoDB, Cassandra, or HBase.
- Knowledge of Snowflake, Databricks, and other cloud-based data tools.
- ETL tool experience and an understanding of data integration best practices.
- Data modeling skills for relational and NoSQL databases.
- Knowledge of Hadoop, Spark, and scalable data processing frameworks.
- Experience with scikit-learn, TensorFlow, PyTorch, GPT, PySpark, etc.
- Ability to build web scrapers and collect data from APIs.
- Experience with Airflow or similar tools for pipeline automation.
- Strong SQL performance tuning skills in large-scale environments.

What We Offer
- Competitive compensation package based on skills and experience.
- Opportunity to work with international clients and contribute to high-impact data projects.
- Continuous learning and professional growth within a tech-forward organization.
- Collaborative and inclusive work environment.

If you're passionate about building data-driven infrastructure to fuel analytics and AI applications, we look forward to connecting with you.

Anand Soni
Hitech Digital Solutions
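The core ask of this role, a programmatic ETL pipeline, can be sketched as a minimal extract/transform/load pass. This is an illustrative standard-library sketch only: the CSV sample, table name, and cleaning rules are invented, and an in-memory SQLite database stands in for the warehouse the posting describes.

```python
import csv
import io
import sqlite3

# Invented raw feed; real sources would be files, APIs, or scraped pages.
RAW = """order_id,amount,region
1,1200.50,west
2,840.00,east
3,,west
"""

def extract(text):
    """Parse the raw CSV feed into a list of dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Drop rows with missing amounts and normalize types and region names."""
    out = []
    for r in rows:
        if not r["amount"]:
            continue  # simple data-quality rule: skip incomplete records
        out.append({"order_id": int(r["order_id"]),
                    "amount": float(r["amount"]),
                    "region": r["region"].upper()})
    return out

def load(rows, conn):
    """Load cleaned rows into the target table and return a quick profile."""
    conn.execute("CREATE TABLE orders (order_id INT, amount REAL, region TEXT)")
    conn.executemany("INSERT INTO orders VALUES (:order_id, :amount, :region)", rows)
    return conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()

conn = sqlite3.connect(":memory:")
count, total = load(transform(extract(RAW)), conn)
print(count, total)  # 2 2040.5 — the incomplete row was dropped
```

In a production pipeline each stage would typically become an Airflow task, with the profiling query feeding data-quality checks.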

Posted 1 month ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Minimum of 3+ years of experience in AI-based application development.
- Fine-tune pre-existing models to improve performance and accuracy.
- Experience with TensorFlow, PyTorch, scikit-learn, or similar ML frameworks, and familiarity with APIs such as OpenAI or Vertex AI.
- Experience with NLP tools and libraries (e.g., NLTK, spaCy, GPT, BERT).
- Implement frameworks such as LangChain, Anthropic's Constitutional AI, OpenAI, and Hugging Face, along with prompt engineering techniques, to build robust and scalable AI applications.
- Evaluate and analyze RAG solutions and utilize best-in-class LLMs to define customer experience solutions, including fine-tuning large language models (LLMs).
- Architect and develop advanced generative AI solutions leveraging state-of-the-art language models such as GPT, LLaMA, PaLM, BLOOM, and others.
- Strong understanding of, and experience with, open-source multimodal LLMs to customize and create solutions.
- Explore and implement cutting-edge techniques such as few-shot learning, reinforcement learning, multi-task learning, and transfer learning for AI model training and fine-tuning.
- Proficiency in data preprocessing, feature engineering, and data visualization using tools such as Pandas, NumPy, and Matplotlib.
- Optimize model performance through experimentation, hyperparameter tuning, and advanced optimization techniques.
- Proficiency in Python, with the ability to get hands-on with coding at a deep level.
- Develop and maintain APIs using Python's FastAPI, Flask, or Django for integrating AI capabilities into various systems.
- Ability to write optimized, high-performing scripts for relational databases (e.g., MySQL, PostgreSQL) or non-relational databases (e.g., MongoDB, Cassandra).
- Enthusiasm for continuous learning and professional development in AI and related technologies.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Knowledge of cloud services such as AWS, Google Cloud, or Azure.
- Proficiency with version control systems, especially Git.
- Familiarity with data pre-processing techniques and pipeline development for AI model training.
- Experience deploying models using Docker and Kubernetes.
- Experience with AWS Bedrock and SageMaker is a plus.
- Strong problem-solving skills, with the ability to translate complex business problems into AI solutions.
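The RAG evaluation work this posting describes rests on a retrieval step: score candidate documents against a query and hand the best match to the LLM. Below is a toy, standard-library sketch of that step using bag-of-words cosine similarity; a real system would use embeddings and a vector store, and the documents here are invented.

```python
import math
from collections import Counter

# Invented knowledge base; in a real RAG system these would be chunked documents.
DOCS = {
    "returns": "Customers may return items within 30 days for a refund.",
    "shipping": "Standard shipping takes five business days worldwide.",
    "warranty": "All products carry a one year limited warranty.",
}

def vectorize(text):
    """Bag-of-words term counts; stands in for an embedding model."""
    return Counter(text.lower().replace(".", "").split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs):
    """Return the key of the document most similar to the query."""
    qv = vectorize(query)
    return max(docs, key=lambda k: cosine(qv, vectorize(docs[k])))

best = retrieve("how do I return an item for a refund", DOCS)
print(best)  # "returns"
```

The retrieved passage would then be injected into the LLM prompt as context; evaluating a RAG solution largely means measuring how often this step surfaces the right passage.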

Posted 1 month ago

Apply

3.0 years

0 Lacs

Dehradun, Uttarakhand, India

Remote

Job Description

About the Yogotribe Platform:
Yogotribe is building a transformative digital platform dedicated to wellness, connecting seekers with a diverse range of yoga retreats, meditation centers, Ayurveda clinics, and holistic wellness experiences. Our strategic approach involves a robust initial deployment using Odoo as the core platform. The foundational Phase 1 is already established on a scalable and secure AmistacX Odoo and AWS backend infrastructure, fully integrated and stable on Amazon EC2. This setup provides a solid foundation for all Odoo functionalities, setting the stage for future evolution towards a microservices-driven architecture. We are seeking a talented and experienced external Odoo Developer to join us on a project basis. Your primary responsibility will be to rapidly develop professional, high-quality custom Odoo modules to complete all remaining functionalities within our existing, integrated AWS ecosystem.

Role Summary:
As an Odoo Developer for Yogotribe, you will be responsible for the design, development, and implementation of new custom Odoo modules and enhancements within our established Odoo 17.x environment. While the AWS backend integration is already in place and stable, you will focus on building the Odoo-side functionalities that utilize these existing integrations. This is a project-based assignment focused on delivering specific functionalities. Your ability to work independently, adhere to Odoo best practices, and effectively leverage the established AWS services through Odoo will be paramount to your success.

Key Responsibilities:
- Custom Odoo Module Development: Design, develop, and implement new Odoo modules and features using Python, the Odoo ORM, QWeb, XML, and JavaScript, aligned with project requirements to complete all envisioned functionalities.
- Leveraging Existing AWS Integrations: Develop Odoo functionalities that seamlessly interact with our established AWS backend, utilizing existing integrations for services such as data storage (AWS S3 for attachments); eventing and messaging (AWS SQS, AWS SNS); email (AWS SES); and AWS Lambda for AI/ML processing (e.g., Amazon Comprehend, Rekognition).
- Code Quality & Best Practices: Write clean, maintainable, well-documented, and efficient code, adhering to Odoo development guidelines and industry best practices.
- Testing & Debugging: Conduct thorough testing of developed modules, identify and resolve bugs, and ensure module stability and performance within the integrated Odoo-AWS environment.
- Documentation: Create clear and concise technical documentation for developed Odoo modules, including design specifications, API usage, and deployment notes.
- Collaboration: Work closely with the core team to understand project requirements, provide technical insights, and deliver solutions that meet business needs.
- Deployment Support: Assist in the deployment and configuration of developed Odoo modules within the AWS EC2 environment.

Required Skills & Experience:
- Odoo Development Expertise (3+ years): Strong proficiency in Python development within the Odoo framework (ORM, API, XML, QWeb). Extensive experience developing and customizing Odoo modules (e.g., sales, CRM, accounting, website, custom models). Familiarity with Odoo 17.0 development practices is highly desirable, along with a solid understanding of Odoo architecture and module structure.
- Understanding of Odoo on AWS: Proven understanding of how Odoo operates within an AWS EC2 environment. Familiarity with the existing AWS services integrated with Odoo, particularly S3, SQS/SNS, and SES. Knowledge of AWS IAM, VPC, Security Groups, and general cloud security concepts relevant to the existing Odoo deployment.
- Database Proficiency: Experience with PostgreSQL, including schema design and query optimization.
- Version Control: Proficient with Git for source code management.
- Problem-Solving: Excellent analytical and debugging skills to troubleshoot complex Odoo functionalities within an integrated system.
- Communication: Strong verbal and written communication skills for effective collaboration in a remote, project-based setting.
- Independent Work Ethic: Proven ability to manage project tasks, deliver on time, and work effectively with minimal supervision.

Desirable (Bonus) Skills:
- Experience with front-end technologies for Odoo website customization (HTML, CSS/Tailwind CSS, JavaScript frameworks).
- Knowledge of Odoo performance optimization techniques.
- Familiarity with CI/CD pipelines (e.g., AWS CodePipeline, CodeBuild, CodeDeploy) from an Odoo module deployment perspective.
- Understanding of microservices architecture concepts and patterns, especially in the context of a future migration from the Odoo monolith.
- Prior experience with AWS AI/ML services (e.g., Comprehend, Rekognition, Personalize, SageMaker, Lex), specifically in how Odoo might interact with them via existing integrations, is a plus.

Assignment Type & Duration:
This is a project-based assignment with clearly defined deliverables and timelines for specific Odoo module development. The initial project scope will be discussed during the interview process. The feasibility of support extension or future project engagements will be decided based on the successful outcome and quality of deliverables for the current project.

To Apply:
Please submit your resume outlining your relevant Odoo development experience to hr@yogotribe.com, and fill out the Google Form: https://docs.google.com/forms/d/e/1FAIpQLSfSIHIYvr1Vlq7a98YdMXdf_XLoZfSTi88FkCYtbtE5HLTgOQ/viewform?usp=header
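The query-optimization skill this posting asks for largely comes down to knowing when the planner can use an index instead of scanning the table. A self-contained illustration using SQLite's EXPLAIN QUERY PLAN (the posting's database is PostgreSQL, where EXPLAIN plays the same role; the table and data below are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE booking (id INTEGER PRIMARY KEY, centre TEXT, starts_on TEXT)")
conn.executemany(
    "INSERT INTO booking (centre, starts_on) VALUES (?, ?)",
    [("rishikesh", "2025-01-01"), ("goa", "2025-02-01")] * 500)

def plan(sql):
    # EXPLAIN QUERY PLAN reports whether SQLite scans the table or uses an index;
    # the fourth column of each row holds the human-readable detail.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT * FROM booking WHERE centre = 'goa'"
before = plan(query)                      # full table scan
conn.execute("CREATE INDEX idx_booking_centre ON booking (centre)")
after = plan(query)                       # index search on centre
print(before)
print(after)
```

Before the index the plan reports a SCAN of the table; afterwards it reports a SEARCH using `idx_booking_centre`. On large tables this is the difference the posting's "query optimization" requirement is pointing at.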

Posted 1 month ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Senior Applied Scientist
Noida, Uttar Pradesh, India
Date posted: Jul 03, 2025
Job number: 1833716
Work site: Microsoft on-site only
Travel: 0-25%
Role type: Individual Contributor
Profession: Research, Applied, & Data Sciences
Discipline: Applied Sciences
Employment type: Full-Time

Overview
We are part of the Web Experiences and Services Team (WEST) within the Office Online Product Group, focused on building AI Ops solutions for Online Microsoft Word, Excel, PowerPoint, OneNote, and their shared services. Our mission is to leverage AI and ML to enhance live site management, automate incident resolution, accelerate root cause analysis, and prevent incidents proactively. We are focusing on anomaly detection, predictive analytics, and error log analysis to enable more scalable and generalized solutions, driving reliability and efficiency across Office Online applications. We are looking for a Senior Applied Scientist who is passionate about applying AI and ML to incident management and site reliability. You will work as part of a multidisciplinary team, collaborating with engineers, data scientists, and domain experts to develop state-of-the-art solutions that will transform how Office Online handles site reliability and incident prevention.

At Microsoft, we are committed to diversity, inclusion, and innovation, ensuring that we build great workplaces and great products. Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Qualifications
Required Qualifications:
- Bachelor's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or a related field AND 5+ years of related experience (e.g., statistics, predictive analytics, research); OR Master's Degree in Statistics, Econometrics, Computer Science, Electrical or Computer Engineering, or a related field AND 5+ years of related experience; OR equivalent experience.
- Excellent coding and debugging skills with a deep understanding of ML/AI algorithms and data science problems.
- Experience with ML techniques such as deep learning, predictive modeling, and time series analysis.
- Excellent communication skills, including the ability to translate complex AI concepts into actionable insights for product teams.

Preferred Qualifications:
- Passion for new technologies, learning and adapting quickly, end-user quality, and customer satisfaction.
- Understanding of AI-driven operations, site reliability engineering (SRE), and production ML systems.
- Expertise in AI Ops, anomaly detection, error log analysis, and incident management solutions.
- Awareness and understanding of emerging research and technologies related to live site and incident management, such as agentic workflows for site reliability management.
- Effective verbal, visual, and written communication skills.

Responsibilities
- Collaborate with engineers, product teams, and partners to drive innovation in AI-powered site reliability.
- Develop and implement scalable AI-driven solutions for incident management, prevention, and root cause analysis across Office Online applications.
- Build AI/ML-powered solutions such as anomaly detection, predictive analytics, and error log analysis for faster mitigation and prevention of incidents.
- Ensure end-to-end integration of ML/AI-powered solutions in production, including deployment, monitoring, and refinement, leveraging cloud-based machine learning platforms (e.g., AWS SageMaker, Azure ML Service, Databricks) and MLOps tools (MLflow, Tecton, Pinecone, feature stores).
- Prototype new approaches in ML/AI, SLMs, AI agents, agentic workflows, etc., to design, run, and analyze experiments for incident detection, mitigation, and resolution.
- Fundamentals: champion and set the example in customer obsession, data security, performance, observability, and reliability.
- Optimize AI pipelines, automate incident responses, and create a feedback loop for continuous improvement.
- Stay ahead of emerging trends in AI/ML.

Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work:
- Industry-leading healthcare
- Educational resources
- Discounts on products and services
- Savings and investments
- Maternity and paternity leave
- Generous time away
- Giving programs
- Opportunities to network and connect

Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
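The anomaly detection this role centers on can start from something as simple as a z-score rule over telemetry: flag points far from the series mean in standard-deviation units. A minimal standard-library sketch with invented latency readings (production live-site systems would use seasonal baselines or learned models, and a single extreme point inflates the standard deviation, which is why the threshold here is below the textbook 3.0):

```python
from statistics import mean, stdev

def zscore_anomalies(series, threshold=3.0):
    """Return indices of points more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(series), stdev(series)
    return [i for i, x in enumerate(series) if abs(x - mu) / sigma > threshold]

# Invented request latencies in ms; the final reading is an incident-style spike.
latencies = [120, 118, 122, 119, 121, 117, 123, 120, 118, 900]

# The spike's z-score is ~2.8 (it drags the mean and stdev up itself),
# so a threshold of 2.5 isolates it while leaving the baseline unflagged.
print(zscore_anomalies(latencies, threshold=2.5))  # [9]
```

Real AI Ops pipelines replace this rule with models robust to the masking effect shown here, but the interface — a series in, suspect indices out — stays the same.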

Posted 1 month ago

Apply

3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Designation: Data Science/Machine Learning Instructor
BU: Working Professional
Location: Mumbai, Andheri (Hybrid)
Salary: up to 10-12 LPA
Week offs: Wednesday & Thursday, or Thursday & Friday

Key Responsibilities:
- Live Instruction: Teach core DS/ML topics (supervised & unsupervised learning, deep learning, model evaluation, MLOps) through interactive sessions.
- Curriculum Collaboration: Work with content teams to design labs, code walkthroughs, and real-world case studies using Python, scikit-learn, TensorFlow/PyTorch, and cloud-based DS/ML services.
- Learner Support: Field technical questions, debug code, review notebooks, and provide actionable feedback on assignments and projects.
- Project Mentorship: Guide capstone work, e.g., image/video models, NLP pipelines, recommendation systems, and deployment pipelines.
- Continuous Improvement: Analyze learner performance data to refine modules, introduce emerging topics (e.g., transformers, generative models), and enhance assessments.

Requirements:
- 3+ years of industry or academic experience in DS/ML or AI.
- Minimum Master's degree in DS, ML & AI, or CS with a specialization in AI & DS/ML.
- Proficiency in Python and ML frameworks (scikit-learn, TensorFlow, or PyTorch).
- Familiarity with MLOps tools (Docker, Kubernetes, MLflow) and cloud ML services (AWS SageMaker, GCP AI Platform, or Azure ML).
- Excellent presentation and mentoring skills in live and small-group settings.
- Prior teaching or edtech experience is a strong plus.
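The "model evaluation" topic in this curriculum is often taught by computing precision and recall by hand before introducing library helpers. A minimal lab-style sketch; the labels below are invented, and real courses would follow up with scikit-learn's metrics module:

```python
def precision_recall(y_true, y_pred, positive=1):
    """Compute precision and recall for the positive class from paired labels."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0  # of predicted positives, how many are right
    recall = tp / (tp + fn) if tp + fn else 0.0     # of actual positives, how many were found
    return precision, recall

# Invented ground truth and classifier output: 3 true positives,
# 1 false positive (index 6), 1 false negative (index 3).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(precision_recall(y_true, y_pred))  # (0.75, 0.75)
```

Walking learners through the three counters before showing `sklearn.metrics.precision_score` makes the trade-off between the two metrics concrete.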

Posted 1 month ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
An AI Data Scientist at IBM is not just a job title; it's a mindset. You'll leverage the watsonx, AWS SageMaker, and Azure OpenAI platforms to co-create AI value with clients, focusing on technology patterns to enhance repeatability and delight clients. We are seeking an experienced and innovative AI Data Scientist specialized in foundation models and large language models. In this role, you will be responsible for architecting and delivering AI solutions using cutting-edge technologies, with a strong focus on foundation models and large language models. You will work closely with customers, product managers, and development teams to understand business requirements and design custom AI solutions that address complex challenges. Experience with tools like GitHub Copilot and Amazon CodeWhisperer is desirable. Success is our passion, and your accomplishments will reflect this, driving your career forward, propelling your team to success, and helping our clients to thrive.

Day-to-Day Duties
- Proof of Concept (POC) Development: Develop POCs to validate and showcase the feasibility and effectiveness of the proposed AI solutions. Collaborate with development teams to implement and iterate on POCs, ensuring alignment with customer requirements and expectations. Help showcase the ability of Gen AI code assistants to refactor/rewrite and document code from one language to another, particularly COBOL to Java, through rapid prototypes/POCs.
- Documentation and Knowledge Sharing: Document solution architectures, design decisions, implementation details, and lessons learned. Create technical documentation, white papers, and best practice guides. Contribute to internal knowledge-sharing initiatives and mentor new team members.
- Industry Trends and Innovation: Stay up to date with the latest trends and advancements in AI, foundation models, and large language models. Evaluate emerging technologies, tools, and frameworks to assess their potential impact on solution design and implementation.

Preferred Education
Master's Degree

Required Technical and Professional Expertise
- Strong programming skills, with proficiency in Python and experience with AI frameworks such as TensorFlow, PyTorch, Keras, or Hugging Face.
- Understanding of libraries such as scikit-learn, Pandas, Matplotlib, etc.
- Familiarity with cloud platforms (e.g., Kubernetes, AWS, Azure, GCP) and related services is a plus.
- Experience and working knowledge of COBOL and Java is preferred.
- Experience in code generation, code matching, and code translation leveraging LLM capabilities (e.g., Amazon CodeWhisperer, GitHub Copilot) is a big plus.
- Soft skills: excellent interpersonal and communication skills; engagement with stakeholders for analysis and implementation; commitment to continuous learning and staying updated with advancements in AI.
- Growth mindset: demonstrate a growth mindset to understand clients' business processes and challenges.
- Experience in Python and PySpark is an added advantage.

Preferred Technical and Professional Experience
- Proven experience in designing and delivering AI solutions, with a focus on foundation models and large language models, exposure to open source, or similar technologies.
- Experience in natural language processing (NLP) and text analytics is highly desirable.
- Understanding of machine learning and deep learning algorithms.
- Strong track record in scientific publications or open-source communities.
- Experience with the full AI project lifecycle, from research and prototyping to deployment in production environments.
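The COBOL-to-Java translation work this posting describes is typically driven by a prompt handed to a code-assistant LLM. Below is a sketch of the prompt scaffolding only: the template wording, function name, and COBOL snippet are illustrative, and no specific assistant API is implied.

```python
# Illustrative prompt template for LLM-driven code modernization.
TEMPLATE = """You are a code-modernization assistant.
Translate the following {source_lang} routine into idiomatic {target_lang}.
Preserve the business logic exactly and add brief comments.

{source_lang} source:
{code}
"""

def build_translation_prompt(code, source_lang="COBOL", target_lang="Java"):
    """Fill the template with the routine to translate and the language pair."""
    return TEMPLATE.format(source_lang=source_lang, target_lang=target_lang, code=code)

cobol = "ADD AMOUNT TO TOTAL GIVING GRAND-TOTAL."
prompt = build_translation_prompt(cobol)
print(prompt)
```

In a POC the resulting string would be sent to the chosen model (e.g., via watsonx or Amazon Bedrock), and the returned Java would be compiled and tested against the original routine's behavior.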

Posted 1 month ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description
We are looking for an enthusiastic AI/ML Developer with 3-5 years of experience to design, develop, and deploy AI/ML solutions. The ideal candidate is passionate about AI, skilled in machine learning, deep learning, and MLOps, and eager to work on cutting-edge projects.

Key Skills & Experience:
- Programming: Python (TensorFlow, PyTorch, scikit-learn, Pandas).
- Machine Learning: supervised and unsupervised learning, deep learning, NLP, computer vision.
- Model Deployment: Flask, FastAPI, AWS SageMaker, Google Vertex AI, Azure ML.
- MLOps & Cloud: Docker, Kubernetes, MLflow, Kubeflow, CI/CD pipelines.
- Big Data & Databases: Spark, Dask, SQL, NoSQL (PostgreSQL, MongoDB).
- Soft Skills: strong analytical and problem-solving mindset; passion for AI innovation and continuous learning; excellent teamwork and communication abilities.

Qualifications:
- Bachelor's/Master's in Computer Science, AI, Data Science, or related fields.
- AI/ML certifications are a plus.

Career Level: IC4
Diversity & Inclusion:
An Oracle career can span industries, roles, countries, and cultures, giving you the opportunity to flourish in new roles and innovate while blending work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, and a workforce that inspires thought leadership and innovation.

Oracle offers a highly competitive suite of employee benefits designed on the principles of parity, consistency, and affordability. The overall package includes core elements such as medical coverage, life insurance, access to retirement planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business.

At Oracle, we believe that innovation starts with diversity and inclusion, and that to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process, and to perform crucial job functions in potential roles. That's why we're committed to creating a workforce where all individuals can do their best work. It's when everyone's voice is heard and valued that we're inspired to go beyond what's been done before.

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.
Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
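Model deployment, listed among this role's key skills, usually begins with serializing a trained model so a serving process (a Flask/FastAPI app or a SageMaker endpoint) can load it at startup. A minimal standard-library sketch using pickle; the `LinearModel` class and its weights are invented stand-ins for a real trained estimator, which in practice would be saved with joblib, ONNX, or a model registry:

```python
import pickle

class LinearModel:
    """Tiny stand-in for a trained regressor (illustrative only)."""
    def __init__(self, weight, bias):
        self.weight, self.bias = weight, bias

    def predict(self, x):
        return self.weight * x + self.bias

# "Training" produces a model object; serialize it to a deployable artifact.
artifact = pickle.dumps(LinearModel(weight=2.0, bias=0.5))

# At service startup the artifact is loaded once, then reused per request.
model = pickle.loads(artifact)
print(model.predict(3.0))  # 6.5
```

The same load-once/predict-many pattern underlies all the deployment targets the posting lists; only the transport (HTTP handler, endpoint container) changes around it.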

Posted 1 month ago

Apply

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Description
We are looking for an enthusiastic AI/ML Developer with 3-5 years of experience to design, develop, and deploy AI/ML solutions. The ideal candidate is passionate about AI, skilled in machine learning, deep learning, and MLOps, and eager to work on cutting-edge projects.

Key Skills & Experience:
- Programming: Python (TensorFlow, PyTorch, scikit-learn, Pandas).
- Machine Learning: supervised and unsupervised learning, deep learning, NLP, computer vision.
- Model Deployment: Flask, FastAPI, AWS SageMaker, Google Vertex AI, Azure ML.
- MLOps & Cloud: Docker, Kubernetes, MLflow, Kubeflow, CI/CD pipelines.
- Big Data & Databases: Spark, Dask, SQL, NoSQL (PostgreSQL, MongoDB).
- Soft Skills: strong analytical and problem-solving mindset; passion for AI innovation and continuous learning; excellent teamwork and communication abilities.

Qualifications:
- Bachelor's/Master's in Computer Science, AI, Data Science, or related fields.
- AI/ML certifications are a plus.

Career Level: IC4
Diversity & Inclusion:
An Oracle career can span industries, roles, countries, and cultures, giving you the opportunity to flourish in new roles and innovate while blending work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, and a workforce that inspires thought leadership and innovation.

Oracle offers a highly competitive suite of employee benefits designed on the principles of parity, consistency, and affordability. The overall package includes core elements such as medical coverage, life insurance, access to retirement planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business.

At Oracle, we believe that innovation starts with diversity and inclusion, and that to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process, and to perform crucial job functions in potential roles. That's why we're committed to creating a workforce where all individuals can do their best work. It's when everyone's voice is heard and valued that we're inspired to go beyond what's been done before.

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.
Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 1 month ago

Apply

3.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Job Description We are looking for an enthusiastic AI/ML Developer with 3-5 years of experience to design, develop, and deploy AI/ML solutions. The ideal candidate is passionate about AI, skilled in machine learning, deep learning, and MLOps, and eager to work on cutting-edge projects. Key Skills & Experience: Programming: Python (TensorFlow, PyTorch, Scikit-learn, Pandas). Machine Learning: Supervised, Unsupervised, Deep Learning, NLP, Computer Vision. Model Deployment: Flask, FastAPI, AWS SageMaker, Google Vertex AI, Azure ML. MLOps & Cloud: Docker, Kubernetes, MLflow, Kubeflow, CI/CD pipelines. Big Data & Databases: Spark, Dask, SQL, NoSQL (PostgreSQL, MongoDB). Soft Skills: Strong analytical and problem-solving mindset. Passion for AI innovation and continuous learning. Excellent teamwork and communication abilities. Qualifications: Bachelor’s/Master’s in Computer Science, AI, Data Science, or related fields. AI/ML certifications are a plus. Career Level - IC4
Diversity & Inclusion: An Oracle career can span industries, roles, countries, and cultures, giving you the opportunity to flourish in new roles and innovate while blending work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, and a workforce that inspires thought leadership and innovation. Oracle offers a highly competitive suite of employee benefits designed on the principles of parity, consistency, and affordability. The overall package includes core elements such as medical and life insurance, access to retirement planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business. At Oracle, we believe that innovation starts with diversity and inclusion, and that to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process, and in potential roles, to perform crucial job functions. That’s why we’re committed to creating a workforce where all individuals can do their best work. It’s when everyone’s voice is heard and valued that we’re inspired to go beyond what’s been done before. About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.
Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 1 month ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description We are looking for an enthusiastic AI/ML Developer with 3-5 years of experience to design, develop, and deploy AI/ML solutions. The ideal candidate is passionate about AI, skilled in machine learning, deep learning, and MLOps, and eager to work on cutting-edge projects. Key Skills & Experience: Programming: Python (TensorFlow, PyTorch, Scikit-learn, Pandas). Machine Learning: Supervised, Unsupervised, Deep Learning, NLP, Computer Vision. Model Deployment: Flask, FastAPI, AWS SageMaker, Google Vertex AI, Azure ML. MLOps & Cloud: Docker, Kubernetes, MLflow, Kubeflow, CI/CD pipelines. Big Data & Databases: Spark, Dask, SQL, NoSQL (PostgreSQL, MongoDB). Soft Skills: Strong analytical and problem-solving mindset. Passion for AI innovation and continuous learning. Excellent teamwork and communication abilities. Qualifications: Bachelor’s/Master’s in Computer Science, AI, Data Science, or related fields. AI/ML certifications are a plus. Career Level - IC4
Diversity & Inclusion: An Oracle career can span industries, roles, countries, and cultures, giving you the opportunity to flourish in new roles and innovate while blending work and life. Oracle has thrived through 40+ years of change by innovating and operating with integrity while delivering for the top companies in almost every industry. To nurture the talent that makes this happen, we are committed to an inclusive culture that celebrates and values diverse insights and perspectives, and a workforce that inspires thought leadership and innovation. Oracle offers a highly competitive suite of employee benefits designed on the principles of parity, consistency, and affordability. The overall package includes core elements such as medical and life insurance, access to retirement planning, and much more. We also encourage our employees to engage in the culture of giving back to the communities where we live and do business. At Oracle, we believe that innovation starts with diversity and inclusion, and that to create the future we need talent from various backgrounds, perspectives, and abilities. We ensure that individuals with disabilities are provided reasonable accommodation to successfully participate in the job application and interview process, and in potential roles, to perform crucial job functions. That’s why we’re committed to creating a workforce where all individuals can do their best work. It’s when everyone’s voice is heard and valued that we’re inspired to go beyond what’s been done before. About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.
Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.

Posted 1 month ago

Apply

0 years

6 - 8 Lacs

Hyderābād

On-site

Ready to shape the future of work? At Genpact, we don’t just adapt to change, we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Senior Principal Consultant - Databricks Architect! In this role, the Databricks Architect is responsible for providing technical direction and leading a group of one or more developers to address a goal. Responsibilities Architect and design solutions to meet functional and non-functional requirements. Create and review architecture and solution design artifacts. Evangelize re-use through the implementation of shared assets. Enforce adherence to architectural standards/principles, global product-specific guidelines, usability design standards, etc. Proactively guide engineering methodologies, standards, and leading practices. Guide engineering staff and review as-built configurations during the construction phase.
Provide insight and direction on the roles and responsibilities required for solution operations. Identify, communicate, and mitigate Risks, Assumptions, Issues, and Decisions throughout the full lifecycle. Consider the art of the possible, compare various architectural options based on feasibility and impact, and propose actionable plans. Demonstrate strong analytical and technical problem-solving skills. Ability to analyze and operate at various levels of abstraction. Ability to balance what is strategically right with what is practically realistic. Growing the Data Engineering business by helping customers identify opportunities to deliver improved business outcomes, designing and driving the implementation of those solutions. Growing and retaining the Data Engineering team with appropriate skills and experience to deliver high-quality services to our customers. Supporting and developing our people, including learning & development, certification & career development plans. Providing technical governance and oversight for solution design and implementation. Should have the technical foresight to understand new technology and advancements. Leading the team in the definition of best practices and repeatable methodologies in Cloud Data Engineering, including Data Storage, ETL, Data Integration & Migration, Data Warehousing, and Data Governance. Should have technical experience in Azure, AWS & GCP Cloud Data Engineering services and solutions. Contributing to Sales & Pre-sales activities including proposals, pursuits, demonstrations, and proof-of-concept initiatives. Evangelizing the Data Engineering service offerings to both internal and external stakeholders. Development of whitepapers, blogs, webinars, and other thought leadership material. Development of Go-to-Market and Service Offering definitions for Data Engineering. Working with Learning & Development teams to establish appropriate learning & certification paths for their domain.
Expand the business within existing accounts and help clients by building and sustaining strategic executive relationships, doubling up as their trusted business technology advisor. Position differentiated and custom solutions to clients based on market trends, the specific needs of the clients, and the supporting business cases. Build new Data capabilities, solutions, assets, accelerators, and team competencies. Manage multiple opportunities through the entire business cycle simultaneously, working with cross-functional teams as necessary. Qualifications we seek in you! Minimum qualifications Excellent technical architecture skills, enabling the creation of future-proof, complex global solutions. Excellent interpersonal communication and organizational skills are required to operate as a leading member of global, distributed teams that deliver quality services and solutions. Ability to rapidly gain knowledge of the organizational structure of the firm to facilitate work with groups outside of the immediate technical team. Knowledge and experience in the IT methodologies and life cycles that will be used. Familiarity with solution implementation/management, service/operations management, etc. Leadership skills to inspire and persuade others. Maintains close awareness of new and emerging technologies and their potential application for service offerings and products. Bachelor’s Degree or equivalency (CS, CE, CIS, IS, MIS, or engineering discipline) or equivalent work experience. Experience in a solution architecture role using service and hosting solutions such as private/public cloud IaaS, PaaS, and SaaS platforms. Experience in architecting and designing technical solutions for cloud-centric solutions based on industry standards using IaaS, PaaS, and SaaS capabilities. Must have strong hands-on experience with various cloud services like ADF/Lambda, ADLS/S3, Security, Monitoring, and Governance. Must have experience designing platforms on Databricks.
Hands-on experience designing and building Databricks-based solutions on any cloud platform. Hands-on experience designing and building solutions powered by DBT models and integrating them with Databricks. Must be very good at designing end-to-end solutions on a cloud platform. Must have good knowledge of Data Engineering concepts and the related cloud services. Must have good experience in Python and Spark. Must have good experience in setting up development best practices. Intermediate-level knowledge is required for Data Modelling. Good to have knowledge of Docker and Kubernetes. Experience with claims-based authentication (SAML/OAuth/OIDC), MFA, RBAC, SSO, etc. Knowledge of cloud security controls including tenant isolation, encryption at rest, encryption in transit, key management, vulnerability assessments, application firewalls, SIEM, etc. Experience building and supporting mission-critical technology components with DR capabilities. Experience with multi-tier system and service design and development for large enterprises. Extensive, real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures with a focus on cloud technologies. Exposure to infrastructure and application security technologies and approaches. Familiarity with requirements gathering techniques. Preferred qualifications Must have designed the end-to-end architecture of a unified data platform covering all aspects of the data lifecycle, from data ingestion through transformation, serving, and consumption. Must have excellent coding skills in either Python or Scala, preferably Python. Must have experience in the Data Engineering domain. Must have designed and implemented at least 2-3 projects end-to-end in Databricks.
Must have experience with Databricks components such as:
Delta Lake
dbConnect
db API 2.0
SQL Endpoint (Photon engine)
Unity Catalog
Databricks workflows orchestration
Security management
Platform governance
Data security
Must have knowledge of new features available in Databricks and their implications, along with the various possible use cases. Must have followed various architectural principles to design the solution best suited to each problem. Must be well versed with the Databricks Lakehouse concept and its implementation in enterprise environments. Must have a strong understanding of Data Warehousing and the various governance and security standards around Databricks. Must have knowledge of cluster optimization and its integration with various cloud services. Must have a good understanding of creating complex data pipelines. Must be strong in SQL and Spark SQL. Must have strong performance optimization skills to improve efficiency and reduce cost. Must have worked on designing both batch and streaming data pipelines. Must have extensive knowledge of the Spark and Hive data processing frameworks. Must have worked on any cloud (Azure, AWS, GCP) and the most common services like ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases. Must be strong in writing unit tests and integration tests. Must have strong communication skills and have worked with cross-platform teams. Must have a great attitude towards learning new skills and upskilling existing skills. Responsible for setting best practices around Databricks CI/CD. Must understand composable architecture to take fullest advantage of Databricks capabilities. Good to have REST API knowledge. Good to have an understanding of cost distribution. Good to have worked on a migration project to build a unified data platform. Good to have knowledge of DBT. Experience around DevSecOps, including Docker and Kubernetes.
Software development full lifecycle methodologies, patterns, frameworks, libraries, and tools. Knowledge of programming and scripting languages such as JavaScript, PowerShell, Bash, SQL, Java, Python, etc. Experience with data ingestion technologies such as Azure Data Factory, SSIS, Pentaho, and Alteryx. Experience with visualization tools such as Tableau and Power BI. Experience with machine learning tools such as MLflow, Databricks AI/ML, Azure ML, AWS SageMaker, etc. Experience in distilling complex technical challenges to actionable decisions for stakeholders and guiding project teams by building consensus and mediating compromises when necessary. Experience coordinating the intersection of complex system dependencies and interactions. Experience in solution delivery using common methodologies, especially SAFe Agile but also Waterfall, Iterative, etc. Demonstrated knowledge of relevant industry trends and standards. Why join Genpact? Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation. Make an impact: drive change for global enterprises and solve business challenges that matter. Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities. Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together.
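The Delta Lake requirement above centers on upsert (MERGE) semantics: update rows whose keys already exist in the target table, insert the rest. A toy illustration with plain dicts follows, purely to show the logic; a real pipeline would express this with `DeltaTable.merge(...)` or `MERGE INTO` in Spark SQL, and the record shapes here are invented:

```python
def merge_upsert(target: dict, updates: list[dict], key: str) -> dict:
    """Toy MERGE: rows matching on `key` are updated, unmatched rows are
    inserted — the whenMatchedUpdate / whenNotMatchedInsert pattern."""
    merged = dict(target)  # leave the original "table" untouched
    for row in updates:
        merged[row[key]] = row
    return merged

# Hypothetical target table keyed by id, plus a batch of incoming changes.
target = {1: {"id": 1, "status": "open"}, 2: {"id": 2, "status": "open"}}
updates = [{"id": 2, "status": "closed"}, {"id": 3, "status": "open"}]
result = merge_upsert(target, updates, "id")
print(sorted(result))  # → [1, 2, 3]
```

Delta Lake performs the same matched/not-matched split transactionally over Parquet files; the dict version only conveys the semantics an architect is expected to reason about.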
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Senior Principal Consultant Primary Location India-Hyderabad Schedule Full-time Education Level Bachelor's / Graduation / Equivalent Job Posting Jul 1, 2025, 6:40:20 AM Unposting Date Ongoing Master Skills List Digital Job Category Full Time

Posted 1 month ago

Apply

4.0 years

8 - 12 Lacs

Hyderābād

Remote

Job Title: AWS DevOps Engineer Location: Hyderabad, Ranchi Employment Type: Full-Time Experience: 4+ Years Department: Engineering / DevOps Job Overview: We are looking for an experienced AWS DevOps Engineer with a strong background in deploying and managing full-stack applications (Java, React, Angular, WordPress) and distributed/AI applications in production environments. You will architect and maintain scalable, secure, and automated DevOps pipelines and infrastructure for a range of technology stacks. Key Responsibilities: Design and maintain cloud-native infrastructure on AWS for Java-based backends, React/Angular frontends, WordPress sites, and AI/ML applications. Implement and maintain CI/CD pipelines for multi-language apps using Jenkins, GitHub Actions, GitLab CI, or AWS CodePipeline. Automate infrastructure provisioning using Terraform, CloudFormation, or similar IaC tools. Deploy and manage ElasticSearch clusters for logging, search, and analytics at scale. Set up and manage Dockerized and container-based workloads using ECS, EKS, or Fargate. Support AI/ML model deployment pipelines using S3, Lambda, SageMaker, or custom APIs. Optimize cloud usage and monitor performance, availability, and security across environments. Troubleshoot production issues, perform root cause analysis, and implement fixes. Work with development teams to implement DevOps best practices, enforce coding standards, and streamline delivery processes. Required Skills & Experience: 4+ years of hands-on DevOps experience with the AWS cloud platform. Proven experience deploying and maintaining: Java Spring Boot applications; React & Angular frontend apps; WordPress (including plugins, themes, DB optimization); distributed applications (microservices, APIs); ElasticSearch clusters in production; AI/ML applications (TensorFlow, PyTorch, Hugging Face, or AWS SageMaker). Strong experience with CI/CD pipelines and version control (Git).
Proficiency with Terraform, CloudFormation, or other IaC tools. Experience with containerization using Docker and orchestration via Kubernetes (EKS preferred) or ECS. Experience with Linux administration, shell scripting, and automation. Familiarity with monitoring & logging tools like CloudWatch, ELK Stack, Grafana, Prometheus, or DataDog. Nice to Have: AWS certifications (e.g., AWS Certified DevOps Engineer, Solutions Architect Associate). Experience with CloudFront, WAF, Route 53, and multi-region deployments. Familiarity with AI pipeline automation, model versioning, and MLOps practices. Experience managing WordPress multisite networks and headless WP setups. Understanding of security best practices, IAM policies, and compliance standards (e.g., GDPR, HIPAA). Soft Skills: Strong analytical, problem-solving, and debugging skills. Excellent communication and team collaboration skills. Ability to work independently, manage multiple projects, and meet tight deadlines. Passion for continuous improvement, automation, and DevOps culture. What We Offer: Competitive compensation and performance bonuses. Remote flexibility and flexible working hours. Opportunities for training, certifications, and attending conferences. Collaborative and tech-forward work environment. Health, dental, and wellness benefits. Job Type: Full-time Pay: ₹800,000.00 - ₹1,200,000.00 per year Benefits: Flexible schedule Work Location: In person

Posted 1 month ago

Apply

5.0 years

3 - 8 Lacs

Mohali

On-site

Job description Core Responsibilities Project Leadership: Own the end-to-end lifecycle of AI/ML projects, from scoping and prototyping to deployment and monitoring. Team Management: Lead, mentor, and grow a team of ML engineers, data scientists, and researchers. Model Development: Design, train, and evaluate machine learning and deep learning models for predictive analytics, NLP, CV, recommendation systems, etc. MLOps: Oversee model deployment, versioning, monitoring, and continuous training in production environments. Cross-Functional Collaboration: Work closely with data engineers, software developers, product managers, and business stakeholders. Research & Innovation: Evaluate cutting-edge AI/ML research and integrate suitable technologies into company products and platforms. Compliance & Ethics: Ensure responsible AI practices, addressing fairness, bias, interpretability, and compliance requirements. Technical Skills Programming: Expert in Python (NumPy, pandas, scikit-learn), and familiar with C++, R, or Java if needed. Frameworks: TensorFlow, PyTorch, Keras, Hugging Face Transformers. Data Tools: SQL, Spark, Kafka, Airflow. Cloud & DevOps: AWS/GCP/Azure, Docker, Kubernetes, MLflow, SageMaker, Vertex AI. Techniques: Supervised/unsupervised learning, deep learning, NLP, computer vision, reinforcement learning. MLOps: CI/CD for ML, monitoring (Prometheus, Grafana), feature stores, model drift detection. Typical Background 5+ years of experience in AI/ML development. 2+ years in a leadership or team lead role. Job Type: Full-time Benefits: Provident Fund Location Type: In-person Schedule: Day shift, Monday to Friday, Morning shift Work Location: In person Speak with the employer: +91 8146237069 Application Deadline: 05/07/2025
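The MLOps skill set above names model drift detection. One widely used drift metric is the Population Stability Index (PSI), which compares the binned distribution of a feature or score between a training baseline and live traffic; values above roughly 0.2 are a common rule-of-thumb alarm. A stdlib-only sketch (the bin count, smoothing, and sample data are illustrative choices, not a standard):

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 4) -> float:
    """Population Stability Index between a baseline and a live sample.
    Sums (actual% - expected%) * ln(actual% / expected%) over equal-width bins."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def bin_fracs(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            i = min(max(int((x - lo) / width), 0), bins - 1)  # clamp outliers
            counts[i] += 1
        # Laplace-smooth so ln() never sees a zero fraction
        return [(c + 1) / (len(sample) + bins) for c in counts]

    e, a = bin_fracs(expected), bin_fracs(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]   # training-time scores
shifted  = [0.6, 0.7, 0.7, 0.8, 0.8, 0.9, 0.9, 1.0]   # live scores, moved right
```

In a monitoring job, `psi(baseline, live_window)` would run on a schedule and page the team or trigger retraining when it crosses the chosen threshold.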

Posted 1 month ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site

AI/ML Engineer/Manager: Location – Pune Experience: 6+ years Notice period – Immediate to 30 days. Key Responsibilities: Lead the development of machine learning PoCs and demos using structured/tabular data for use cases such as forecasting, risk scoring, churn prediction, and optimization. Collaborate with sales engineering teams to understand client needs and present ML solutions during pre-sales calls and technical workshops. Build ML workflows using tools such as SageMaker, Azure ML, or Databricks ML and manage training, tuning, evaluation, and model packaging. Apply supervised, unsupervised, and semi-supervised techniques such as XGBoost, CatBoost, k-Means, PCA, time-series models, and more. Work with data engineering teams to define data ingestion, preprocessing, and feature engineering pipelines using Python, Spark, and cloud-native tools. Package and document ML assets so they can be scaled or transitioned into delivery teams post-demo. Stay current with best practices in ML explainability, model performance monitoring, and MLOps. Participate in internal knowledge sharing, tooling evaluation, and continuous improvement of lab processes. Qualifications: 8+ years of experience developing and deploying classical machine learning models in production or PoC environments. Strong hands-on experience with Python, pandas, scikit-learn, and ML libraries such as XGBoost, CatBoost, LightGBM, etc. Familiarity with cloud-based ML environments such as AWS SageMaker, Azure ML, or Databricks. Solid understanding of feature engineering, model tuning, cross-validation, and error analysis. Experience with unsupervised learning, clustering, anomaly detection, and dimensionality reduction techniques. Comfortable presenting models and insights to technical and non-technical stakeholders during pre-sales engagements. Working knowledge of MLOps concepts, including model versioning, deployment automation, and drift detection.
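Among the techniques listed above, k-Means is simple enough to sketch end to end. The following is a toy one-dimensional implementation of Lloyd's algorithm, useful only to show the assign/recenter loop; the sample data is invented, and real PoC work would call `sklearn.cluster.KMeans` or Spark ML instead:

```python
import random

def kmeans_1d(points: list[float], k: int = 2, iters: int = 20, seed: int = 0) -> list[float]:
    """Toy 1-D k-means (Lloyd's algorithm): assign each point to the nearest
    center, then move each center to the mean of its cluster, repeated."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # naive init; k-means++ is better in practice
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Recenter; keep the old center if a cluster ends up empty
        centers = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 9.8, 10.0, 10.2]  # two obvious groups, around 1 and 10
print(kmeans_1d(data))  # centers land near 1.0 and 10.0
```

The production version adds vectorization, multiple restarts, and a convergence check, but the demo-level intuition a pre-sales engineer needs to explain is exactly this loop.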
Interested candidates shall apply or share resumes at kanika.garg@austere.co.in.

Posted 1 month ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Minimum of 3+ years of experience in AI-based application development. Fine-tune pre-existing models to improve performance and accuracy. Experience with TensorFlow, PyTorch, Scikit-learn, or similar ML frameworks, and familiarity with APIs like OpenAI or Vertex AI. Experience with NLP tools and libraries (e.g., NLTK, SpaCy, GPT, BERT). Implement frameworks and techniques such as LangChain, Anthropic's Constitutional AI, OpenAI, Hugging Face, and prompt engineering to build robust and scalable AI applications. Evaluate and analyze RAG solutions and utilise best-in-class LLMs to define customer experience solutions (fine-tune large language models (LLMs)). Architect and develop advanced generative AI solutions leveraging state-of-the-art language models (LLMs) such as GPT, LLaMA, PaLM, BLOOM, and others. Strong understanding and experience with open-source multimodal LLM models to customize and create solutions. Explore and implement cutting-edge techniques like Few-Shot Learning, Reinforcement Learning, Multi-Task Learning, and Transfer Learning for AI model training and fine-tuning. Proficiency in data preprocessing, feature engineering, and data visualization using tools like Pandas, NumPy, and Matplotlib. Optimize model performance through experimentation, hyperparameter tuning, and advanced optimization techniques. Proficiency in Python with the ability to get hands-on with coding at a deep level. Develop and maintain APIs using Python's FastAPI, Flask, or Django for integrating AI capabilities into various systems. Ability to write optimized and high-performing scripts on relational databases (e.g., MySQL, PostgreSQL) or non-relational databases (e.g., MongoDB or Cassandra). Enthusiasm for continuous learning and professional development in AI and related technologies. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. Knowledge of cloud services like AWS, Google Cloud, or Azure.
Proficiency with version control systems, especially Git. Familiarity with data pre-processing techniques and pipeline development for AI model training. Experience with deploying models using Docker and Kubernetes. Experience with AWS Bedrock and SageMaker is a plus. Strong problem-solving skills with the ability to translate complex business problems into AI solutions.
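The listing above mentions prompt engineering and few-shot techniques. As an illustration only (the instruction text and example labels are made up, and no real LLM API is called), a few-shot prompt can be assembled like this:

```python
# Minimal sketch of few-shot prompt assembly; all example data is hypothetical.
from dataclasses import dataclass

@dataclass
class Example:
    text: str
    label: str

def build_few_shot_prompt(examples, query, instruction="Classify the sentiment."):
    """Concatenate labelled examples ahead of the query, a common few-shot pattern."""
    lines = [instruction, ""]
    for ex in examples:
        lines.append(f"Text: {ex.text}")
        lines.append(f"Label: {ex.label}")
        lines.append("")
    lines.append(f"Text: {query}")
    lines.append("Label:")
    return "\n".join(lines)

examples = [Example("Great product", "positive"), Example("Broke in a day", "negative")]
prompt = build_few_shot_prompt(examples, "Works as advertised")
print(prompt)
```

In a real application, the resulting string would be sent to a hosted model (OpenAI, Vertex AI, etc.) rather than printed.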

Posted 1 month ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Company Description HCL Software is a division of HCL Technologies (HCL) that operates its primary software business. It develops, markets, sells, and supports over 20 product families in the areas of Customer Experience, Digital Solutions, Security & Automation, and DevOps. Its mission is to drive ultimate customer success with their IT investments through relentless innovation of its products. As a Data Scientist/Researcher you will be responsible for leveraging the established architecture disciplines to help ensure that business strategies align with the powerful capabilities of AI/ML to achieve business objectives consistently and cost-effectively. We are looking for experience of 10 to 15 years only. Please apply only if you match the experience level. Location: Bangalore and Noida. Please share your CV to monica_sharma@hcl-software.com with the below details: Total Experience, Current CTC, Expected CTC, Notice Period. Main Responsibilities: Propose solutions and strategies to tackle business challenges in AI/ML products. Operationalize and architect ML/AI solutions. Qualifications, Education and Experience: PhD, Master's, or Bachelor's degree in Computer Science, Electrical Engineering, Statistics, or Mathematics with at least 10 years in ML/AI. Ability to prototype statistical analysis and modeling algorithms and apply these algorithms for data-driven solutions to problems in new domains. Proven ability to rationalize disparate data sources and to intuit the larger picture within a dataset. Experience in solving clients' analytics problems and effectively communicating results and methodologies. Software development skills in one or more scripting languages, preferably Python, and common ML tools (Weka, R, RapidMiner, KNIME, scikit-learn, Azure ML, SageMaker, Model Builder, etc.). Familiarity with ML tools and packages like OpenNLP, Caffe, TensorFlow, etc.
Knowledge of MLOps pipelines. Knowledge of machine learning and data mining techniques in one or more areas of statistical modeling methods: anomaly detection; regression, classification, and clustering; deep learning; survival analysis; similarity and recommendation; forecasting. Strong oral and written communication skills, including presentation skills. Strong problem-solving and troubleshooting skills.
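Anomaly detection is one of the statistical modeling areas this role names. As a toy illustration (the threshold and readings are made up), a z-score check flags points far from the mean:

```python
# Toy z-score anomaly detector using only the standard library.
from statistics import mean, stdev

def zscore_anomalies(values, threshold=2.0):
    """Return values whose z-score exceeds the (illustrative) threshold."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # all values identical: nothing can stand out
    return [v for v in values if abs(v - mu) / sigma > threshold]

readings = [10.1, 9.8, 10.0, 10.2, 9.9, 42.0]
print(zscore_anomalies(readings))  # the 42.0 spike is the only outlier
```

Production systems would use robust statistics or learned models rather than a fixed z-score cutoff, but the idea is the same.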

Posted 1 month ago

Apply

5.0 years

30 - 35 Lacs

Baner, Pune, Maharashtra

On-site

Job Title: Lead Artificial Intelligence Engineer Location: Baner - Pune (Hybrid) Experience: 5+ Years Shift Time: 2:00 PM to 11:00 PM IST Notice Period: Immediate to 15 days Job Overview: We are looking for a Lead Artificial Intelligence Engineer to spearhead the design, development, and deployment of cutting-edge AI solutions. This role will lead a team of AI Engineers and Data Scientists while collaborating cross-functionally to drive business impact through intelligent, data-driven systems. The ideal candidate has deep expertise in NLP, LLMs, and real-world AI implementation, with a strong foundation in software engineering and MLOps. Key Responsibilities: Team Leadership: Inspire and lead a team of AI Engineers and Data Scientists, fostering innovation, collaboration, and continuous development. Project Ownership: Take end-to-end responsibility for AI projects, from ideation and architecture to development, deployment, and monitoring. AI Development: Design and implement AI/ML models using Python, Scikit-learn, TensorFlow, PyTorch, and other modern frameworks. NLP & LLMs: Build and optimize real-world applications leveraging Natural Language Processing and Large Language Models (LLMs). MLOps & Engineering: Develop and manage end-to-end ML pipelines, integrating with CI/CD systems, using tools like Docker and cloud platforms (AWS, GCP, or Azure). System Architecture: Collaborate with software and data engineers to define robust and scalable architectures. Cross-functional Collaboration: Work closely with product managers, business teams, and engineers to align AI initiatives with business goals. Risk Management: Identify project risks early, conduct root cause analyses, and implement mitigation strategies. Documentation & Standards: Maintain technical documentation and coding best practices throughout the lifecycle. Mentorship & Growth: Coach junior team members, identify upskilling needs, and support their career development.
Required Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, Data Science, AI, or a related field. 5+ years of hands-on experience in AI/ML or Data Science in real-world applications. Strong experience in Python and libraries like NumPy, Pandas, Scikit-learn, etc. Expertise in Natural Language Processing (preferred) or Computer Vision. Experience building and deploying AI systems in production. Proficiency in ML frameworks: PyTorch, TensorFlow. Strong understanding of Generative AI and practical experience with LLMs. Experience with web frameworks such as Flask, FastAPI, or Django. Cloud experience with AWS, GCP, or Azure. Familiarity with SQL/NoSQL, Git, CI/CD, and containerization tools like Docker. Solid grasp of software engineering principles and MLOps practices. Preferred Qualifications: Experience with Big Data tools like Apache Spark, Kafka, Kinesis. Familiarity with cloud ML platforms such as AWS SageMaker or GCP ML Engine. Exposure to data visualization tools such as Tableau or Power BI. Job Type: Full-time Pay: ₹3,000,000.00 - ₹3,500,000.00 per year Application Question(s): We are hiring for this position immediately. Are you available to join within 15 days? If not, please specify your official notice period or your last working day. Have you worked with Large Language Models (LLMs) like GPT or LLaMA in real projects? Which of the following ML frameworks have you used in production? (PyTorch or TensorFlow) Have you deployed ML models using Flask, FastAPI, or Django? Do you use Docker, Git, and CI/CD tools in your ML workflow? How many years of experience do you have as a Lead AI Engineer or in a similar leadership role in AI/ML? What is your current CTC? What is your expected CTC? Location: Baner, Pune, Maharashtra (Required) Work Location: In person
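The screening questions above ask about deploying models with FastAPI/Flask and Docker. A minimal, hypothetical Dockerfile for such a service might look like the sketch below (the module name `app`, ASGI object `api`, and `requirements.txt` are assumed placeholders, not part of any listed stack):

```dockerfile
# Hypothetical minimal image for serving a Python ML model over HTTP.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
# Assumes app.py defines an ASGI application object named "api" (e.g. FastAPI).
CMD ["uvicorn", "app:api", "--host", "0.0.0.0", "--port", "8000"]
```

Real deployments would typically add a non-root user, health checks, and pinned dependency versions.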

Posted 1 month ago

Apply

0.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Job Title: AI Infrastructure Engineer Experience: 8+ Years Location: Onsite (Note: The selected candidate is required to relocate to Kovilpatti, Tamil Nadu for the initial three-month project training session. Post training, the candidate will be relocated to one of our onsite locations: Chennai, Hyderabad, or Pune, based on project allocation.) Job Summary: We are looking for an experienced AI Infrastructure Engineer to architect and manage scalable, secure, and high-performance infrastructure tailored for enterprise AI and ML applications. The ideal candidate will collaborate with data scientists, DevOps, and cybersecurity teams to build reliable platforms for efficient model development, training, and deployment. Key Responsibilities: Design and implement end-to-end AI infrastructure using cloud-native tools (Azure, AWS, GCP). Build secure and scalable compute environments with GPU/TPU acceleration for model training and inference. Develop and maintain CI/CD and MLOps pipelines for the AI/ML lifecycle. Optimize large-scale AI workloads using distributed computing and hardware-aware strategies. Manage containerized deployments using orchestration platforms like Kubernetes (AKS, EKS, GKE) and Docker. Ensure system reliability, monitoring, observability, and performance tuning for real-time inference services. Implement automated rollback, logging, and infrastructure monitoring tools. Collaborate with cybersecurity teams to enforce security, data privacy, and regulatory compliance.
Technical Skills: Cloud Platforms: Azure Machine Learning, AWS SageMaker, GCP Vertex AI Infrastructure-as-Code: Terraform, ARM Templates, Bicep Containerization & Orchestration: Docker, Kubernetes (AKS, EKS, GKE) MLOps Tools: MLflow, Kubeflow, Azure DevOps, GitHub Actions GPU/TPU Acceleration: CUDA, NVIDIA Triton Inference Server Security & Compliance: TLS, IAM, RBAC, Azure Key Vault Performance: Endpoint scaling, latency optimization, model caching, and resource allocation Qualifications: Bachelor's or Master's in Computer Engineering, Cloud Architecture, or a related field Microsoft Certified: Azure Solutions Architect or DevOps Engineer Expert (preferred) Proven experience deploying and managing large-scale ML pipelines and AI workloads Strong understanding of infrastructure security, networking, and cloud-based AI environments Job Type: Full-time Pay: Up to ₹80,000.00 per month Ability to commute/relocate: Kovilpatti, Tamil Nadu: Reliably commute or willing to relocate with an employer-provided relocation package (Required) Application Question(s): Expected Salary in Annual (INR) Experience: AI Infrastructure Engineer: 8 years (Required) Work Location: In person
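The role lists Terraform for infrastructure-as-code alongside GPU compute on Kubernetes. As an illustrative sketch only (every name, size, and variable below is a placeholder, not a vetted configuration), a GPU node group for training workloads on EKS could be declared like this:

```hcl
# Hypothetical Terraform sketch: a GPU node group for ML training on EKS.
resource "aws_eks_node_group" "gpu_training" {
  cluster_name    = aws_eks_cluster.ml.name   # assumed cluster resource
  node_group_name = "gpu-training"
  node_role_arn   = aws_iam_role.node.arn     # assumed IAM role resource
  subnet_ids      = var.private_subnet_ids    # assumed input variable
  instance_types  = ["g5.xlarge"]             # example GPU instance type

  scaling_config {
    desired_size = 1
    min_size     = 0   # scale to zero when no training jobs run
    max_size     = 4
  }
}
```

Scaling the group to zero when idle is one common lever for the cost governance this role emphasizes.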

Posted 1 month ago

Apply

0.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

Remote

Job Title: AI Infrastructure Engineer Experience: 8+ Years Location: The selected candidate is required to work onsite at our Chennai location for the initial six-month project training and execution period. After the six months, the candidate will be offered remote opportunities. Job Summary: We are looking for an experienced AI Infrastructure Engineer to architect and manage scalable, secure, and high-performance infrastructure tailored for enterprise AI and ML applications. The ideal candidate will collaborate with data scientists, DevOps, and cybersecurity teams to build reliable platforms for efficient model development, training, and deployment. Key Responsibilities: Design and implement end-to-end AI infrastructure using cloud-native tools (Azure, AWS, GCP). Build secure and scalable compute environments with GPU/TPU acceleration for model training and inference. Develop and maintain CI/CD and MLOps pipelines for the AI/ML lifecycle. Optimize large-scale AI workloads using distributed computing and hardware-aware strategies. Manage containerized deployments using orchestration platforms like Kubernetes (AKS, EKS, GKE) and Docker. Ensure system reliability, monitoring, observability, and performance tuning for real-time inference services. Implement automated rollback, logging, and infrastructure monitoring tools. Collaborate with cybersecurity teams to enforce security, data privacy, and regulatory compliance.
Technical Skills: Cloud Platforms: Azure Machine Learning, AWS SageMaker, GCP Vertex AI Infrastructure-as-Code: Terraform, ARM Templates, Bicep Containerization & Orchestration: Docker, Kubernetes (AKS, EKS, GKE) MLOps Tools: MLflow, Kubeflow, Azure DevOps, GitHub Actions GPU/TPU Acceleration: CUDA, NVIDIA Triton Inference Server Security & Compliance: TLS, IAM, RBAC, Azure Key Vault Performance: Endpoint scaling, latency optimization, model caching, and resource allocation Qualifications: Bachelor's or Master's in Computer Engineering, Cloud Architecture, or a related field Microsoft Certified: Azure Solutions Architect or DevOps Engineer Expert (preferred) Proven experience deploying and managing large-scale ML pipelines and AI workloads Strong understanding of infrastructure security, networking, and cloud-based AI environments Job Type: Full-time Pay: Up to ₹80,000.00 per month Ability to commute/relocate: Chennai, Tamil Nadu: Reliably commute or planning to relocate before starting work (Required) Application Question(s): Expected Salary in Annual (INR) Experience: AI Infrastructure Engineer : 8 years (Required) Work Location: In person

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

Remote

Job Title: AI/ML Engineer Location: 100% Remote Job Type: Full-Time About the Role: We are seeking a highly skilled and motivated AI/ML Engineer to design, develop, and deploy cutting-edge ML models and data-driven solutions. You will work closely with data scientists, software engineers, and product teams to bring AI-powered products to life and scale them effectively. Key Responsibilities: Design, build, and optimize machine learning models for classification, regression, recommendation, and NLP tasks. Collaborate with data scientists to transform prototypes into scalable, production-ready models. Deploy, monitor, and maintain ML pipelines in production environments. Perform data preprocessing, feature engineering, and selection from structured and unstructured data. Implement model performance evaluation metrics and improve accuracy through iterative tuning. Work with cloud platforms (AWS, Azure, GCP) and MLOps tools to manage model lifecycle. Maintain clear documentation and collaborate cross-functionally across teams. Stay updated with the latest ML/AI research and technologies to continuously enhance our solutions. Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field. 2–5 years of experience in ML model development and deployment. Proficient in Python and libraries such as scikit-learn, TensorFlow, PyTorch, pandas, NumPy, etc. Strong understanding of machine learning algorithms, statistical modeling, and data analysis. Experience with building and maintaining ML pipelines using tools like MLflow, Kubeflow, or Airflow. Familiarity with containerization (Docker), version control (Git), and CI/CD for ML models. Experience with cloud services such as AWS SageMaker, GCP Vertex AI, or Azure ML.
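Among the responsibilities above is implementing model performance evaluation metrics. As a small standard-library illustration (the labels are toy data), precision and recall for a binary classifier reduce to counting true/false positives and false negatives:

```python
# Stdlib sketch of an evaluation step: precision and recall from predictions.
def precision_recall(y_true, y_pred, positive=1):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = [1, 0, 1, 1, 0, 1]   # toy ground-truth labels
y_pred = [1, 1, 1, 0, 0, 1]   # toy model predictions
p, r = precision_recall(y_true, y_pred)
print(p, r)
```

In practice teams reach for scikit-learn's metrics module, but keeping the definitions straight is what iterative tuning depends on.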

Posted 1 month ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description We are seeking individuals with advanced expertise in Machine Learning (ML) to join our dynamic team. As an Applied AI ML Lead within our Corporate Sector, you will play a pivotal role in developing machine learning and deep learning solutions, and experimenting with state-of-the-art models. You will contribute to our innovative projects and drive the future of machine learning at AI Technologies. You will use your knowledge of ML tools and algorithms to deliver the right solution. You will be a part of an innovative team, working closely with our product owners, data engineers, and software engineers to build new AI/ML solutions and productionize them. You will also mentor other AI engineers and scientists while fostering a culture of continuous learning and technical excellence. We are looking for someone with a passion for data, ML, and programming, who can build ML solutions at scale with a hands-on approach and detailed technical acumen.
Integrate Generative AI within the machine learning platform using state-of-the-art techniques, driving decisions that influence product design, application functionality, and technical operations and processes. Required Qualifications, Capabilities, And Skills Formal training or certification on AI/ML concepts and 5+ years applied experience. Hands-on experience in programming languages, particularly Python. Ability to apply data science and machine learning techniques to address business challenges. Strong background in Natural Language Processing (NLP) and Large Language Models (LLMs). Expertise in deep learning frameworks such as PyTorch or TensorFlow, and advanced applied ML areas like GPU optimization, fine-tuning, embedding models, inferencing, prompt engineering, evaluation, and RAG (Similarity Search). Ability to complete tasks and projects independently with minimal supervision, with a passion for detail and follow-through. Excellent communication skills, team player, and demonstrated leadership in collaborating effectively with engineers, product managers, and other ML practitioners. Preferred Qualifications, Capabilities, And Skills Exposure to Ray, MLflow, and/or other distributed training frameworks. MS and/or PhD in Computer Science, Machine Learning, or a related field. Understanding of Search/Ranking, Recommender systems, Graph techniques, and other advanced methodologies. Familiarity with Reinforcement Learning or Meta Learning. Understanding of Large Language Model (LLM) techniques, including Agents, Planning, Reasoning, and other related methods. Experience building and deploying ML models on cloud platforms such as AWS and AWS tools like SageMaker, EKS, etc. About Us JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands.
Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We offer a competitive total rewards package including base salary determined based on the role, experience, skill set and location. Those in eligible roles may receive commission-based pay and/or discretionary incentive compensation, paid in the form of cash and/or forfeitable equity, awarded in recognition of individual achievements and contributions. We also offer a range of benefits and programs to meet employee needs, based on eligibility. These benefits include comprehensive health care coverage, on-site health and wellness centers, a retirement savings plan, backup childcare, tuition reimbursement, mental health support, financial coaching and more. Additional details about total compensation and benefits will be provided during the hiring process. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. JPMorgan Chase & Co. is an Equal Opportunity Employer, including Disability/Veterans About The Team Our professionals in our Corporate Functions cover a diverse range of areas from finance and risk to human resources and marketing. 
Our corporate teams are an essential part of our company, ensuring that we’re setting our businesses, clients, customers and employees up for success.

Posted 1 month ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title - DevOps Engineer Responsibilities Designing and building infrastructure to support our AWS services and infrastructure. Creating and utilizing tools to monitor our applications and services in the cloud, including system health indicators, trend identification, and anomaly detection. Working with development teams to help engineer scalable, reliable, and resilient software running in the cloud. Analyzing and monitoring performance bottlenecks and key metrics to optimize software and system performance. Providing analytics and forecasts for cloud capacity, troubleshooting analysis, and uptime. Qualifications Bachelor’s degree in CS or ECE. 3+ years of experience in a DevOps Engineer role. Strong experience in public cloud platforms (AWS, Azure, GCP), provisioning and managing core services (S3, EC2, RDS, EKS, ECR, EFS, SSM, IAM, etc.), with a focus on cost governance and budget optimization Proven skills in containerization and orchestration using Docker, Kubernetes (EKS/AKS/GKE), and Helm Familiarity with monitoring and observability tools such as SigNoz, OpenTelemetry, Prometheus, and Grafana Adept at designing and maintaining CI/CD pipelines using Jenkins, GitHub Actions, GitLab CI, Bitbucket pipelines, Nexus/Artifactory, and SonarQube to accelerate and secure releases Proficient in infrastructure-as-code and GitOps provisioning with technologies like Terraform, OpenTofu, Crossplane, AWS CloudFormation, Pulumi, Ansible, and ArgoCD Experience with cloud storage solutions and databases: S3, Glacier, PostgreSQL, MySQL, DynamoDB, Snowflake, Redshift Strong communication skills, translating complex technical and analytical content into clear, actionable insights for stakeholders Preferred Qualifications Experience with advanced IaC and GitOps frameworks: OpenTofu, Crossplane, Pulumi, Ansible, and ArgoCD Exposure to serverless and event-driven workflows (AWS Lambda, Step Functions) Experience operationalizing AI/ML workloads and intelligent agents (AWS SageMaker, Amazon Bedrock, canary/blue-green deployments, drift detection) Background in cost governance and budget management for cloud infrastructure Familiarity with Linux system administration at scale
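The preferred qualifications above mention drift detection for operationalized ML workloads. One simple form of input drift monitoring, sketched here with the standard library (the threshold and feature values are illustrative placeholders), compares the live feature mean against a training-time baseline:

```python
# Rough sketch of mean-shift input drift detection for a deployed model.
from statistics import mean, stdev

def mean_shift_drift(baseline, live, z_threshold=3.0):
    """Flag drift when the live mean moves more than z_threshold baseline stdevs."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return bool(live) and mean(live) != mu
    return abs(mean(live) - mu) / sigma > z_threshold

baseline = [0.48, 0.52, 0.50, 0.49, 0.51]          # training-time feature values
print(mean_shift_drift(baseline, [0.50, 0.49, 0.51]))  # stable inputs
print(mean_shift_drift(baseline, [0.90, 0.92, 0.88]))  # shifted distribution
```

Managed services (e.g., SageMaker Model Monitor) implement far richer statistics, but the core comparison is this kind of baseline-versus-live test.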

Posted 1 month ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Mandatory Skills - Gen-AI, Data Science, Python, RAG and Cloud (AWS/Azure) Secondary - Machine Learning, Deep Learning, ChatGPT, LangChain, prompt engineering, vector stores, RAG, LLaMA, computer vision, OCR, Transformers, regression, forecasting, classification, hyperparameter tuning, MLOps, inference, model training, model deployment. Job Description More than 6 years of experience in the Data Engineering, Data Science and AI/ML domain. Excellent understanding of machine learning techniques and algorithms, such as GPTs, CNN, RNN, k-NN, Naive Bayes, SVM, Decision Forests, etc. Experience using business intelligence tools (e.g. Tableau, Power BI) and data frameworks (e.g. Hadoop). Experience with cloud-native skills. Knowledge of SQL and Python; familiarity with Scala, Java or C++ is an asset. Analytical mind, business acumen, and strong math skills (e.g. statistics, algebra). Experience with common data science toolkits, such as TensorFlow, Keras, PyTorch, pandas, Microsoft CNTK, NumPy, etc. Deep expertise in at least one of these is highly desirable. Experience with NLP, NLG and Large Language Models like BERT, LLaMA, LaMDA, GPT, BLOOM, PaLM, DALL-E, etc. Great communication and presentation skills. Should have experience working in a fast-paced team culture. Experience with AI/ML and Big Data technologies like AWS SageMaker, Azure Cognitive Services, Google Colab, Jupyter Notebook, Hadoop, PySpark, HIVE, AWS EMR, etc. Experience with NoSQL databases, such as MongoDB, Cassandra, HBase, and vector databases. Good understanding of applied statistics, such as distributions, statistical testing, regression, etc. Should be a data-oriented person with an analytical mind and business acumen.
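RAG and vector stores are the mandatory skills here. Stripped of the embedding model and the database, the retrieval step is a nearest-neighbor search by cosine similarity; a bare-bones sketch with made-up two-dimensional "embeddings" (real systems use high-dimensional vectors from an embedding model):

```python
# Bare-bones sketch of the retrieval step in a RAG pipeline.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def top_k(query_vec, docs, k=1):
    """docs: list of (doc_id, vector); returns the k closest document ids."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

docs = [("faq", [0.9, 0.1]), ("pricing", [0.1, 0.9])]   # toy document embeddings
print(top_k([0.8, 0.2], docs))  # the 'faq' vector is closest to the query
```

The retrieved documents are then stuffed into the LLM prompt as context, which is the "augmented generation" half of RAG.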

Posted 1 month ago

Apply

10.0 - 15.0 years

35 - 45 Lacs

Bengaluru

Work from Office

Title: AI/ML Architect Location: Onsite Bangalore Experience: 10+ years Position Summary: We are seeking an experienced AI/ML Architect to lead the design and deployment of scalable AI solutions. This role requires a strong blend of technical depth, systems thinking, and leadership in machine learning, computer vision, and real-time analytics. You will drive the architecture for edge, on-prem, and cloud-based AI systems, integrating 3rd party data sources, sensor and vision data to enable predictive, prescriptive, and autonomous operations across industrial environments. Key Responsibilities: Architecture & Strategy Define the end-to-end architecture for AI/ML systems including time series forecasting, computer vision, and real-time classification. Design scalable ML pipelines (training, validation, deployment, retraining) using MLOps best practices. Architect hybrid deployment models supporting both cloud and edge inference for low-latency processing. Model Integration Guide the integration of ML models into the IIoT platform for real-time insights, alerting, and decision support. Support model fusion strategies combining disparate data sources, sensor streams with visual data (e.g., object detection + telemetry + 3rd party data ingestion). MLOps & Engineering Define and implement ML lifecycle tooling, including version control, CI/CD, experiment tracking, and drift detection. Ensure compliance, security, and auditability of deployed ML models. Collaboration & Leadership Collaborate with Data Scientists, ML Engineers, DevOps, Platform, and Product teams to align AI efforts with business goals. Mentor engineering and data teams in AI system design, optimization, and deployment strategies. Stay ahead of AI research and industrial best practices; evaluate and recommend emerging technologies (e.g., LLMs, vision transformers, foundation models).
Must-Have Qualifications: Bachelor’s or Master’s degree in Computer Science, AI/ML, Engineering, or a related technical field. 8+ years of experience in AI/ML development, with 3+ years in architecting AI solutions at scale. Deep understanding of ML frameworks (TensorFlow, PyTorch), time series modeling, and computer vision. Proven experience with object detection, facial recognition, intrusion detection, and anomaly detection in video or sensor environments. Experience in MLOps (MLflow, TFX, Kubeflow, SageMaker, etc.) and model deployment on Kubernetes/Docker. Proficiency in edge AI (Jetson, Coral TPU, OpenVINO) and cloud platforms (AWS, Azure, GCP). Nice-to-Have Skills: Knowledge of stream processing (Kafka, Spark Streaming, Flink). Familiarity with OT systems and IIoT protocols (MQTT, OPC-UA). Understanding of regulatory and safety compliance in AI/vision for industrial settings. Experience with charts, dashboards, and integrating AI with front-end systems (e.g., alerts, maps, command center UIs). Role Impact: As AI/ML Architect, you will shape the intelligence layer of our IIoT platform, enabling smarter, safer, and more efficient industrial operations through AI. You will bridge research and real-world impact, ensuring our AI stack is scalable, explainable, and production-grade from day one.
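Time series forecasting on sensor data is central to this role. The simplest baseline any forecasting architecture gets compared against is a moving average; a tiny illustration with made-up sensor readings:

```python
# Illustrative time-series baseline: a simple moving-average one-step forecast.
def moving_average_forecast(series, window=3):
    """Forecast the next point as the mean of the last `window` observations."""
    if len(series) < window:
        raise ValueError("series shorter than window")
    return sum(series[-window:]) / window

sensor_readings = [21.0, 22.0, 23.0, 24.0, 25.0]   # toy telemetry values
print(moving_average_forecast(sensor_readings))     # mean of the last 3 readings
```

Production forecasting would use models that capture trend and seasonality (ARIMA, Prophet, deep learning), but baselines like this anchor the evaluation.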

Posted 1 month ago

Apply

2.0 - 6.0 years

1 - 3 Lacs

Hyderābād

On-site

About the Role: Grade Level (for internal use): 09 The Team: As a member of the EDO, Collection Platforms & AI – Cognitive Engineering team you will build and maintain enterprise-scale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will learn to design resilient, production-ready systems in an AWS-based ecosystem while leading by example in a highly engaging, global environment that encourages thoughtful risk-taking and self-initiative. What’s in it for you: Be part of a global company and deliver solutions at enterprise scale Collaborate with a hands-on, technically strong team (including leadership) Solve high-complexity, high-impact problems end-to-end Build, test, deploy, and maintain production-ready pipelines from ideation through deployment Responsibilities: Develop, deploy, and operate data extraction and automation pipelines in production Integrate and deploy machine learning models into those pipelines (e.g., inference services, batch scoring) Lead critical stages of the data engineering lifecycle, including: End-to-end delivery of complex extraction, transformation, and ML deployment projects Scaling and replicating pipelines on AWS (EKS, ECS, Lambda, S3, RDS) Designing and managing DataOps processes, including Celery/Redis task queues and Airflow orchestration Implementing robust CI/CD pipelines on Azure DevOps (build, test, deployment, rollback) Writing and maintaining comprehensive unit, integration, and end-to-end tests (pytest, coverage) Strengthen data quality, reliability, and observability through logging, metrics, and automated alerts Define and evolve platform standards and best practices for code, testing, and deployment Document architecture, processes, and runbooks to ensure reproducibility and smooth hand-offs Partner closely with data scientists, ML engineers, and product teams to align on requirements, SLAs, and delivery timelines.
Technical Requirements: Expert proficiency in Python, including building extraction libraries and RESTful APIs Hands-on experience with task queues and orchestration: Celery, Redis, Airflow Strong AWS expertise: EKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch Containerization and orchestration: Docker (mandatory), basic Kubernetes (preferred) Proven experience deploying ML models to production (e.g., SageMaker, ECS, Lambda endpoints) Proficient in writing tests (unit, integration, load) and enforcing high coverage Solid understanding of CI/CD practices and hands-on experience with Azure DevOps pipelines Familiarity with SQL and NoSQL stores for extracted data (e.g., PostgreSQL, MongoDB) Strong debugging, performance tuning, and automation skills Openness to evaluate and adopt emerging tools and languages as needed Good to have: Master's or Bachelor's degree in Computer Science, Engineering, or related field 2-6 years of relevant experience in data engineering, automation, or ML deployment Prior contributions on GitHub, technical blogs, or open-source projects Basic familiarity with GenAI model integration (calling LLM or embedding APIs) What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all.
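Extraction pipelines like the ones this role describes typically wrap flaky network calls in retry logic before handing work to Celery or Airflow. A stdlib-only sketch (the decorator, delays, and the failing fetch are all illustrative, not S&P Global's actual tooling; delays are near-zero so the example runs instantly):

```python
# Sketch of a retry-with-exponential-backoff wrapper for flaky extraction calls.
import time
from functools import wraps

def retry(attempts=3, base_delay=0.001):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts - 1:
                        raise  # out of attempts: surface the error
                    time.sleep(base_delay * 2 ** attempt)  # exponential backoff
        return wrapper
    return decorator

calls = {"n": 0}

@retry(attempts=3)
def flaky_fetch():
    """Simulates a source that fails twice before succeeding."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "payload"

print(flaky_fetch())  # succeeds on the third attempt
```

Celery's own task retry mechanism and Airflow's task `retries` parameter provide the same idea at the orchestration layer.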
From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. 
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domain, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training", or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

-----------------------------------------------------------

Equal Opportunity Employer: S&P Global is an equal opportunity employer, and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.

If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law.
Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

-----------------------------------------------------------

IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
Job ID: 317425
Posted On: 2025-07-01
Location: Gurgaon, Haryana, India
