2.0 - 7.0 years
9 - 14 Lacs
Bengaluru
Work from Office
The Data Scientist organization within the Data and Analytics division is responsible for designing and implementing a unified data strategy that enables the efficient, secure, and governed use of data across the organization. We aim to create a trusted and customer-centric data ecosystem, built on a foundation of data quality, security, and openness, and guided by the Thomson Reuters Trust Principles. Our team is dedicated to developing innovative data solutions that drive business value while upholding the highest standards of data management and ethics.

About the role:
- Work with minimal supervision to solve business problems using data and analytics.
- Work in multiple business domain areas, including Customer Experience and Service, Operations, Finance, Sales, and Marketing.
- Work with various business stakeholders to understand and document requirements.
- Design an analytical framework to provide insights into a business problem.
- Explore and visualize multiple data sets to understand the data available for problem solving.
- Build end-to-end data pipelines to handle and process data at scale.
- Build machine learning models and/or statistical solutions, including predictive models.
- Use Natural Language Processing to extract insights from text.
- Design database models (if a data mart or operational data store is required to aggregate data for modeling).
- Design visualizations and build dashboards in Tableau and/or Power BI.
- Extract business insights from the data and models.
- Present results to stakeholders (and tell stories using data) using PowerPoint and/or dashboards.
- Work collaboratively with other team members.

About you:
- 6+ years of overall experience in technology roles.
- A minimum of 2 years of experience working in the data science domain.
- Has used frameworks/libraries such as Scikit-learn, PyTorch, Keras, and NLTK.
- Highly proficient in Python and SQL.
- Experience with Tableau and/or Power BI.
- Has worked with Amazon Web Services and SageMaker.
- Ability to build data pipelines for data movement using tools such as Alteryx, AWS Glue, or Informatica.
- Proficient in machine learning, statistical modelling, and data science techniques.
- Experience with one or more of the following types of business analytics applications: predictive analytics for customer retention, cross-sales, and new customer acquisition; pricing optimization models; segmentation; recommendation engines.
- Experience in one or more of the following business domains: Customer Experience and Service, Finance, Operations.
- Good presentation skills and the ability to tell stories using data and PowerPoint/dashboard visualizations.
- Excellent organizational, analytical, and problem-solving skills.
- Ability to communicate complex results in a simple and concise manner at all levels within the organization.
- Ability to excel in a fast-paced, startup-like environment.

What's in it For You:
Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office, depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensure you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
Industry Competitive Benefits: We offer comprehensive benefit plans that include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

About Us:
Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world-leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments.
At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information about Thomson Reuters can be found on thomsonreuters.com.
Posted 2 months ago
4.0 - 8.0 years
6 - 10 Lacs
Hyderabad
Work from Office
AI Opportunities with Soul AI's Expert Community! Are you an MLOps Engineer ready to take your expertise to the next level? Soul AI (by Deccan AI) is building an elite network of AI professionals, connecting top-tier talent with cutting-edge projects.

Why Join:
- Above market-standard compensation
- Contract-based or freelance opportunities (2-12 months)
- Work with industry leaders solving real AI challenges
- Flexible work locations: Remote | Onsite | Hyderabad/Bangalore

Your Role:
- Architect and optimize ML infrastructure with Kubeflow, MLflow, and SageMaker Pipelines
- Build CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI/CD)
- Automate ML workflows (feature engineering, retraining, deployment)
- Scale ML models with Docker, Kubernetes, and Airflow
- Ensure model observability, security, and cost optimization in the cloud (AWS/GCP/Azure)

Must-Have Skills:
- Proficiency in Python, TensorFlow, PyTorch, and CI/CD pipelines
- Hands-on experience with cloud ML platforms (AWS SageMaker, GCP Vertex AI, Azure ML)
- Expertise in monitoring tools (MLflow, Prometheus, Grafana)
- Knowledge of distributed data processing (Spark, Kafka)
- Bonus: experience with A/B testing, canary deployments, and serverless ML

Next Steps:
- Register on Soul AI's website
- Get shortlisted & complete screening rounds
- Join our Expert Community and get matched with top AI projects

Don't just find a job. Build your future in AI with Soul AI!
Posted 2 months ago
5.0 - 10.0 years
7 - 12 Lacs
Hyderabad
Work from Office
Position Overview: The Provider Technology Shared Services Engineering team is seeking a Software Engineer Lead Analyst for a Band 3 Contributor Career Track position. The Software Engineer Lead Analyst will play a critical role in system development within the broader Provider Technology Solutions and Engineering organization, significantly influencing Operations and Technology Product Management. This position will provide expertise in the engineering, design, installation, and startup of automated systems, including a self-service onboarding kit that enables users to begin utilizing the solution within minutes. The solutions developed will be accessible to individuals with minimal technical skills, will require no additional coding, and are intended to need zero ongoing maintenance. As a member of our team, you will operate within a high-performance, high-frequency enterprise technology environment. This role entails collaborating closely with IT management and staff to identify automated solutions that leverage existing resources with tailored configurations for each use case. The objective is to minimize redundancy in solutions while promoting an enterprise mindset focused on reusability and maintaining high standards, ultimately ensuring minimal future maintenance requirements. The Software Engineer Lead Analyst demonstrates significant creativity, foresight, and sound judgment in the conception, planning, and execution of initiatives. This role requires extensive professional knowledge and expertise to effectively advise functional leaders. Additionally, the Lead Analyst stays informed about the latest advancements in technology, including AI and machine learning, to enhance both existing and new automation solutions. These solutions are designed to optimize production costs while facilitating the addition or updating of features aimed at improving the overall software development lifecycle experience.
Responsibilities:
- Provide comprehensive consultation to business unit and IT management, as well as personnel, regarding all facets of application development, testing, and automation solutions across diverse development, financial, operational, and computing environments.
- Offer leadership and strategic vision in architectural design and AI/ML guidance for the team.
- Perform comprehensive research to identify and recommend the most efficient, cost-effective, and scalable AI/ML automation solutions applicable throughout the Software Development Life Cycle (SDLC), including test data generation, code generation, test case generation, test script generation, root cause analysis, and predictive analysis, with the aim of enhancing the overall SDLC from development to production support.
- Ensure that engineering solutions are aligned with the overall Technology strategy while addressing all application requirements.
- Demonstrate industry-leading technical abilities that enhance product quality and optimize day-to-day operations.
- Understand how changes impact work upstream and downstream, including various back-end and front-end architectural modules.
- Enhance personnel effectiveness using heat matrices to prioritize Quality and Development Engineering resources on high-impact interfaces while identifying areas of lesser focus.
- Proactively monitor and manage the design of supported automation solutions, ensuring scalability, stability, flexibility, simplicity, performance, availability, security, and capacity.
- Develop and implement automation solutions to improve engineering and operational efficiency.
- Troubleshoot and optimize automated solutions and related artifacts to ensure seamless execution in CI/CD pipelines and on local machines, minimizing software and package dependencies or conflicts to reduce cycle time.
- Execute on a strategy to hand over the automation solutions to every Agile team for adoption and use within their areas of focus, requiring zero maintenance and minimal effort for enhancements without delving into coding.
- Encourage and build automated processes wherever possible.
- Be recognized internally as a subject matter expert.

Required Skills:
- Foundations in Machine Learning and Deep Learning: Understanding of algorithms, neural networks, supervised and unsupervised learning, and deep learning frameworks like TensorFlow, PyTorch, and Keras.
- Generative Models: Knowledge of generative models such as GANs (Generative Adversarial Networks), VAEs (Variational Autoencoders), and Transformers.
- Natural Language Processing (NLP): Knowledge of NLP techniques and libraries (e.g., spaCy, NLTK, Hugging Face Transformers) for text generation tasks.
- Model Deployment: Experience with deploying models using services like TensorFlow Serving, TorchServe, or cloud-based solutions (e.g., AWS SageMaker, Google AI Platform). Basic understanding of implementing prompt engineering, fine-tuning, and RAG.
- Programming: Strong foundation and practical experience in programming languages, especially Python, within the context of AI/ML workflows; crucial for transitioning from traditional software development processes to optimized and innovative solutions that enhance market agility.
- Containerization and Orchestration: Experience with Docker and Kubernetes/OpenShift for containerization and orchestration of applications.
- CI/CD Pipelines: Knowledge of continuous integration and continuous deployment tools and practices.
- Security Best Practices: Understanding of security principles and best practices for protecting data and systems, including IAM, encryption, and network security.
- Cloud Services: Familiarity with cloud platforms like AWS, Google Cloud, or Azure for deploying and managing applications and AI models.
Required Experience & Education:
- A Bachelor's degree in Computer Science or a related field is required.
- A minimum of 5 years of experience in Software Development, including 3 years of professional experience in AI and Machine Learning engineering.
- At least 3 years of experience with Agile methodologies is required.
- Familiarity with an onshore/offshore operational model is essential.
- Demonstrated experience in the architecture, design, and development of large-scale enterprise application solutions is required.

Desired Experience:
- Proficient in AI/ML practices and automation techniques.
- Experienced in programming languages and scripting, including Python, Shell, Bash, Groovy, Ansible, and Docker.
- Providing coaching and guidance to team members.

Location & Hours of Work: Full-time position, working 40 hours per week. Expected overlap with US hours as appropriate. Primarily based in the Innovation Hub in Hyderabad, India, in a hybrid working model (3 days WFO and 2 days WAH).
Posted 2 months ago
7.0 - 12.0 years
18 - 20 Lacs
Hyderabad
Work from Office
We are hiring a Senior Python with Machine Learning Engineer (Level 3) for a US-based IT company located in Hyderabad. Candidates with a minimum of 7 years of experience in Python and machine learning can apply.

Job Title: Senior Python with Machine Learning Engineer, Level 3
Location: Hyderabad
Experience: 7+ Years
CTC: 28 LPA - 30 LPA
Working shift: Day shift

Job Description: We are seeking a highly skilled and experienced Python Developer with a strong background in Machine Learning (ML) to join our advanced analytics team. In this Level 3 role, you will be responsible for designing, building, and deploying robust ML pipelines and solutions across real-time, batch, event-driven, and edge computing environments. The ideal candidate will have extensive hands-on experience in developing and deploying ML workflows using AWS SageMaker, building scalable APIs, and integrating ML models into production systems. This role also requires a strong grasp of the complete ML lifecycle and DevOps practices specific to ML projects.
Key Responsibilities:
- Develop and deploy end-to-end ML pipelines for real-time, batch, event-triggered, and edge environments using Python
- Utilize AWS SageMaker to build, train, deploy, and monitor ML models using SageMaker Pipelines, MLflow, and Feature Store
- Build and maintain RESTful APIs for ML model serving using FastAPI, Flask, or Django
- Work with popular ML frameworks and tools such as scikit-learn, PyTorch, XGBoost, LightGBM, and MLflow
- Ensure best practices across the ML lifecycle: data preprocessing, model training, validation, deployment, and monitoring
- Implement CI/CD pipelines tailored for ML workflows using tools like Bitbucket, Jenkins, Nexus, and AUTOSYS
- Design and maintain ETL workflows for ML pipelines using PySpark, Kafka, AWS EMR, and serverless architectures
- Collaborate with cross-functional teams to align ML solutions with business objectives and deliver impactful results

Required Skills & Experience:
- 5+ years of hands-on experience with Python for scripting and ML workflow development
- 4+ years of experience with AWS SageMaker for deploying ML models and pipelines
- 3+ years of API development experience using FastAPI, Flask, or Django
- 3+ years of experience with ML tools such as scikit-learn, PyTorch, XGBoost, LightGBM, and MLflow
- Strong understanding of the complete ML lifecycle, from model development to production monitoring
- Experience implementing CI/CD for ML using Bitbucket, Jenkins, Nexus, and AUTOSYS
- Proficient in building ETL processes for ML workflows using PySpark, Kafka, and AWS EMR

Nice to Have:
- Experience with H2O.ai for advanced machine learning capabilities
- Familiarity with containerization using Docker and orchestration using Kubernetes

For further assistance, contact/WhatsApp 9354909517 or write to hema@gist.org.in
Posted 2 months ago
7.0 - 12.0 years
20 - 32 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Skills Required: Data Science, Machine Learning, Python, SQL, Marketing Analytics Domain (Preferred), AWS SageMaker (Preferred)
Manager = 7+ Years of Relevant Experience, up to 31.50 LPA
Required Candidate profile: WFO / Both-Side Cabs; Mumbai & Bangalore locations
WhatsApp Resume to Sunny - 8219742465 (DON'T CALL) & Mention "DATA SCIENCE - Manager"
Posted 2 months ago
4.0 - 8.0 years
14 - 22 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Skills Required: Data Science, Machine Learning, Python, SQL, Marketing Analytics Domain (Preferred), AWS SageMaker (Preferred)
TL = 4+ Years of Relevant Experience, up to 16.30 LPA
AM = 5+ Years of Relevant Experience, up to 22.30 LPA
Required Candidate profile: WFO / Both-Side Cabs; Mumbai & Bangalore locations
WhatsApp Resume to Sunny - 8219742465 (DON'T CALL) & Mention "DATA SCIENCE - TL / AM"
Posted 2 months ago
2.0 - 5.0 years
7 - 11 Lacs
Hyderabad
Work from Office
About Evernorth: Evernorth Health Services, a division of The Cigna Group (NYSE: CI), creates pharmacy, care, and benefits solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people.

Role Title: Business Analytics Associate Analyst

Position Summary: The Business Analysis Associate Analyst will work closely with business stakeholders, IT teams, and subject matter experts in the Global Healthcare Innovation Hub team to gather, analyze, and document business requirements for GenAI capabilities. They will translate these requirements into functional and non-functional specifications for use by IT teams in the development and integration of AI-related solutions. A minimum of two years of experience working as a business analyst or in a similar role, with knowledge of testing, is required. Experience in GenAI and deep learning technologies is desirable.

- Liaise with business stakeholders, IT teams, and subject matter experts to gather, analyze, and document GenAI-related business requirements, translating them into functional and non-functional specifications for use by IT teams in the development of AI technology solutions.
- Collaborate with project teams to design, develop, and implement IT solutions focusing on GenAI capabilities, aligning with the organization's objectives and industry best practices.
- Act as a bridge between business users and IT developers, ensuring that GenAI development efforts are consistent with the business requirements and strategic goals of the organization.
- From a QA perspective, support the user acceptance testing process for AI-related solutions, helping to identify defects, track them to resolution, and ensure the final deliverables meet the agreed-upon requirements.
- Participate in ongoing monitoring and measurement of GenAI technology solutions' effectiveness, providing insights and suggestions for continuous improvement.
- Develop and maintain documentation, such as flowcharts, use cases, data diagrams, and user manuals, to support business stakeholders in understanding the AI features and their usage.
- Assist in creating project plans, timelines, and resource allocations related to GenAI initiatives, ensuring projects progress on schedule and within budget.
- Employ strong analytical and conceptual thinking skills to help stakeholders formulate GenAI-related requirements through interviews, workshops, and other methods.
- Perform QA testing of AI models developed as part of business requirements.

Experience Required: 2+ years of experience in business and system analysis and testing, with a focus on GenAI, preferably in healthcare or a related industry.

Experience Desired:
- Proficiency in GenAI-related business analysis methodologies, including requirements gathering, process modeling, and data analysis techniques.
- Strong knowledge of software development lifecycle (SDLC) methodologies for AI-related technologies, such as Agile, Scrum, or Waterfall.
- Experience working with IT project management tools and knowledge of AI frameworks. Experience with Jira and Confluence is a must.
- Proficiency in common office productivity tools, such as the Microsoft Office Suite and Visio; experience with cloud services like AWS SageMaker and Azure ML is an added advantage.
- Technical knowledge of programming languages like C/C++, Java, and Python, and an understanding of database systems, IT infrastructure, and GPU optimization for AI workloads are added advantages.
- Excellent communication, interpersonal, and stakeholder management skills with a focus on GenAI capabilities.
- Relevant certifications, such as Certified Business Analysis Professional (CBAP) or AI-related credentials, are a plus.
- Ability to design GenAI features and define user stories based on business requirements.
- Good understanding of the software delivery lifecycle for AI-related technologies and experience creating detailed reports and presentations.

Education and Training Required: Degree in Computer Science, Artificial Intelligence, or a related field.

Location & Hours of Work: Full-time position, working 40 hours per week. Expected overlap with US hours as appropriate. Primarily based in the Innovation Hub in Hyderabad, India, with flexibility to work remotely as required.

Equal Opportunity Statement: Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform, and advance both internal practices and external work with diverse client populations.

About Evernorth Health Services: Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care, and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
Posted 2 months ago
4.0 - 8.0 years
9 - 14 Lacs
Hyderabad
Work from Office
About Evernorth: Evernorth Health Services, a division of The Cigna Group (NYSE: CI), creates pharmacy, care, and benefits solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people.

Role Title: Software Engineering Senior Analyst

Position Summary: The AI CoE team is seeking GenAI Fullstack Software Engineers to work on the AI ecosystem. The GenAI Fullstack Engineer will be responsible for designing, implementing, and deploying scalable and efficient AI solutions with a focus on privacy, security, and fairness. Key components include Angular- and React JS-based user interfaces, a robust set of Application Programming Interfaces (APIs) enabling a wide range of platform integration capabilities, a flexible Machine Learning (ML) document processing pipeline, and integration with various Robotic Process Automation (RPA) platforms. This role will work directly with business partners and other IT team members to understand desired system requirements, deliver effective solutions within the Agile methodology, and participate in all phases of the development and system support life cycle. Primary responsibilities are to build AI platform(s) leveraging LLMs and other GenAI capabilities and to provide design solutions for portal integrations, enhancement, and automation requests. The ideal candidate is a technologist who brings a fresh perspective and passion to solve complex functional and technical challenges in a fast-paced and team-oriented environment.

- Full stack development, including triage, design, coding, and implementation.
- Build enterprise-grade AI solutions with a focus on privacy, security, and fairness.
- Perform code reviews with scrum teams to approve changes for production deployment.
- Conduct research to identify new solutions and methods to fulfill diverse and evolving business needs.
- Establish, improve, and maintain proactive monitoring and management of supported assets, assuring performance, availability, security, and capacity.
- Maintain a strong and collaborative relationship with delivery partners and business stakeholders.

Experience Required: This position requires a highly technical, hands-on, motivated, and collaborative individual with exceptional communication skills and proven experience working with diverse teams of technical architects, business users, and IT areas on all phases of the software development life cycle.
- 4+ years of total experience.
- 3+ years of experience in Fullstack development (Frontend, Backend, and Cloud).
- 1+ years of experience in DevOps.
- 1+ years of experience developing GenAI solutions and integrating them with other systems.

Experience Desired:

### Front-End Development Skills
1. HTML/CSS/JavaScript: Proficiency in the core technologies for building web interfaces.
2. Front-End Frameworks: Knowledge of frameworks like React, Angular, or Vue.js for creating interactive and responsive user interfaces.
3. UI/UX Design: Basic understanding of user experience and user interface design principles.

### Back-End Development Skills
1. Server-Side Languages: Proficiency in server-side languages such as Python and Node.js.
2. APIs: Experience in building and consuming RESTful, Flask / FastAPI / GraphQL APIs.
3. Database Management: Knowledge of SQL and NoSQL databases, including MySQL, PostgreSQL, MongoDB, etc.

### DevOps and Cloud Skills
1. Containerization and Orchestration: Experience with Docker and Kubernetes/OpenShift for containerization and orchestration of applications.
2. CI/CD Pipelines: Knowledge of continuous integration and continuous deployment tools and practices.
3. Security Best Practices: Understanding of security principles and best practices for protecting data and systems, including IAM, encryption, and network security.
4. Cloud Services: Familiarity with cloud platforms like AWS, Google Cloud, or Azure for deploying and managing applications and AI models.

### AI and Machine Learning Skills (Good to have)
1. Foundations in Machine Learning and Deep Learning: Understanding of algorithms, neural networks, supervised and unsupervised learning, and deep learning frameworks like TensorFlow, PyTorch, and Keras.
2. Generative Models: Knowledge of generative models such as GANs (Generative Adversarial Networks), VAEs (Variational Autoencoders), and Transformers.
3. Natural Language Processing (NLP): Knowledge of NLP techniques and libraries (e.g., spaCy, NLTK, Hugging Face Transformers) for text generation tasks.
4. Model Deployment: Experience with deploying models using services like TensorFlow Serving, TorchServe, or cloud-based solutions (e.g., AWS SageMaker, Google AI Platform).
5. Basic understanding of implementing prompt engineering, fine-tuning, and RAG.

### Additional Skills
1. Version Control: Proficiency with Git and version control workflows.
2. Software Development Practices: Understanding of agile methodologies, testing, and code review practices.

Education and Training Required: Degree in Computer Science, Artificial Intelligence, or a related field.

Location & Hours of Work: Full-time position, working 40 hours per week. Expected overlap with US hours as appropriate. Primarily based in the Innovation Hub in Hyderabad, India, with flexibility to work remotely as required.

Equal Opportunity Statement: Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform, and advance both internal practices and external work with diverse client populations.
Posted 2 months ago
5.0 - 10.0 years
7 - 12 Lacs
Hyderabad, Bengaluru
Work from Office
Your future duties and responsibilities:
Skills: pgvector, Vertex AI, FastAPI, Flask, Kubernetes
- Develop and optimize AI applications for production, ensuring seamless integration with enterprise systems and front-end applications.
- Build scalable API layers and microservices using FastAPI, Flask, Docker, and Kubernetes to serve AI models in real-world environments.
- Implement and maintain AI pipelines with MLOps best practices, leveraging tools like Azure ML, Databricks, AWS SageMaker, and Vertex AI.
- Ensure high availability, reliability, and performance of AI systems through rigorous testing, monitoring, and optimization.
- Work with agentic frameworks such as LangChain, LangGraph, and AutoGen to build adaptive AI agents and workflows.
- Experience with GCP, AWS, or Azure, utilizing services such as Vertex AI, Bedrock, or Azure OpenAI model endpoints.
- Hands-on experience with vector databases such as pgvector, Milvus, Azure Search, and AWS OpenSearch, and with embedding models such as Ada, Titan, etc.
- Collaborate with architects and scientists to transition AI models from research to fully functional, high-performance production systems.

Skills: Azure Search, Flask, Kubernetes
Posted 2 months ago
5.0 - 8.0 years
10 - 20 Lacs
Bengaluru
Work from Office
Backend Developer - Responsibilities & Skills

Position Title: Backend Developer
Position Type: Full-time, permanent
Location: Bengaluru, India

Company Description: Privaini is the pioneer of privacy risk management for companies and their entire business networks. Privaini offers a unique "outside-in approach", empowering companies to gain a comprehensive understanding of both internal and external privacy risks. It provides actionable insights using a data-driven, systematic, and automated approach to proactively address reputation and legal risks related to data privacy. Privaini generates an AI-powered privacy profile and privacy score for a company from externally observable privacy, corporate, regulatory, historical event, and security data. Without the need for time-consuming questionnaires or installing any software, Privaini creates standardized privacy views of companies from externally observable information. Privaini then builds a privacy risk posture for every business partner in the company's business network and continuously monitors each one. Our platform provides actionable insights that privacy & risk teams can readily implement. Be part of an exciting team of researchers, developers, and data scientists focused on the mission of building transparency in data privacy risks for companies and their business networks.

Key Responsibilities:
- Strong Python, Flask, REST API, and NoSQL skills; familiarity with Docker is a plus.
- AWS Developer Associate certification is required; AWS Professional certification is preferred.
- Architect, build, and maintain secure, scalable backend services on AWS platforms.
- Utilize core AWS services like Lambda, DynamoDB, API Gateway, and serverless technologies.
- Design and deliver RESTful APIs using the Python Flask framework.
- Leverage NoSQL databases and design efficient data models for large user bases.
- Integrate with web service APIs and external systems.
- Apply AWS SageMaker for machine learning and analytics (optional but preferred).
Collaborate effectively with diverse teams (business analysts, data scientists, etc.). Troubleshoot and resolve technical issues within distributed systems. Employ Agile methodologies (JIRA, Git) and adhere to best practices. Continuously learn and adapt to new technologies and industry standards. Qualifications: A bachelor's degree in computer science, information technology, or a relevant discipline is required. A master's degree is preferred. At least 6 years of development experience, with 5+ years of experience in AWS. Must have demonstrated skills in planning, designing, developing, architecting, and implementing applications. Additional Information: At Privaini Software India Private Limited, we value diversity and always treat all employees and job applicants based on merit, qualifications, competence, and talent. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
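The posting's NoSQL data-modeling responsibility lends itself to a single-table key design. The sketch below is purely illustrative: the entity names and the `COMPANY#`/`PARTNER#` key prefixes are invented for this example, not Privaini's actual schema.

```python
# Hypothetical single-table DynamoDB-style key design for a multi-tenant app.
# Illustrative entity layout (not a real schema):
#   PK = "COMPANY#<id>"   SK = "PROFILE"        -> company privacy profile
#   PK = "COMPANY#<id>"   SK = "PARTNER#<pid>"  -> business-partner posture

def company_profile_key(company_id):
    """Key for a company's own privacy profile item."""
    return {"PK": f"COMPANY#{company_id}", "SK": "PROFILE"}

def partner_posture_key(company_id, partner_id):
    """Key for one business partner's risk posture under a company."""
    return {"PK": f"COMPANY#{company_id}", "SK": f"PARTNER#{partner_id}"}

# With boto3, these dicts would go straight into table.get_item(Key=...),
# and a Query on PK alone fetches a company plus all partner postures.
```

Composite keys like these let one query fetch a company profile together with every partner posture by partition key alone, which is the usual efficiency argument for single-table designs at large user counts.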
Posted 2 months ago
11.0 - 21.0 years
18 - 32 Lacs
Bengaluru
Work from Office
Mandatory Skills: Strong proficiency in Generative AI, Large Language Models (LLMs), deep learning, agentic frameworks, and RAG setup. Experience in designing and implementing machine learning models using scikit-learn and TensorFlow. Hands-on expertise with AI/ML frameworks such as Hugging Face, LangChain, LangGraph, and PyTorch. Cloud AI services experience, particularly in AWS SageMaker and AWS Bedrock. MLOps & DevOps: Knowledge of data pipeline setup, Apache Airflow, CI/CD, and containerization (Docker, Kubernetes). API Development: Ability to develop and maintain APIs following RESTful principles. Technical proficiency in Elastic, Python, YAML, and system integrations. Nice-to-Have Skills: Experience with Observability, Ansible, Terraform, Git, Microservices, AIOps, and scripting in Python. Familiarity with AI cloud services such as Azure OpenAI and Google Vertex AI.
Posted 2 months ago
12.0 - 17.0 years
15 - 20 Lacs
Pune, Bengaluru
Hybrid
Tech Architect AWS AI (Anthropic). Experience: 12+ years of total IT experience, with a minimum of 8 years in AI/ML architecture and solution development. Strong hands-on expertise in designing and building GenAI solutions using AWS services such as Amazon Bedrock, SageMaker, and Anthropic Claude models. Role Overview: The Tech Architect AWS AI (Anthropic) will be responsible for translating AI solution requirements into scalable and secure AWS-native architectures. This role combines architectural leadership with hands-on technical depth in GenAI model integration, data pipelines, and deployment using Amazon Bedrock and Claude models. The ideal candidate will bridge the gap between strategic AI vision and engineering execution while ensuring alignment with enterprise cloud and security standards. Key Responsibilities: Design robust, scalable architectures for GenAI use cases using Amazon Bedrock and Anthropic Claude. Lead architectural decisions involving model orchestration, prompt optimization, RAG pipelines, and API integration. Define best practices for implementing AI workflows using SageMaker, Lambda, API Gateway, and Step Functions. Review and validate implementation approaches with tech leads and developers; ensure alignment with architecture blueprints. Contribute to client proposals, solution pitch decks, and technical sections of RFP/RFI responses. Ensure AI solutions meet enterprise requirements for security, privacy, compliance, and performance. Collaborate with cloud infrastructure, data engineering, and DevOps teams to ensure seamless deployment and monitoring. Stay updated on AWS Bedrock advancements, Claude model improvements, and best practices for GenAI governance. Required Skills and Competencies: Deep hands-on experience with Amazon Bedrock, Claude (Anthropic), Amazon Titan, and embedding-based workflows. Proficient in Python and cloud-native API development; experienced with JSON, RESTful integrations, and serverless orchestration. 
Strong understanding of SageMaker (model training, tuning, pipelines), real-time inference, and deployment strategies. Knowledge of RAG architectures, vector search (e.g., OpenSearch, Pinecone), and prompt engineering techniques. Expertise in IAM, encryption, access control, and responsible AI principles for secure AI deployments. Ability to create and communicate high-quality architectural diagrams and technical documentation. Desirable Qualifications: AWS Certified Machine Learning Specialty and/or AWS Certified Solutions Architect Professional. Familiarity with LangChain, Haystack, and Semantic Kernel in an AWS context. Experience with enterprise-grade GenAI use cases such as intelligent search, document summarization, conversational AI, and code copilots. Exposure to integrating third-party model APIs and services available via AWS Marketplace. Soft Skills: Strong articulation and technical storytelling capabilities for client and executive conversations. Proven leadership in cross-functional project environments with globally distributed teams. Analytical mindset with a focus on delivering reliable, maintainable, and performant AI solutions. Self-driven, curious, and continuously exploring innovations in GenAI and AWS services. Our Offering: Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment. Wellbeing programs and work-life balance, with integration and passion-sharing events. Attractive salary and company initiative benefits. Courses and conferences. Hybrid work culture.
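As a rough sketch of the Bedrock-plus-Claude integration this role centers on, the helper below builds the JSON body that Amazon Bedrock's `invoke_model` API expects for Anthropic models in the messages format. The prompt and model ID are placeholders; treat this as an assumption-laden illustration, not a definitive implementation.

```python
import json

def build_claude_request(prompt, max_tokens=512):
    """Builds the request body for an Anthropic model behind Amazon
    Bedrock's invoke_model API (messages format)."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [
            {"role": "user", "content": [{"type": "text", "text": prompt}]}
        ],
    }

# The actual call (requires AWS credentials and Bedrock model access):
# import boto3
# client = boto3.client("bedrock-runtime")
# resp = client.invoke_model(
#     modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder ID
#     body=json.dumps(build_claude_request("Summarize this contract...")),
# )
```

Keeping the payload builder separate from the network call makes prompt templates unit-testable, which matters once prompt optimization and RAG pipelines enter the picture.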
Posted 2 months ago
1.0 - 5.0 years
27 - 32 Lacs
Karnataka
Work from Office
As a global leader in cybersecurity, CrowdStrike protects the people, processes and technologies that drive modern organizations. Since 2011, our mission hasn't changed: we're here to stop breaches, and we've redefined modern security with the world's most advanced AI-native platform. We work on large-scale distributed systems, processing almost 3 trillion events per day. We have 3.44 PB of RAM deployed across our fleet of C* servers, and this traffic is growing daily. Our customers span all industries, and they count on CrowdStrike to keep their businesses running, their communities safe and their lives moving forward. We're also a mission-driven company. We cultivate a culture that gives every CrowdStriker both the flexibility and autonomy to own their careers. We're always looking to add talented CrowdStrikers to the team who have limitless passion, a relentless focus on innovation and a fanatical commitment to our customers, our community and each other. Ready to join a mission that matters? The future of cybersecurity starts with you. About The Role: The charter of the Data + ML Platform team is to harness all the data that is ingested and cataloged within the Data LakeHouse for exploration, insights, model development, the ML model development lifecycle, ML engineering, and insights activation. This team is situated within the larger Data Platform group, which serves as one of the core pillars of our company. We process data at a truly immense scale. The data sets we process are composed of various facets including telemetry data, associated metadata, IT asset information, contextual information about threat exposure, and many more. These facets comprise the overall data platform, which is currently over 200 PB and maintained in a hyper-scale Data Lakehouse. 
We are seeking a strategic and technically savvy leader to head our Data and ML Platform team. As the head, you will be responsible for defining and building our ML Experimentation Platform from the ground up, while scaling our data and ML infrastructure to support various roles including Data Platform Engineers, Data Scientists, and Threat Analysts. Your key responsibilities will involve overseeing the design, implementation, and maintenance of scalable ML pipelines for data preparation, cataloging, feature engineering, model training, model serving, and in-field model performance monitoring. These efforts will directly influence critical business decisions. In this role, you'll foster a production-focused culture that effectively bridges the gap between model development and operational success. Furthermore, you'll be at the forefront of spearheading our ongoing Generative AI investments. The ideal candidate for this position will combine strategic vision with hands-on technical expertise in machine learning and data infrastructure, driving innovation and excellence across our data and ML initiatives. We are building this team with ownership at Bengaluru, India; this leader will help us bootstrap the entire site, starting with this team. What You'll Do: Strategic Leadership: Define the vision, strategy and roadmap for the organization's data and ML platform to align with critical business goals. Help design, build, and facilitate adoption of a modern Data+ML platform. Stay updated on emerging technologies and trends in data platforms, MLOps and AI/ML. Team Management: Build a team of Data and ML Platform engineers from a small footprint across multiple geographies. Foster a culture of innovation and strong customer commitment for both internal and external stakeholders. Platform Development: Oversee the design and implementation of a platform containing data pipelines, feature stores and model deployment frameworks. 
Develop and enhance MLOps practices to streamline model lifecycle management from development to production. Data Governance: Institute best practices for data security, compliance and quality to ensure safe and secure use of AI/ML models. Stakeholder Engagement: Partner with product, engineering and data science teams to understand requirements and translate them into platform capabilities. Communicate progress and impact to executive leadership and key stakeholders. Operational Excellence: Establish SLI/SLO metrics for observability of the Data and ML Platform, along with alerting, to ensure a high level of reliability and performance. Drive continuous improvement through data-driven insights and operational metrics. What You'll Need: 10+ years of experience in data engineering, ML platform development, or related fields, with at least 5 years in a leadership role. Familiarity with typical machine learning algorithms from an engineering perspective; familiarity with supervised/unsupervised approaches: how, why and when labeled data is created and used. Knowledge of ML platform tools like Jupyter Notebooks, NVIDIA Workbench, MLflow, Ray, Vertex AI, etc. Experience with modern MLOps platforms such as MLflow, Kubeflow or SageMaker preferred. Experience with data platform products and frameworks like Apache Spark, Flink or comparable tools in GCP, and orchestration technologies (e.g., Kubernetes, Airflow). Experience with Apache Iceberg is a plus. Deep understanding of machine learning workflows, including model training, deployment and monitoring. Familiarity with data visualization tools and techniques. Experience with bootstrapping new teams and growing them to make a large impact. Experience operating as a site lead within a company will be a bonus. Exceptional interpersonal and communication skills; work with stakeholders across multiple teams and synthesize their needs into software interfaces and processes. 
Benefits Of Working At CrowdStrike: Remote-friendly and flexible work culture. Market leader in compensation and equity awards. Comprehensive physical and mental wellness programs. Competitive vacation and holidays for recharge. Paid parental and adoption leaves. Professional development opportunities for all employees regardless of level or role. Geographic neighbourhood groups and volunteer opportunities to build connections. Vibrant office culture with world-class amenities. Great Place to Work Certified™ across the globe. CrowdStrike is proud to be an equal opportunity employer. We are committed to fostering a culture of belonging where everyone is valued for who they are and empowered to succeed. We support veterans and individuals with disabilities through our affirmative action program. CrowdStrike is committed to providing equal employment opportunity for all employees and applicants for employment. The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, ethnicity, religion, sex (including pregnancy or pregnancy-related medical conditions), sexual orientation, gender identity, marital or family status, veteran status, age, national origin, ancestry, physical disability (including HIV and AIDS), mental disability, medical condition, genetic information, membership or activity in a local human rights commission, status with regard to public assistance, or any other characteristic protected by law. We base all employment decisions, including recruitment, selection, training, compensation, benefits, discipline, promotions, transfers, lay-offs, return from lay-off, terminations and social/recreational programs, on valid job requirements. If you need assistance accessing or reviewing the information on this website or need help submitting an application for employment or requesting an accommodation, please contact us at recruiting@crowdstrike.com for further assistance.
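The SLI/SLO responsibility in this role can be made concrete with an error-budget calculation. A minimal sketch, assuming a simple availability SLO over a request count; the target and numbers are illustrative, not CrowdStrike's actual metrics:

```python
def error_budget_remaining(slo_target, total_requests, failed_requests):
    """Fraction of the error budget left for an availability SLO.

    slo_target: e.g. 0.999 means at most 0.1% of requests may fail
    over the measurement window.
    """
    allowed_failures = (1 - slo_target) * total_requests
    if allowed_failures <= 0:
        return 0.0
    return max(0.0, 1.0 - failed_requests / allowed_failures)

# A 99.9% SLO over 1M requests allows 1,000 failures; 250 failures
# leave 75% of the budget. Alerting can key off how fast this drains.
```

In practice the same arithmetic would run over SLIs exported from the Data and ML Platform's observability stack, with burn-rate alerts firing before the budget is exhausted.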
Posted 2 months ago
1.0 - 5.0 years
8 - 12 Lacs
Mumbai
Work from Office
Skills: Python, TensorFlow, PyTorch, Natural Language Processing (NLP), Computer Vision, AWS SageMaker, Machine Learning Model Deployment, Scikit-learn. Sr AI/ML Developer. Experience: 8-10 Years. Location: Thane / Vikhroli, Mumbai. About The Role: We are seeking an experienced AI/ML Developer with 8-10 years of hands-on experience in building and deploying machine learning models at scale. The ideal candidate will have a strong background in Python, PySpark, Hadoop, and Hive, along with a deep understanding of machine learning model building, analysis, and optimization. As part of our innovative AI/ML team, you will contribute to cutting-edge projects and collaborate with cross-functional teams to deliver impactful solutions. Key Responsibilities: Model Development: Design, build, and deploy machine learning models, utilizing advanced techniques to ensure optimal performance. Data Processing: Work with large-scale data processing frameworks such as PySpark and Hadoop to efficiently handle big data. Model Analysis and Optimization: Analyze model performance and fine-tune models to improve accuracy, scalability, and speed. Collaboration: Work closely with data scientists, analysts, and engineers to understand business requirements and integrate AI/ML solutions. Version Control: Utilize Git for version control to ensure proper management and documentation of model code and workflows. Project Management: Participate in sprint planning, track progress, and report on key milestones using JIRA. Notebook Workflows: Use Jupyter notebooks for interactive development and presentation of model outputs, insights, and results. TensorFlow: Implement and deploy deep learning models using TensorFlow, optimizing them for real-world applications. Key Skills: Programming: Strong proficiency in Python, with experience in data manipulation and libraries like NumPy, Pandas, and SciPy. Big Data Technologies: Hands-on experience with PySpark, Hadoop, and Hive for managing large datasets. 
Model Development: Expertise in machine learning model building, training, validation, and deployment using frameworks like TensorFlow, Scikit-learn, etc. Deep Learning: Familiarity with TensorFlow for building and optimizing deep learning models. Version Control and Collaboration: Proficiency in Git for source control and JIRA for project tracking. Problem-Solving: Strong analytical skills to troubleshoot, debug, and optimize models and workflows. Experience and Qualifications: Experience: 8-10 years in AI/ML development with significant exposure to machine learning and deep learning techniques. Education: Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Knowledge: Deep understanding of AI/ML algorithms, model evaluation techniques, and data manipulation. Preferred Qualifications: Hands-on experience with cloud platforms like AWS, GCP, or Azure. Familiarity with containerization tools like Docker and Kubernetes. Experience in deploying models into production environments.
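The model-evaluation skills listed above come down to a handful of confusion-matrix metrics. A dependency-free sketch for binary labels; in practice scikit-learn's `metrics` module provides these directly:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}
```

Reporting precision and recall separately matters when classes are imbalanced, which is the usual case for the fraud-like and anomaly-like problems such roles encounter.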
Posted 2 months ago
1.0 - 5.0 years
8 - 12 Lacs
Thane
Work from Office
Skills: Python, TensorFlow, PyTorch, Natural Language Processing (NLP), Computer Vision, AWS SageMaker, Machine Learning Model Deployment, Scikit-learn. Sr AI/ML Developer. Experience: 8-10 Years. Location: Thane / Vikhroli, Mumbai. About The Role: We are seeking an experienced AI/ML Developer with 8-10 years of hands-on experience in building and deploying machine learning models at scale. The ideal candidate will have a strong background in Python, PySpark, Hadoop, and Hive, along with a deep understanding of machine learning model building, analysis, and optimization. As part of our innovative AI/ML team, you will contribute to cutting-edge projects and collaborate with cross-functional teams to deliver impactful solutions. Key Responsibilities: Model Development: Design, build, and deploy machine learning models, utilizing advanced techniques to ensure optimal performance. Data Processing: Work with large-scale data processing frameworks such as PySpark and Hadoop to efficiently handle big data. Model Analysis and Optimization: Analyze model performance and fine-tune models to improve accuracy, scalability, and speed. Collaboration: Work closely with data scientists, analysts, and engineers to understand business requirements and integrate AI/ML solutions. Version Control: Utilize Git for version control to ensure proper management and documentation of model code and workflows. Project Management: Participate in sprint planning, track progress, and report on key milestones using JIRA. Notebook Workflows: Use Jupyter notebooks for interactive development and presentation of model outputs, insights, and results. TensorFlow: Implement and deploy deep learning models using TensorFlow, optimizing them for real-world applications. Key Skills: Programming: Strong proficiency in Python, with experience in data manipulation and libraries like NumPy, Pandas, and SciPy. Big Data Technologies: Hands-on experience with PySpark, Hadoop, and Hive for managing large datasets. 
Model Development: Expertise in machine learning model building, training, validation, and deployment using frameworks like TensorFlow, Scikit-learn, etc. Deep Learning: Familiarity with TensorFlow for building and optimizing deep learning models. Version Control and Collaboration: Proficiency in Git for source control and JIRA for project tracking. Problem-Solving: Strong analytical skills to troubleshoot, debug, and optimize models and workflows. Experience and Qualifications: Experience: 8-10 years in AI/ML development with significant exposure to machine learning and deep learning techniques. Education: Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Knowledge: Deep understanding of AI/ML algorithms, model evaluation techniques, and data manipulation. Preferred Qualifications: Hands-on experience with cloud platforms like AWS, GCP, or Azure. Familiarity with containerization tools like Docker and Kubernetes. Experience in deploying models into production environments.
Posted 2 months ago
6.0 - 9.0 years
27 - 42 Lacs
Chennai
Work from Office
Role: MLOps Engineer. Location: Kochi. Mode of Interview: In Person. Date: 14th June 2025 (Saturday). Keywords / Skillset: AWS SageMaker, Azure ML Studio, GCP Vertex AI; PySpark, Azure Databricks; MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline; Kubernetes, AKS, Terraform, FastAPI. Responsibilities: Model deployment, model monitoring, model retraining. Deployment pipeline, inference pipeline, monitoring pipeline, retraining pipeline. Drift detection: data drift, model drift. Experiment tracking. MLOps architecture. REST API publishing. Job Responsibilities: Research and implement MLOps tools, frameworks and platforms for our Data Science projects. Work on a backlog of activities to raise MLOps maturity in the organization. Proactively introduce a modern, agile and automated approach to Data Science. Conduct internal training and presentations about MLOps tools' benefits and usage. Required experience and qualifications: Wide experience with Kubernetes. Experience in operationalization of Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g. Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube). Good understanding of ML and AI concepts. Hands-on experience in ML model development. Proficiency in Python used both for ML and automation tasks. Good knowledge of Bash and the Unix command-line toolkit. Experience in CI/CD/CT pipeline implementation. Experience with cloud platforms, preferably AWS, would be an advantage.
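The drift-detection responsibilities above are commonly implemented with the Population Stability Index (PSI), which compares a production feature distribution against a training-time baseline. A stdlib-only sketch; the bin count and the 0.25 alert threshold are common conventions, not requirements of this posting:

```python
import math

def _fractions(values, breakpoints):
    """Fraction of values falling into each bin defined by breakpoints."""
    counts = [0] * (len(breakpoints) + 1)
    for v in values:
        counts[sum(1 for b in breakpoints if v > b)] += 1
    return [c / len(values) for c in counts]

def population_stability_index(expected, actual, n_bins=10):
    """PSI between a baseline sample (expected) and a live sample (actual)."""
    s = sorted(expected)
    # Breakpoints at equal-frequency quantiles of the baseline sample.
    breakpoints = [s[int(len(s) * i / n_bins)] for i in range(1, n_bins)]
    eps = 1e-4  # floor empty bins to avoid log(0)
    e = [max(f, eps) for f in _fractions(expected, breakpoints)]
    a = [max(f, eps) for f in _fractions(actual, breakpoints)]
    return sum((ai - ei) * math.log(ai / ei) for ai, ei in zip(a, e))
```

A common reading: PSI below 0.1 is stable, 0.1-0.25 is moderate shift, and above 0.25 is drift worth investigating or retraining on; a monitoring pipeline would compute this per feature on a schedule.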
Posted 2 months ago
6.0 - 9.0 years
27 - 42 Lacs
Chennai
Work from Office
Role: AI/ML Data Scientist. Location: Kochi. Mode of Interview: In Person. Date: 14th June 2025 (Saturday). Job Description: 1. Be a hands-on problem solver with a consultative approach, who can apply Machine Learning and Deep Learning algorithms to solve business challenges: a. Use knowledge of a wide variety of AI/ML techniques and algorithms to find which combinations of these techniques can best solve the problem. b. Improve model accuracy to deliver greater business impact. c. Estimate the business impact of deploying the model. 2. Work with the domain/customer teams to understand the business context and data dictionaries, and apply the relevant Deep Learning solution for the given business challenge. 3. Work with tools and scripts for sufficiently pre-processing the data and feature engineering for model development: Python / R / SQL / cloud data pipelines. 4. Design, develop and deploy Deep Learning models using TensorFlow / PyTorch. 5. Experience in using Deep Learning models with text, speech, image and video data: a. Design and develop NLP models for text classification, custom entity recognition, relationship extraction, text summarization, topic modeling, reasoning over knowledge graphs, and semantic search using NLP tools like spaCy and open-source TensorFlow, PyTorch, etc. b. Design and develop image recognition and video analysis models using Deep Learning algorithms and open-source tools like OpenCV. c. Knowledge of state-of-the-art Deep Learning algorithms. 6. Optimize and tune Deep Learning models for the best possible accuracy. 7. Use visualization tools/modules to explore and analyze outcomes and for model validation, e.g., using Power BI / Tableau. 8. Work with application teams in deploying models on the cloud as a service or on-prem: a. Deployment of models in a test/control framework for tracking. b. Build CI/CD pipelines for ML model deployment. 9. Integrate AI and ML models with other applications using REST APIs and other connector technologies. 10. 
Constantly upskill and stay updated with the latest techniques and best practices. Write white papers and create demonstrable assets to summarize the AI/ML work and its impact. Technology / Subject Matter Expertise: Sufficient expertise in machine learning and the mathematical and statistical sciences. Use of versioning and collaboration tools like Git / GitHub. Good understanding of the landscape of AI solutions: cloud, GPU-based compute, data security and privacy, API gateways, microservices-based architecture, big data ingestion, storage and processing, CUDA programming. Develop prototype-level ideas into a solution that can scale to industrial-grade strength. Ability to quantify and estimate the impact of ML models. Soft Skills Profile: Curiosity to think in fresh and unique ways with the intent of breaking new ground. Must have the ability to share, explain and "sell" their thoughts, processes, ideas and opinions, even outside their own span of control. Ability to think ahead and anticipate the needs for solving the problem will be important. Ability to communicate key messages effectively, and articulate strong opinions in large forums. Desirable Experience: Keen contributor to open-source communities, and communities like Kaggle. Ability to process huge amounts of data using PySpark/Hadoop. Development and application of Reinforcement Learning. Knowledge of optimization/genetic algorithms. Operationalizing Deep Learning models for a customer and understanding the nuances of scaling such models in real scenarios. Understanding of stream data processing, RPA, edge computing, AR/VR, etc. Appreciation of digital ethics and data privacy will be important. Experience of working with AI and cognitive services platforms like Azure ML, IBM Watson, AWS SageMaker, and Google Cloud will be a big plus. Experience with platforms like DataRobot, CognitiveScale, H2O.ai, etc. will be a big plus.
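The semantic search named in point 5a is typically built on embedding similarity. The stdlib sketch below shows the same ranking idea using bag-of-words vectors and cosine similarity, as a toy stand-in for the spaCy or transformer embeddings the posting actually refers to:

```python
import math
from collections import Counter

def vectorize(text):
    """Toy bag-of-words 'embedding': token -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, docs):
    """Return the document most similar to the query."""
    q = vectorize(query)
    return max(docs, key=lambda d: cosine(q, vectorize(d)))
```

A production system would swap `vectorize` for dense sentence embeddings and `max` over a list for an approximate-nearest-neighbour index, but the query-vector-similarity-ranking loop is the same.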
Posted 2 months ago
3.0 - 7.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Certified AWS Consultant with hands-on experience in AI platform development projects. Experience in setting up, maintaining, and developing cloud infrastructure. Proficiency with Infrastructure as Code tools such as CloudFormation and/or Terraform. Strong knowledge of AWS services including SageMaker, S3, EC2, etc. In-depth proficiency in at least one high-level programming language (Python, Java, etc.). Good understanding of data analytics use cases and AI/ML technologies. Primary Skills: SageMaker, S3, EC2; CloudFormation / Terraform; Java/Python; AI/ML.
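Infrastructure as Code in this stack can be as simple as emitting a CloudFormation template programmatically. A minimal sketch that renders a one-bucket template to JSON; the logical resource name is hypothetical, and a real stack for SageMaker workloads would add IAM roles, encryption, and lifecycle rules:

```python
import json

def make_template(bucket_name):
    """Returns a minimal CloudFormation template as a Python dict."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "S3 bucket for AI platform artifacts (illustrative)",
        "Resources": {
            "ArtifactBucket": {  # hypothetical logical ID
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            }
        },
    }

# json.dumps(make_template("my-ai-artifacts")) yields a template that
# can be checked into version control and deployed with
# `aws cloudformation deploy`.
```

Generating templates from code keeps bucket naming and tagging conventions in one reviewed place rather than scattered across hand-edited files.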
Posted 2 months ago
5.0 - 9.0 years
5 - 9 Lacs
Udupi, Karnataka, India
On-site
As part of our digital transformation efforts, we are building an advanced Intelligent Virtual Assistant (IVA) to enhance customer interactions, and we are seeking a talented and motivated Machine Learning (ML) / Artificial Intelligence (AI) Engineer to join our dynamic team full time to support this effort. Responsibilities: Design, develop, and implement AI-driven chatbots and IVAs to streamline customer interactions. Work on conversational AI platforms to create a seamless customer experience, with a focus on natural language processing (NLP), intent recognition, and sentiment analysis. Collaborate with cross-functional teams, including product managers and customer support, to translate business requirements into technical solutions. Build, train, and fine-tune machine learning models to enhance IVA capabilities and ensure high accuracy in responses. Continuously optimize models based on user feedback and data-driven insights to improve performance. Integrate IVA/chat solutions with internal systems such as CRM and backend databases. Ensure scalability, robustness, and security of IVA/chat solutions in compliance with industry standards. Participate in code reviews, testing, and deployment of AI solutions to ensure high quality and reliability. Requirements: Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field. 5+ years of experience in developing IVAs/chatbots, conversational AI, or similar AI-driven systems using AWS services. Expert in using Amazon Lex, Amazon Polly, AWS Lambda, and Amazon Connect. AWS Bedrock experience, along with SageMaker, will be an added advantage. Solid understanding of API integration and experience working with RESTful services. Strong problem-solving skills, attention to detail, and ability to work independently and in a team. Excellent communication skills, both written and verbal. Experience in financial services or fintech projects. 
Knowledge of data security best practices and compliance requirements in the financial sector. This role requires significant overlap with CST time zone to ensure real-time collaboration with the team and stakeholders based in the U.S. Flexibility is key, and applicants should be available for meetings and work during U.S. business hours.
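Intent recognition, which this role lists as a focus, is handled inside Amazon Lex by trained NLU models. The stdlib sketch below only illustrates the concept with a keyword-overlap matcher; the intent names and keyword sets are invented for this example:

```python
# Hypothetical intents for a banking IVA (illustrative only).
INTENT_KEYWORDS = {
    "check_balance": {"balance", "account"},
    "transfer_funds": {"transfer", "send", "payment"},
    "agent_handoff": {"agent", "human", "representative"},
}

def recognize_intent(utterance, fallback="fallback"):
    """Picks the intent whose keyword set best overlaps the utterance."""
    tokens = set(utterance.lower().split())
    scores = {intent: len(tokens & kws) for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else fallback
```

A Lex bot replaces the keyword table with sample utterances per intent and a trained classifier, but the contract is the same: utterance in, intent plus fallback out, with the fallback path routing to an agent handoff.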
Posted 2 months ago
5.0 - 9.0 years
5 - 9 Lacs
Navi Mumbai, Maharashtra, India
On-site
As part of our digital transformation efforts, we are building an advanced Intelligent Virtual Assistant (IVA) to enhance customer interactions, and we are seeking a talented and motivated Machine Learning (ML) / Artificial Intelligence (AI) Engineer to join our dynamic team full time to support this effort. Responsibilities: Design, develop, and implement AI-driven chatbots and IVAs to streamline customer interactions. Work on conversational AI platforms to create a seamless customer experience, with a focus on natural language processing (NLP), intent recognition, and sentiment analysis. Collaborate with cross-functional teams, including product managers and customer support, to translate business requirements into technical solutions. Build, train, and fine-tune machine learning models to enhance IVA capabilities and ensure high accuracy in responses. Continuously optimize models based on user feedback and data-driven insights to improve performance. Integrate IVA/chat solutions with internal systems such as CRM and backend databases. Ensure scalability, robustness, and security of IVA/chat solutions in compliance with industry standards. Participate in code reviews, testing, and deployment of AI solutions to ensure high quality and reliability. Requirements: Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field. 5+ years of experience in developing IVAs/chatbots, conversational AI, or similar AI-driven systems using AWS services. Expert in using Amazon Lex, Amazon Polly, AWS Lambda, and Amazon Connect. AWS Bedrock experience, along with SageMaker, will be an added advantage. Solid understanding of API integration and experience working with RESTful services. Strong problem-solving skills, attention to detail, and ability to work independently and in a team. Excellent communication skills, both written and verbal. Experience in financial services or fintech projects. 
Knowledge of data security best practices and compliance requirements in the financial sector. This role requires significant overlap with CST time zone to ensure real-time collaboration with the team and stakeholders based in the U.S. Flexibility is key, and applicants should be available for meetings and work during U.S. business hours.
Posted 2 months ago
0.0 - 4.0 years
0 - 4 Lacs
Navi Mumbai, Maharashtra, India
On-site
As an MLOps Engineer, you will be responsible for building and optimizing our machine learning infrastructure. You will leverage AWS services, containerization, and automation to streamline the deployment and monitoring of ML models. Your expertise in MLOps best practices, combined with your experience in managing large ML operations, will ensure our models are effectively deployed, managed, and maintained in production environments. Responsibilities: Machine Learning Operations (MLOps) & Deployment: Build, deploy, and manage ML models in production using AWS SageMaker, AWS Lambda, and other relevant AWS services. Develop automated pipelines for model training, validation, deployment, and monitoring to ensure high availability and low latency. Implement best practices for CI/CD in ML model deployment and manage versioning for seamless updates. Infrastructure Development & Optimization: Design and maintain scalable, efficient, and secure infrastructure for machine learning operations using AWS services (e.g., EC2, S3, SageMaker, ECR, ECS/EKS). Leverage containerization (Docker, Kubernetes) to deploy models as microservices, optimizing for scalability and resilience. Manage infrastructure as code (IaC) using tools like Terraform, AWS CloudFormation, or similar, ensuring reliable and reproducible environments. Model Monitoring & Maintenance: Set up monitoring, logging, and alerting for deployed models to track model performance, detect anomalies, and ensure uptime. Implement feedback loops to enable automated model retraining based on new data, ensuring models remain accurate and relevant over time. Troubleshoot and resolve issues in the ML pipeline and infrastructure to maintain seamless operations. AWS Connect & Integration: Integrate machine learning models with AWS Connect or similar services for customer interaction workflows, providing real-time insights and automation. 
- Work closely with cross-functional teams to ensure models can be easily accessed and utilized by various applications and stakeholders.
Collaboration & Stakeholder Engagement:
- Collaborate with data scientists, engineers, and DevOps teams to ensure alignment on project goals, data requirements, and model deployment standards.
- Provide technical guidance on MLOps best practices and educate team members on efficient ML deployment and monitoring processes.
- Actively participate in project planning, architecture decisions, and roadmapping sessions to improve our ML infrastructure.
Security & Compliance:
- Implement data security and compliance measures, ensuring all deployed models meet organizational and regulatory standards.
- Apply appropriate data encryption and manage access controls to safeguard sensitive information used in ML models.
Requirements:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Experience: 5+ years as an MLOps Engineer, DevOps Engineer, or in a similar role focused on machine learning deployment and operations.
- Strong expertise in AWS services, particularly SageMaker, EC2, S3, Lambda, and ECR/ECS/EKS.
- Proficiency in Python, including ML-focused libraries like scikit-learn and data manipulation libraries like pandas.
- Hands-on experience with containerization tools such as Docker and Kubernetes.
- Familiarity with infrastructure-as-code (IaC) tools such as Terraform or AWS CloudFormation.
- Experience with CI/CD pipelines, Git, and version control for ML model deployment.
- MLOps & Model Management: Proven experience managing large ML projects, including model deployment, monitoring, and maintenance.
- AWS Connect & Integration: Understanding of AWS Connect for customer interactions and integration with ML models.
- Soft Skills: Strong communication and collaboration skills, with the ability to explain technical concepts to non-technical stakeholders.
- Experience with data streaming and message queues (e.g., Kafka, AWS Kinesis).
- Familiarity with monitoring tools like Prometheus, Grafana, or CloudWatch for tracking model performance.
- Knowledge of data governance, security, and compliance requirements related to ML data handling.
- Certification in AWS or relevant cloud platforms.
Work Schedule: This role requires significant overlap with the CST time zone to ensure real-time collaboration with the team and stakeholders based in the U.S. Flexibility is key, and applicants should be available for meetings and work during U.S. business hours.
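The monitoring and alerting responsibilities above are usually wired into CloudWatch, Prometheus, or Grafana; as a minimal, dependency-free sketch of the underlying idea (a sliding window over recent requests with alert thresholds), consider the following. All class names and thresholds here are illustrative, not part of any actual stack:

```python
from collections import deque


class ModelMonitor:
    """Sliding-window monitor for a deployed model endpoint.

    Tracks recent request latencies and errors, and raises named
    alerts when configurable thresholds are exceeded. A real system
    would publish these metrics to CloudWatch/Prometheus instead.
    """

    def __init__(self, window=100, max_error_rate=0.05, max_p95_latency_ms=500.0):
        self.latencies = deque(maxlen=window)
        self.errors = deque(maxlen=window)
        self.max_error_rate = max_error_rate
        self.max_p95_latency_ms = max_p95_latency_ms

    def record(self, latency_ms, ok):
        """Record one inference request (its latency and success flag)."""
        self.latencies.append(latency_ms)
        self.errors.append(0 if ok else 1)

    def error_rate(self):
        return sum(self.errors) / len(self.errors) if self.errors else 0.0

    def p95_latency(self):
        ordered = sorted(self.latencies)
        return ordered[int(0.95 * (len(ordered) - 1))] if ordered else 0.0

    def alerts(self):
        """Return the list of threshold breaches in the current window."""
        fired = []
        if self.error_rate() > self.max_error_rate:
            fired.append("error_rate")
        if self.p95_latency() > self.max_p95_latency_ms:
            fired.append("latency")
        return fired
```

In practice the `alerts()` check would run on a schedule and page an on-call engineer or trigger an automated rollback; the sliding `deque` keeps memory bounded regardless of traffic.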
Posted 2 months ago
5.0 - 10.0 years
20 - 35 Lacs
Hyderabad
Work from Office
Key Responsibilities:
- Design and develop machine learning and deep learning models for tasks such as text classification, entity recognition, sentiment analysis, and document intelligence.
- Build and optimize NLP pipelines using models like BERT, GPT, LayoutLM, and Transformer architectures.
- Implement and experiment with Generative AI techniques using frameworks like Hugging Face, OpenAI APIs, and PyTorch/TensorFlow.
- Perform data collection, web scraping, data cleaning, and feature engineering for structured and unstructured data sources.
- Deploy ML models using Docker and Kubernetes, and implement CI/CD pipelines for scalable and automated workflows.
- Use cloud services (e.g., GCP, Azure AI) for model hosting, data storage, and compute resources.
- Collaborate with cross-functional teams to integrate ML models into production-grade applications.
- Apply MLOps practices including model versioning, monitoring, retraining pipelines, and reproducibility.
Technical Skills:
- Languages & Libraries: Python, Pandas, NumPy, Scikit-learn, TensorFlow, Keras, PyTorch, OpenCV, Seaborn, XGBoost, NLTK, Hugging Face, BeautifulSoup, Selenium, Scrapy
- Modeling & NLP: Logistic Regression, Random Forest, SVM, CNN, RNN, Transformers, BERT, GPT, LLMs
- Tools & Platforms: Git, Docker, Kubernetes, CI/CD, Azure AI Services, GCP, Vertex AI
- Concepts: Machine Learning, Deep Learning, MLOps, Generative AI, Text Analytics, Predictive Analytics
- Databases & Querying: Basics of SQL
- Other Skills: Data Visualization (Matplotlib, Seaborn), Model Optimization, Version Control
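The text-classification and sentiment tasks above would normally be handled with BERT-style models from the listed frameworks; as a dependency-free illustration of the underlying loop (vectorize text, compare to labeled examples, predict), here is a 1-nearest-neighbour bag-of-words sketch. The function names and training examples are hypothetical:

```python
import math
from collections import Counter


def bow(text):
    """Lowercase bag-of-words vector represented as a Counter."""
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two sparse Counter vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def classify(text, labeled_examples):
    """Return the label of the most similar training example (1-NN)."""
    vec = bow(text)
    best_label, best_sim = None, -1.0
    for label, example in labeled_examples:
        sim = cosine(vec, bow(example))
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label
```

A production pipeline would replace `bow` with learned embeddings and 1-NN with a fine-tuned classifier, but the interface (text in, label out) is the same.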
Posted 2 months ago
3.0 - 5.0 years
5 - 7 Lacs
Pune
Work from Office
Role Overview
Join our Pune AI Center of Excellence to drive software and product development in the AI space. As an AI/ML Engineer, you'll build and ship core components of our AI products, owning end-to-end RAG pipelines, persona-driven fine-tuning, and scalable inference systems that power next-generation user experiences.
Key Responsibilities:
Model Fine-Tuning & Persona Design:
- Adapt and fine-tune open-source large language models (LLMs) (e.g., CodeLlama, StarCoder) to specific product domains.
- Define and implement "personas" (tone, knowledge scope, guardrails) at inference time to align with product requirements.
RAG Architecture & Vector Search:
- Build retrieval-augmented generation systems: ingest documents, compute embeddings, and serve with FAISS, Pinecone, or ChromaDB.
- Design semantic chunking strategies and optimize context-window management for product scalability.
Software Pipeline & Product Integration:
- Develop production-grade Python data pipelines (ETL) for real-time vector indexing and updates.
- Containerize model services in Docker/Kubernetes and integrate into CI/CD workflows for rapid iteration.
Inference Optimization & Monitoring:
- Quantize and benchmark models for CPU/GPU efficiency; implement dynamic batching and caching to meet product SLAs.
- Instrument monitoring dashboards (Prometheus/Grafana) to track latency, throughput, error rates, and cost.
Prompt Engineering & UX Evaluation:
- Craft, test, and iterate prompts for chatbots, summarization, and content extraction within the product UI.
- Define and track evaluation metrics (ROUGE, BLEU, human feedback) to continuously improve the product's AI outputs.
Must-Have Skills:
- ML/AI Experience: 3-4 years in machine learning and generative AI, including 18 months on LLM-based products.
- Programming & Frameworks: Python, PyTorch (or TensorFlow), Hugging Face Transformers.
- RAG & Embeddings: Hands-on with FAISS, Pinecone, or ChromaDB and semantic chunking.
- Fine-Tuning & Quantization: Experience with LoRA/QLoRA, 4-bit/8-bit quantization, and model context protocol (MCP).
- Prompt & Persona Engineering: Deep expertise in prompt-tuning and persona specification for product use cases.
- Deployment & Orchestration: Docker, Kubernetes fundamentals, CI/CD pipelines, and GPU setup.
Nice-to-Have:
- Multi-modal AI combining text, images, or tabular data.
- Agentic AI systems with reasoning and planning loops.
- Knowledge-graph integration for enhanced retrieval.
- Cloud AI services (AWS SageMaker, GCP Vertex AI, or Azure Machine Learning)
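The RAG pipeline described in this role (ingest documents, chunk them, embed, retrieve) would be built on FAISS/Pinecone/ChromaDB with learned embeddings; the following is a minimal sketch of the same flow using overlapping word windows as "chunks" and bag-of-words counts as stand-in "embeddings". Every function name and parameter here is illustrative:

```python
import math
import re
from collections import Counter


def chunk(text, max_words=40, overlap=10):
    """Split text into overlapping word windows, a crude stand-in for
    the semantic chunking strategies a real pipeline would use."""
    words = text.split()
    step = max_words - overlap
    chunks = []
    for i in range(0, len(words), step):
        chunks.append(" ".join(words[i:i + max_words]))
        if i + max_words >= len(words):
            break
    return chunks


def embed(text):
    """Toy 'embedding': a sparse token-count vector."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(query, chunks, k=2):
    """Return the top-k chunks by similarity to the query; their text
    would then be packed into the LLM's context window."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

Swapping `embed` for a sentence-transformer and the `sorted` scan for a FAISS index turns this toy into the production shape; the chunk overlap exists so that an answer spanning a chunk boundary is still retrievable.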
Posted 2 months ago
10.0 - 15.0 years
25 - 40 Lacs
Bengaluru
Work from Office
Job Description
About Oracle APAC ISV Business
The Oracle APAC ISV team is one of the fastest-growing and highest-performing business units in APAC. We are a prime team that serves a broad range of customers across the APAC region. ISVs are at the forefront of today's fastest-growing industries. Much of this growth stems from enterprises shifting toward cloud-native ISV SaaS solutions. This transformation drives ISVs to evolve from traditional software vendors to SaaS service providers. Industry analysts predict exponential growth in the ISV market over the coming years, making it a key growth pillar for every hyperscaler. Our cloud engineering team works on pitch-to-production scenarios, bringing ISV solutions onto Oracle Cloud (OCI) with the aim of providing a cloud platform for running their business that is more performant, more flexible, more secure, compliant with open-source technologies, and offers multiple innovation options while remaining highly cost-effective. The team walks the path alongside our customers and is regarded by them as a trusted techno-business advisor.
Required Skills/Experience
Your versatility and hands-on expertise will be your greatest asset as you deliver on time-bound implementation work items and empower our customers to harness the full power of OCI. We also look for:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Relevant certifications in AI services on OCI and/or other cloud platforms (AWS, Azure, Google Cloud).
- 8+ years of professional work experience.
- Proven experience with end-to-end AI solution implementation, from data integration to model deployment and optimization.
- Experience in the design, build, and deployment of end-to-end AI solutions with a focus on LLMs and RAG workflows.
- Proficiency in frameworks such as TensorFlow, PyTorch, scikit-learn, and Keras, and programming languages such as Python, R, or SQL.
- Experience with data wrangling, data pipelines, and data integration tools.
- Hands-on experience with LLM frameworks and plugins, such as LangChain, LlamaIndex, VectorStores and Retrievers, LLM Cache, LLMOps (MLflow), LMQL, Guidance, etc.
- Knowledge of containerization technologies such as Docker and orchestration tools like Kubernetes to scale AI models.
- Expertise in analytics platforms like Power BI, Tableau, or other business intelligence tools.
- Experience working with cloud platforms, particularly for AI and analytics workloads; familiarity with cloud-based AI services like OCI AI, AWS SageMaker, etc.
- Experience building and optimizing data pipelines for large-scale AI/ML applications using tools like Apache Kafka, Apache Spark, Apache Airflow, or similar.
- Excellent communication skills, with the ability to clearly explain complex AI and analytics concepts to non-technical stakeholders.
- Proven ability to work with diverse teams and manage client expectations.
- Solid experience managing multiple implementation projects simultaneously while maintaining high-quality standards.
- Ability to develop and manage project timelines, resources, and budgets.
Career Level - IC4
Responsibilities
What You'll Do
As a solution specialist, you will work closely with our cloud architects and key ISV stakeholders to propagate awareness and drive implementation of OCI-native as well as open-source cloud-native technologies by ISV customers.
- Design, implement, and optimize AI and analytics solutions using OCI AI & Analytics Services that enable advanced analytics and AI use cases.
- Assist clients to architect and deploy AI systems that integrate seamlessly with existing client infrastructure, ensuring scalability, performance, and security.
- Support the deployment of machine learning models, including model training, testing, and fine-tuning.
- Ensure scalability, robustness, and performance of AI models in production environments.
- Design, build, and deploy end-to-end AI solutions with a focus on LLMs and agentic AI workflows (including proactive, reactive, RAG, etc.).
- Help customers migrate from other cloud vendors' AI platforms or bring their own AI/ML models, leveraging OCI AI services and the Data Science platform.
- Design, propose, and implement solutions on OCI that help customers move seamlessly when adopting OCI for their AI requirements.
- Provide direction and specialist knowledge to clients in developing AI chatbots using ODA (Oracle Digital Assistant), OIC (Oracle Integration Cloud), and OCI GenAI services.
- Configure, integrate, and customize analytics platforms and dashboards on OCI.
- Implement data pipelines and ensure seamless integration with existing IT infrastructure.
- Drive discussions on OCI GenAI and the AI Platform across the region and accelerate implementation of OCI AI services into production.
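The requirements above mention an "LLM Cache" among the LLM plugins; the idea is simply to skip a model call when a semantically identical prompt repeats. A minimal sketch, assuming nothing beyond the standard library (the class name, normalization rule, and LRU capacity are all illustrative choices, not any vendor's API):

```python
import hashlib
from collections import OrderedDict


class PromptCache:
    """LRU cache keyed on a normalized prompt hash.

    get_or_call() returns a cached answer when the same prompt (modulo
    case and whitespace) was seen before, and otherwise invokes the
    supplied model function and stores its result.
    """

    def __init__(self, capacity=128):
        self.capacity = capacity
        self._store = OrderedDict()
        self.hits = 0
        self.misses = 0

    @staticmethod
    def _key(prompt):
        normalized = " ".join(prompt.lower().split())
        return hashlib.sha256(normalized.encode()).hexdigest()

    def get_or_call(self, prompt, model_fn):
        key = self._key(prompt)
        if key in self._store:
            self.hits += 1
            self._store.move_to_end(key)  # mark as most recently used
            return self._store[key]
        self.misses += 1
        answer = model_fn(prompt)
        self._store[key] = answer
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used
        return answer
```

Real LLM caches (e.g., the semantic caches in the LangChain ecosystem) match on embedding similarity rather than exact normalized text, but the call-site contract is the same: look up first, call the model only on a miss.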
Posted 2 months ago
6.0 - 9.0 years
27 - 42 Lacs
Chennai
Work from Office
Role: MLOps Engineer
Location: Chennai - CKC
Mode of Interview: In Person
Date: 7th June 2025 (Saturday)
Keywords/Skillset: AWS SageMaker, Azure ML Studio, GCP Vertex AI; PySpark, Azure Databricks; MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline; Kubernetes, AKS, Terraform, FastAPI
Responsibilities:
- Model deployment, model monitoring, model retraining
- Deployment pipeline, inference pipeline, monitoring pipeline, retraining pipeline
- Drift detection: data drift, model drift
- Experiment tracking
- MLOps architecture
- REST API publishing
Job Responsibilities:
- Research and implement MLOps tools, frameworks, and platforms for our Data Science projects.
- Work on a backlog of activities to raise MLOps maturity in the organization.
- Proactively introduce a modern, agile, and automated approach to Data Science.
- Conduct internal training and presentations about MLOps tools' benefits and usage.
Required experience and qualifications:
- Wide experience with Kubernetes.
- Experience in operationalization of Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g., Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube).
- Good understanding of ML and AI concepts; hands-on experience in ML model development.
- Proficiency in Python for both ML and automation tasks.
- Good knowledge of Bash and the Unix command-line toolkit.
- Experience in CI/CD/CT pipeline implementation.
- Experience with cloud platforms, preferably AWS, would be an advantage.
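The drift-detection responsibility above (comparing a production data distribution against the training distribution) is often implemented with the Population Stability Index. A minimal stdlib sketch; the 0.2 alarm threshold is a common rule of thumb, not a standard, and the binning scheme here is deliberately simple:

```python
import math


def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample ('expected',
    e.g. training data) and a production sample ('actual').

    Values near 0 mean the distributions match; > 0.2 is a commonly
    used alarm threshold for data drift.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a constant feature

    def proportions(values):
        counts = [0] * bins
        for v in values:
            idx = min(max(int((v - lo) / width), 0), bins - 1)  # clamp outliers
            counts[idx] += 1
        total = len(values)
        # floor each proportion to avoid log(0) on empty bins
        return [max(c / total, 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In a monitoring pipeline this check would run per feature on each scoring batch, with a breach feeding the retraining pipeline listed in the responsibilities.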
Posted 2 months ago