7.0 - 12.0 years
20 - 27 Lacs
Hyderabad
Hybrid
7+ years of professional backend web development experience with Python. Experience with AI and RAG. Experience with DevOps & IaC tools such as Terraform and Jenkins. Experience with MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow.
Posted 4 weeks ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
The role of Data Scientist - Clinical Data Extraction & AI Integration in our healthcare technology team requires an experienced individual with 3-6 years of experience. As a Data Scientist in this role, you will be primarily focused on medical document processing and data extraction systems. You will have the opportunity to work with advanced AI technologies to create solutions that enhance the extraction of crucial information from clinical documents, thereby improving healthcare data workflows and patient care outcomes.

Your key responsibilities will include designing and implementing statistical models for medical data quality assessment and developing predictive algorithms for encounter classification and validation. You will also be responsible for building machine learning pipelines for document pattern recognition, creating data-driven insights from clinical document structures, and implementing feature engineering for medical terminology extraction. Furthermore, you will apply natural language processing (NLP) techniques to clinical text, develop statistical validation frameworks for extracted medical data, and build anomaly detection systems for medical document processing. Additionally, you will create predictive models for discharge date estimation and encounter duration, and implement clustering algorithms for provider and encounter classification.

In terms of AI & LLM Integration, you will be expected to integrate and optimize Large Language Models via AWS Bedrock and API services, design and refine AI prompts for clinical content extraction with high accuracy, and implement fallback logic and error handling for AI-powered extraction systems. You will also develop pattern matching algorithms for medical terminology and create validation layers for AI-extracted medical information. Expertise in the healthcare domain is crucial for this role.
You will work closely with medical document structures, implement healthcare-specific validation rules, handle medical terminology extraction, and conduct clinical context analysis. Ensuring HIPAA compliance and adhering to data security best practices will also be part of your responsibilities.

Proficiency in programming languages such as Python 3.8+, R, and SQL, and with data formats like JSON, along with familiarity with data science tools like pandas, NumPy, SciPy, scikit-learn, spaCy, and NLTK is required. Experience with ML frameworks including TensorFlow, PyTorch, and Hugging Face Transformers, and visualization tools like Matplotlib, Seaborn, Plotly, Tableau, and Power BI is desirable. Knowledge of AI platforms such as AWS Bedrock, Anthropic Claude, and OpenAI APIs, and experience with cloud services like AWS (SageMaker, S3, Lambda, Bedrock) will be advantageous. Familiarity with research tools like Jupyter notebooks, Git, Docker, and MLflow is also beneficial for this role.
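The pattern matching and validation layers described above can be illustrated with a small sketch. The patterns, field names, and rules here are hypothetical stand-ins, not the team's actual extraction logic:

```python
import re
from datetime import datetime

# Hypothetical patterns for common clinical fields (illustrative only).
DOSAGE_RE = re.compile(r"\b(\d+(?:\.\d+)?)\s*(mg|mcg|ml)\b", re.IGNORECASE)
DATE_RE = re.compile(r"\d{4}-\d{2}-\d{2}")

def extract_dosages(text):
    """Return (amount, unit) pairs found in free clinical text."""
    return [(float(m.group(1)), m.group(2).lower()) for m in DOSAGE_RE.finditer(text)]

def validate_extraction(record):
    """Validation layer for AI-extracted fields: collect rule violations
    instead of silently trusting model output."""
    errors = []
    admit = record.get("admit_date")
    discharge = record.get("discharge_date")
    for key, value in (("admit_date", admit), ("discharge_date", discharge)):
        if value and not DATE_RE.fullmatch(value):
            errors.append(f"{key}: not ISO formatted")
    if admit and discharge and not errors:
        if datetime.fromisoformat(discharge) < datetime.fromisoformat(admit):
            errors.append("discharge_date precedes admit_date")
    return errors
```

Records that fail validation would be routed to the fallback path (re-prompting or human review) rather than written downstream.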
Posted 1 month ago
8.0 - 12.0 years
0 Lacs
pune, maharashtra
On-site
You will be working with HCL Software, a product development division of HCL Tech, to deliver software solutions that cater to the transformative needs of clients worldwide. The software developed by HCL Software spans the AI, Automation, Data & Analytics, Security, and Cloud domains, receiving accolades for its innovation and quality.

Your primary focus will be on the Unica+ Marketing Platform, a product that empowers clients to execute precise and high-performance marketing campaigns across various channels such as social media, AdTech platforms, mobile applications, and websites. This platform, driven by data and AI, enables clients to create hyper-personalized offers and messages for customer acquisition, product awareness, and retention.

As a Senior & Lead Python Developer specializing in Data Science and AI/ML, you are expected to leverage your 8+ years of experience in the field to deliver AI-driven marketing campaigns effectively. Your responsibilities will include Python programming, statistical analysis and modeling, data cleaning and preprocessing, SQL and database management, exploratory data analysis, machine learning algorithms, deep learning frameworks, model evaluation and optimization, and deployment of machine learning models.

To excel in this role, you must possess a minimum of 8-12 years of Python development experience, with at least 4 years dedicated to data science and machine learning. Additionally, familiarity with Customer Data Platforms (CDP) like Treasure Data, Epsilon, Tealium, Adobe, Salesforce, and AWS SageMaker will be advantageous. Proficiency in integration tools and frameworks such as Postman, Swagger, and API Gateways is desired, along with expertise in REST, JSON, XML, and SOAP. A degree in Computer Science or IT is a prerequisite for this position. Excellent communication and interpersonal skills are essential, as you will be collaborating within an agile team environment. Your ability to work effectively with others and apply agile methodologies will be crucial for success.

The role may require approximately 30% travel, and the preferred location is Pune, India. If you meet the qualifications and possess the necessary skills, we invite you to consider joining our dynamic team at HCL Software to contribute to cutting-edge software solutions and drive innovation in the field of data science and AI/ML.
Posted 1 month ago
18.0 - 22.0 years
0 Lacs
haryana
On-site
At EY, you'll have the opportunity to shape a career that aligns with your unique strengths and aspirations, supported by a global network, inclusive environment, and cutting-edge technology. Your voice and perspective are valued as we strive to enhance EY's capabilities and create a more inclusive working world.

Join the EY Parthenon team as the Artificial Intelligence (AI) and Generative AI (GenAI) Leader. This dynamic team focuses on delivering innovative client solutions across various industries, leveraging digital and AI technologies to drive transformation and growth.

As the Executive Director of AI & GenAI at EYP, your role involves spearheading the integration of advanced AI solutions to address complex client challenges. Your responsibilities include collaborating with regional teams to identify AI opportunities, design tailored proposals, and lead client workshops to develop AI strategies aligned with business outcomes. Key responsibilities also include architecting end-to-end AI solutions, driving cross-sector innovation, ensuring ethical AI practices, and contributing to AI trends and thought leadership initiatives.

To excel in this role, you should possess technical expertise in the AI/GenAI lifecycle, proficiency in Python and AI frameworks, consulting acumen, and strong leadership skills. Qualifications for this position include significant experience in AI/data science projects, familiarity with the Azure Cloud Framework, and expertise in statistical techniques and machine learning algorithms. Preferred qualifications include a PhD/MS/MTech/BTech in Computer Science or a related field, research experience in AI applications, and strategic thinking abilities.

Join us to lead AI innovation for Fortune 500 clients, collaborate with multidisciplinary experts, and accelerate your career in a culture of entrepreneurship and continuous learning.
EY Global Delivery Services (GDS) offers a diverse and inclusive environment where you can collaborate with global teams and work on impactful projects across various business disciplines. You'll have access to continuous learning opportunities, transformative leadership resources, and a supportive culture that values individual contributions and fosters growth. EY is committed to building a better working world by creating long-term value for clients, promoting diversity and trust, and addressing complex global challenges through innovative solutions. Join us to be part of a team that asks better questions to find new answers and make a positive impact on the world.
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
hyderabad, telangana
On-site
As an Agentic Thinker and visionary Generative AI Subject Matter Expert (SME) at UBS in Hyderabad, you will lead the design, development, and deployment of cutting-edge generative AI solutions to solve complex business challenges. UBS emphasizes innovation and agility to enhance responsiveness, adaptability, and overall innovation in the workplace.

In this role, you will leverage large language models (LLMs), diffusion models, and other generative architectures to drive innovation and act as a technical authority for generative AI. Your responsibilities will include architecting, training, and fine-tuning generative AI models for various applications such as content generation, conversational AI, code automation, and data synthesis. Additionally, you will lead research and development initiatives to enhance model performance, efficiency, and alignment with business objectives.

You will collaborate with product, engineering, and business teams to integrate generative AI into workflows and translate business requirements into technical specifications for generative AI use cases. Moreover, you will optimize generative models for latency, cost, and scalability, deploy models on cloud platforms and edge devices, and implement safeguards to mitigate risks such as bias and misinformation while ensuring compliance with data privacy regulations and ethical AI guidelines.

As part of the Mainframe & Midrange Stream team in India, you will work on the Mainframe and Midrange Gen AI journey, aligning with UBS AI elements and the Emerging Technology and Modernization Team. Your expertise should include a master's or PhD in Computer Science, Machine Learning, or a related field, along with at least 5 years of experience in AI/ML development with a focus on generative AI.
You should have a proven track record of deploying generative AI solutions in production environments, deep expertise in generative architectures such as GPT and Transformers, proficiency in Python, PyTorch/TensorFlow, and cloud platforms, as well as experience with NLP tools and vector databases. Preferred skills include knowledge of multimodal AI, reinforcement learning, and familiarity with AI ethics frameworks.

UBS, as the world's largest and truly global wealth manager, operates through four business divisions and has a presence in over 50 countries. If you are passionate about driving innovation, solving complex business challenges with generative AI, and being part of a diverse and inclusive culture that values collaboration and empowerment, UBS is the place for you. Join us to grow, learn, and make a difference in a supportive and inclusive environment.
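Safeguards of the kind described above are often implemented as a filtering layer in front of the model's output. A naive sketch, assuming a blocklist-plus-confidence approach; the patterns and threshold are invented for illustration, and a production system would use trained classifiers rather than keyword matching alone:

```python
import re

# Illustrative blocked patterns; real guardrails are far more sophisticated.
BLOCKED_PATTERNS = [r"\bguaranteed returns\b", r"\binsider\b"]

def screen_output(text, confidence, min_confidence=0.7):
    """Return (allowed, reasons) for a generated response before it is shown.
    Rejections are accumulated so the caller can log or re-prompt."""
    reasons = []
    if confidence < min_confidence:
        reasons.append("low model confidence")
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, text, re.IGNORECASE):
            reasons.append(f"matched blocked pattern: {pattern}")
    return (not reasons, reasons)
```

A rejected response would typically fall back to a safe canned answer or a human escalation path.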
Posted 1 month ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
As a Senior Generative AI Engineer, your primary role will involve conducting original research on generative AI models. You will focus on exploring model architecture, training methodologies, fine-tuning techniques, and evaluation strategies. It is essential to maintain a strong publication record in esteemed conferences and journals, demonstrating your valuable contributions to the fields of Natural Language Processing (NLP), Deep Learning (DL), and Machine Learning (ML).

In addition, you will be responsible for designing and experimenting with multimodal generative models that incorporate various data types such as text, images, and other modalities to enhance AI capabilities. Your expertise will be crucial in developing autonomous AI systems that exhibit agentic behavior, enabling them to make independent decisions and adapt to dynamic environments.

Leading the design, development, and implementation of generative AI models and systems will be a key aspect of your role. This involves selecting suitable models, training them on extensive datasets, fine-tuning hyperparameters, and optimizing overall performance. It is imperative to have a deep understanding of the problem domain to ensure effective model development and implementation.

Furthermore, you will be tasked with optimizing generative AI algorithms to enhance their efficiency, scalability, and computational performance. Techniques such as parallelization, distributed computing, and hardware acceleration will be utilized to maximize the capabilities of modern computing architectures. Managing large datasets through data preprocessing and feature engineering to extract critical information for generative AI models will also be a crucial aspect of your responsibilities.

Your role will also involve evaluating the performance of generative AI models using relevant metrics and validation techniques. By conducting experiments, analyzing results, and iteratively refining models, you will work towards achieving desired performance benchmarks. Providing technical leadership and mentorship to junior team members, guiding their development in generative AI, will also be part of your responsibilities.

Documenting research findings, model architectures, methodologies, and experimental results thoroughly is essential. You will prepare technical reports, presentations, and whitepapers to effectively communicate insights and findings to stakeholders. Additionally, staying updated on the latest advancements in generative AI by reading research papers, attending conferences, and engaging with relevant communities is crucial to foster a culture of learning and innovation within the team.

Mandatory technical skills for this role include strong programming abilities in Python and familiarity with frameworks like PyTorch or TensorFlow. In-depth knowledge of Deep Learning concepts such as CNNs, RNNs, LSTMs, Transformers, LLMs (BERT, GPT, etc.), and NLP algorithms is required. Experience with frameworks like LangGraph, CrewAI, or AutoGen for developing, deploying, and evaluating AI agents is also essential.

Preferred technical skills include expertise in cloud computing, particularly with the Google, AWS, or Azure cloud platforms, and an understanding of the data analytics services offered by these platforms. Hands-on experience with ML platforms like GCP Vertex AI, Azure AI Foundry, or AWS SageMaker is desirable. Strong communication skills, the ability to work independently with minimal supervision, and a proactive approach to escalate when necessary are also key attributes for this role.

If you have a Master's or PhD degree in Computer Science and 6 to 8 years of experience with a strong record of publications in top-tier conferences and journals, this role could be a great fit for you. Preference will be given to research scholars from esteemed institutions like IITs, NITs, and IIITs.
Posted 1 month ago
4.0 - 9.0 years
9 - 30 Lacs
Bengaluru
Work from Office
- Proficiency in LLM systems and prompt fine-tuning
- Experience with infrastructure management, model deployment, and optimization
- Understanding of cloud architecture, performance, and scalability
- Experience with machine learning frameworks

Benefits: Health insurance, Provident fund
Posted 1 month ago
8.0 - 12.0 years
30 - 40 Lacs
Bengaluru
Work from Office
Work Location: Bangalore (in-office)

We're seeking hands-on Lead Software Engineers who can:
- Build production-ready AI solutions on AWS
- Guide distributed teams while continuously scouting and prototyping emerging AI technologies

You'll join our Product Engineering group, owning end-to-end delivery of next-generation Amorphic AI solutions while mentoring engineers across global teams.

What is Expected:

Technical Leadership
- Lead the integration of cutting-edge AI/LLMs into Amorphic AI solutions, ensuring seamless interoperability and optimal performance
- Design and architect complex software systems with a focus on scalability, maintainability, and performance
- Architect production-grade RAG pipelines and multi-agent orchestration on AWS (Lambda, ECS/Fargate, Bedrock, SageMaker, DynamoDB, S3, EventBridge, Step Functions)
- Drive the design and implementation of scalable AI pipelines

Development & Innovation
- Design, develop, test, and maintain scalable backend applications using Python and AWS services
- Stay current with AI advancements through hands-on experimentation with emerging frameworks (LangChain, Hugging Face Transformers, CrewAI) via prototypes and side projects
- Optimize AI solution performance focusing on cost-effectiveness, latency, and resource utilization
- Develop strategies for monitoring, maintaining, and improving deployed AI models in production

Team Leadership
- Lead 5-10 engineers through design reviews, pair-programming, and PR feedback
- Conduct code reviews and design discussions to ensure adherence to best practices
- Collaborate with cross-functional teams globally to identify requirements and implement solutions
- Create and maintain comprehensive documentation for architecture, design decisions, and coding practices

Preferred Candidate Profile
- BE / B.Tech in Computer Science or a related field
- 8+ years of experience in software development
- Solid understanding of large language models (LLMs), including experience with prompt engineering, fine-tuning, or integrating LLM APIs (e.g., from OpenAI, Anthropic, or AWS Bedrock)
- Hands-on experience building AI solutions using the latest tools and frameworks (e.g., LangChain, CrewAI), demonstrated through side projects, open-source contributions, or personal prototypes
- Proven leadership experience in managing and mentoring high-performing teams of software and application developers
- Exceptional proficiency in the Python programming language
- Solid understanding of the AWS ecosystem, including Lambda functions, S3 buckets, EMR clusters, DynamoDB tables, etc.
- Proven experience in a leadership role, leading software development teams in the delivery of complex projects
- Deep understanding of software architecture and design principles, with a focus on building scalable and maintainable systems
- Experience with distributed systems, microservices architecture, and cloud-based solutions
- Strong knowledge of software development best practices, including code reviews, testing, and CI/CD pipelines
- Experience working with AWS services and developing cloud-native applications using REST APIs is a must
- Experience working in an agile delivery environment, especially product engineering teams

How We'll Take Care Of You:

We believe in supporting our team members both professionally and personally. Here's a look at the comprehensive benefits and perks we offer:

Financial Well-being & Security
- Competitive Compensation: Enjoy competitive salaries and bonuses that reward your hard work and dedication
- Robust Insurance Coverage: Benefit from health, life, and disability insurance to ensure you and your family are protected
- Provident Fund Eligibility: Secure your future with eligibility for the provident fund

Work-Life Balance & Flexibility
- Flexible Working Hours: We offer flexible working hours to help you manage your personal and professional commitments
- Generous Paid Time Off: Take advantage of unlimited Paid Time Off (PTO), with a mandatory minimum of 1 week per year to ensure you recharge
- Comprehensive Leave Policies: We provide paid vacation days, sick leave, and holidays, plus supportive parental leave (maternity, paternity, and adoption) and bereavement leave when you need it most

Professional Growth & Development
- Learning & Development: Elevate your skills with access to extensive certification and training programs
- Cutting-Edge Technologies: You'll work at the forefront of innovation with cutting-edge technologies, constantly igniting your passion for continuous learning and growth

Culture & Community
- Recognition & Rewards: Your contributions won't go unnoticed with our recognition and reward programs
- Engaging Activities: Connect with your colleagues through company-sponsored events, outings, team-building activities, and retreats.
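The RAG pipelines this role centers on follow a common shape: embed the query, retrieve the nearest document chunks, and ground the model prompt in them. A minimal sketch of the retrieval-and-prompt step, assuming embeddings are already computed; the tiny three-dimensional vectors below are stand-ins for real embedding-model output:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, store, k=2):
    """Rank stored chunks by similarity to the query embedding, keep top k."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item["vec"]), reverse=True)
    return [item["text"] for item in ranked[:k]]

def build_prompt(question, chunks):
    """Ground the model: instruct it to answer only from retrieved context."""
    context = "\n".join(f"- {c}" for c in chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Toy vector store; in production this lives in DynamoDB/OpenSearch/etc.
store = [
    {"text": "Invoices are archived after 90 days.", "vec": [0.9, 0.1, 0.0]},
    {"text": "Refunds take 5-7 business days.", "vec": [0.1, 0.9, 0.0]},
    {"text": "Support is available 24/7.", "vec": [0.0, 0.2, 0.9]},
]
chunks = retrieve([0.85, 0.15, 0.05], store, k=1)
```

In an AWS deployment the prompt built here would be sent to a Bedrock model; the retrieval logic itself is cloud-agnostic.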
Posted 1 month ago
10.0 - 20.0 years
30 - 45 Lacs
Pune
Hybrid
Role: Senior Python Developer (Data Science, AI/ML)

HCL Software (hcl-software.com) delivers software that fulfils the transformative needs of clients around the world. We build award-winning software across AI, Automation, Data & Analytics, Security, and Cloud. The HCL Unica+ Marketing Platform enables our customers to deliver precise, high-performance marketing campaigns across multiple channels like social media, AdTech platforms, mobile applications, and websites. The Unica+ Marketing Platform is a data- and AI-first platform that enables our clients to deliver hyper-personalized offers and messages for customer acquisition, product awareness, and retention. We are seeking a Senior Python Developer with strong Data Science and Machine Learning skills and experience to deliver AI-driven marketing campaigns.

Responsibilities
- Python Programming & Libraries: Proficient in Python with extensive experience using Pandas for data manipulation, NumPy for numerical operations, and Matplotlib/Seaborn for data visualization.
- Statistical Analysis & Modelling: Strong understanding of statistical concepts, including descriptive statistics, inferential statistics, hypothesis testing, regression analysis, and time series analysis.
- Data Cleaning & Preprocessing: Expertise in handling messy real-world data, including dealing with missing values, outliers, data normalization/standardization, feature engineering, and data transformation.
- SQL & Database Management: Ability to query and manage data efficiently from relational databases using SQL, and ideally some familiarity with NoSQL databases.
- Exploratory Data Analysis (EDA): Skill in visually and numerically exploring datasets to understand their characteristics and identify patterns, anomalies, and relationships.
- Machine Learning Algorithms: In-depth knowledge and practical experience with a wide range of ML algorithms such as linear models, tree-based models (Random Forests, Gradient Boosting), SVMs, K-means, and dimensionality reduction techniques (PCA).
- Deep Learning Frameworks: Proficiency with at least one major deep learning framework like TensorFlow or PyTorch, including understanding neural network architectures (CNNs, RNNs, Transformers) and their application to various problems.
- Model Evaluation & Optimization: Ability to select appropriate evaluation metrics (e.g., precision, recall, F1-score, AUC-ROC, RMSE) for different problem types, diagnose model performance issues (bias-variance trade-off), and apply optimization techniques.
- Deployment & MLOps Concepts: Understanding of how to deploy machine learning models into production environments, including concepts of API creation, containerization (Docker), version control for models, and monitoring.

Qualifications & Skills
- At least 8-10 years of Python development experience, with at least 4 years in data science and machine learning
- Experience with Customer Data Platforms (CDP) like Treasure Data, Epsilon, Tealium, Adobe, and Salesforce is advantageous
- Experience with AWS SageMaker is advantageous
- Experience with LangChain and RAG for Generative AI is advantageous
- Expertise in integration tools and frameworks like Postman, Swagger, and API Gateways
- Knowledge of REST, JSON, XML, and SOAP is a must
- Ability to work well within an agile team environment and apply the related working methods
- Excellent communication & interpersonal skills
- A 4-year degree in Computer Science or IT is a must

Travel: 30% +/- travel required
Location: India (Pune preferred)
Compensation: Base salary, plus bonus
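The metric-selection point above is concrete enough to sketch. Here precision, recall, and F1 are computed by hand for a binary classifier; in practice scikit-learn's `precision_recall_fscore_support` does this, but the manual version makes the definitions explicit:

```python
def precision_recall_f1(y_true, y_pred):
    """Binary-classification metrics from first principles.
    Precision: of the predicted positives, how many were correct.
    Recall: of the actual positives, how many we found."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

Which metric matters depends on the campaign: precision when false positives are costly (e.g., sending the wrong offer), recall when missing a true positive is worse.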
Posted 1 month ago
3.0 - 5.0 years
9 - 13 Lacs
Jaipur
Work from Office
Job Summary

We're seeking a hands-on GenAI & Computer Vision Engineer with 3-5 years of experience delivering production-grade AI solutions. You must be fluent in the core libraries, tools, and cloud services listed below, and able to own end-to-end model development, from research and fine-tuning through deployment, monitoring, and iteration. In this role, you'll tackle domain-specific challenges like LLM hallucinations, vector search scalability, real-time inference constraints, and concept drift in vision models.

Key Responsibilities

Generative AI & LLM Engineering
- Fine-tune and evaluate LLMs (Hugging Face Transformers, Ollama, LLaMA) for specialized tasks
- Deploy high-throughput inference pipelines using vLLM or Triton Inference Server
- Design agent-based workflows with LangChain or LangGraph, integrating vector databases (Pinecone, Weaviate) for retrieval-augmented generation
- Build scalable inference APIs with FastAPI or Flask, managing batching, concurrency, and rate-limiting

Computer Vision Development
- Develop and optimize CV models (YOLOv8, Mask R-CNN, ResNet, EfficientNet, ByteTrack) for detection, segmentation, classification, and tracking
- Implement real-time pipelines using NVIDIA DeepStream or OpenCV (cv2); optimize with TensorRT or ONNX Runtime for edge and cloud deployments
- Handle data challenges (augmentation, domain adaptation, semi-supervised learning) and mitigate model drift in production

MLOps & Deployment
- Containerize models and services with Docker; orchestrate with Kubernetes (KServe) or AWS SageMaker Pipelines
- Implement CI/CD for model/version management (MLflow, DVC), automated testing, and performance monitoring (Prometheus + Grafana)
- Manage scalability and cost by leveraging cloud autoscaling on AWS (EC2/EKS), GCP (Vertex AI), or Azure ML (AKS)

Cross-Functional Collaboration
- Define SLAs for latency, accuracy, and throughput alongside product and DevOps teams
- Evangelize best practices in prompt engineering, model governance, data privacy, and interpretability
- Mentor junior engineers on reproducible research, code reviews, and end-to-end AI delivery

Required Qualifications

You must be proficient in at least one tool from each category below:
- LLM Frameworks & Tooling: Hugging Face Transformers, Ollama, vLLM, or LLaMA
- Agent & Retrieval Tools: LangChain or LangGraph; RAG with Pinecone, Weaviate, or Milvus
- Inference Serving: Triton Inference Server; FastAPI or Flask
- Computer Vision Frameworks & Libraries: PyTorch or TensorFlow; OpenCV (cv2) or NVIDIA DeepStream
- Model Optimization: TensorRT; ONNX Runtime; Torch-TensorRT
- MLOps & Versioning: Docker and Kubernetes (KServe, SageMaker); MLflow or DVC
- Monitoring & Observability: Prometheus; Grafana
- Cloud Platforms: AWS (SageMaker, EC2/EKS), GCP (Vertex AI, AI Platform), or Azure ML (AKS, ML Studio)
- Programming Languages: Python (required); C++ or Go (preferred)

Additionally:
- Bachelor's or Master's in Computer Science, Electrical Engineering, AI/ML, or a related field
- 3-5 years of professional experience shipping both generative and vision-based AI models in production
- Strong problem-solving mindset; ability to debug issues like LLM drift, vector index staleness, and model degradation
- Excellent verbal and written communication skills

Typical Domain Challenges You'll Solve
- LLM Hallucination & Safety: Implement grounding, filtering, and classifier layers to reduce false or unsafe outputs
- Vector DB Scaling: Maintain low-latency, high-throughput similarity search as embeddings grow to millions
- Inference Latency: Balance batch sizing and concurrency to meet real-time SLAs on cloud and edge hardware
- Concept & Data Drift: Automate drift detection and retraining triggers in vision and language pipelines
- Multi-Modal Coordination: Seamlessly orchestrate data flow between vision models and LLM agents in complex workflows
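Drift detection of the kind listed among these challenges often starts with something as simple as comparing a live feature or score distribution against a training-time baseline. A minimal sketch using a mean-shift signal; the threshold of 3 standard deviations is illustrative, and real pipelines typically add tests like PSI or Kolmogorov-Smirnov:

```python
import statistics

def drift_score(baseline, live):
    """Shift of the live mean from the baseline mean, measured in
    baseline standard deviations (a crude but fast drift signal)."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(live) - mu) / sigma

def should_retrain(baseline, live, threshold=3.0):
    """Trigger a retraining job when the live distribution drifts too far."""
    return drift_score(baseline, live) > threshold

# Baseline, e.g. detector confidence scores captured at training time.
baseline = [0.50, 0.52, 0.48, 0.51, 0.49, 0.50, 0.53, 0.47]
```

Wired into a monitoring loop, `should_retrain` becomes the automated trigger the posting describes: alert, snapshot the offending window, and kick off retraining.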
Posted 1 month ago
6.0 - 11.0 years
9 - 19 Lacs
Noida
Work from Office
We are looking for a skilled Machine Learning Engineer with strong expertise in Natural Language Processing (NLP) and AWS cloud services to design, develop, and deploy scalable ML models and pipelines. You will play a key role in building innovative NLP solutions for classification, forecasting, and recommendation systems, leveraging cutting-edge technologies to drive data-driven decision-making in the US healthcare domain.

Key Responsibilities:
- Design and deploy scalable machine learning models focused on NLP tasks, classification, forecasting, and recommender systems.
- Build robust, end-to-end ML pipelines encompassing data ingestion, feature engineering, model training, validation, and production deployment.
- Apply advanced NLP techniques including sentiment analysis, named entity recognition (NER), embeddings, and document parsing to extract actionable insights from healthcare data.
- Utilize AWS services such as SageMaker, Lambda, Comprehend, and Bedrock for model training, deployment, monitoring, and optimization.
- Collaborate effectively with cross-functional teams including data scientists, software engineers, and product managers to integrate ML solutions into existing products and workflows.
- Implement MLOps best practices for model versioning, automated evaluation, CI/CD pipelines, and continuous improvement of deployed models.
- Leverage Python and ML/NLP libraries including scikit-learn, PyTorch, Hugging Face Transformers, and spaCy for daily development tasks.
- Research and explore advanced NLP/ML techniques such as Retrieval-Augmented Generation (RAG) pipelines, foundation model fine-tuning, and vector search methods for next-generation solutions.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
- 6+ years of professional experience in machine learning, with a strong focus on NLP and AWS cloud services.
- Hands-on experience in designing and deploying production-grade ML models and pipelines.
- Strong programming skills in Python and familiarity with ML/NLP frameworks like PyTorch, Hugging Face, spaCy, and scikit-learn.
- Proven experience with the AWS ML ecosystem: SageMaker, Lambda, Comprehend, Bedrock, and related services.
- Solid understanding of MLOps principles including version control, model monitoring, and automated deployment.
- Experience working in the US healthcare domain is a plus.
- Excellent problem-solving skills and ability to work collaboratively in an agile environment.

Preferred Skills:
- Familiarity with advanced NLP techniques such as RAG pipelines and foundation model tuning.
- Knowledge of vector databases and semantic search technologies.
- Experience with containerization (Docker, Kubernetes) and cloud infrastructure automation.
- Strong communication skills with the ability to translate complex technical concepts to non-technical stakeholders.
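As a toy illustration of the text-classification pipelines this role centers on, here is a multinomial Naive Bayes classifier built from scratch. In practice the team would reach for scikit-learn or a transformer; the four-document training set below is invented purely for the sketch:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (text, label). Returns class priors, per-class
    word counts, and the shared vocabulary."""
    label_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    for text, label in docs:
        word_counts[label].update(text.lower().split())
    vocab = {w for counts in word_counts.values() for w in counts}
    return label_counts, word_counts, vocab

def predict_nb(model, text):
    """Pick the label maximizing log prior + sum of log likelihoods,
    with Laplace smoothing so unseen words don't zero out a class."""
    label_counts, word_counts, vocab = model
    total_docs = sum(label_counts.values())
    best_label, best_score = None, -math.inf
    for label, n_docs in label_counts.items():
        score = math.log(n_docs / total_docs)
        total_words = sum(word_counts[label].values())
        for word in text.lower().split():
            count = word_counts[label][word] + 1  # Laplace smoothing
            score += math.log(count / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

docs = [
    ("claim approved payment sent", "approved"),
    ("payment processed claim approved", "approved"),
    ("claim denied missing documents", "denied"),
    ("denied due to missing signature", "denied"),
]
model = train_nb(docs)
```

The same train/predict split mirrors the pipeline stages named above: feature engineering (here, bag-of-words counts), training, and inference.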
Posted 1 month ago
12.0 - 16.0 years
45 - 55 Lacs
Pune
Hybrid
So, what’s the role all about? We are looking for an experienced Architect with a strong background in AI/ML model integration , cloud-native development using AWS Bedrock , and proficiency in .NET and Python . The ideal candidate will play a key role in designing and architecting next-generation AI-powered applications and solutions that deliver business value at scale. How will you make an impact? Design and implement scalable AI solutions leveraging AWS Bedrock , including integration of foundation models from Amazon and third-party providers. Architect and lead the development of cloud-native applications using .NET Core and Python . Collaborate with cross-functional teams including Data Science, Product, and DevOps to define technical solutions. Evaluate and fine-tune AI models (e.g., text generation, summarization, classification) and optimize inference pipelines. Define best practices for model deployment, scalability, security, and monitoring in production. Drive innovation and rapid prototyping of generative AI and RAG (Retrieval-Augmented Generation) use cases. Provide architectural oversight and mentorship to engineering teams, ensuring delivery of robust, high-performance solutions. Have you got what it takes? Bachelor’s or Master’s degree in Computer Science, Engineering, or related field. 10+ years of experience in software architecture and development. Proven expertise in AWS services , specifically AWS Bedrock , Lambda, SageMaker, and related AI/ML offerings. Strong programming skills in Python and .NET Core (C#) . Experience with AI/ML models including LLMs, embeddings, and prompt engineering. Familiarity with Vector Databases and semantic search frameworks (e.g., Pinecone, FAISS, OpenSearch). Deep understanding of RESTful APIs, microservices, and cloud architecture patterns. Strong communication and leadership skills; ability to translate business needs into technical solutions. 
You will have an advantage if you also have: Experience with LangChain, RAG architectures, or custom Bedrock Agents. Exposure to frontend technologies (React, Angular) is a plus. Experience working in Agile/Scrum teams and DevOps environments. What’s in it for you? Join an ever-growing, market-disrupting, global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr! Enjoy NiCE-FLEX! At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere. Requisition ID: 8056 Reporting into: Tech Manager Role Type: Individual contributor
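The Bedrock foundation-model integration this role centers on can be sketched roughly as follows, using the Anthropic Messages request shape that Bedrock's `InvokeModel` API accepts. The model ID and helper names are illustrative assumptions, and the live call requires boto3 plus AWS credentials, so only the pure request/response helpers are meant to run anywhere:

```python
import json

# Hypothetical model ID; actual availability depends on the AWS account and region.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

def build_messages_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build an Anthropic Messages-style request body for Bedrock (assumed schema)."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

def extract_text(response_body: dict) -> str:
    """Pull the text blocks out of a Messages-style response body."""
    return "".join(
        block["text"] for block in response_body.get("content", [])
        if block.get("type") == "text"
    )

def invoke(prompt: str) -> str:
    """Call Bedrock; needs boto3 and AWS credentials, so not runnable offline."""
    import boto3  # imported lazily so the pure helpers above work without it
    client = boto3.client("bedrock-runtime")
    resp = client.invoke_model(
        modelId=MODEL_ID,
        body=json.dumps(build_messages_request(prompt)),
    )
    return extract_text(json.loads(resp["body"].read()))
```

Fallback logic, retries, and prompt templating for RAG would wrap around `invoke` in a production design.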
Posted 1 month ago
2.0 - 5.0 years
0 - 0 Lacs
Nagpur
Remote
Key Responsibilities: Provision and manage GPU-based EC2 instances for training and inference workloads. Configure and maintain EBS volumes and Amazon S3 buckets (versioning, lifecycle policies, multipart uploads) to handle large video and image datasets. Build, containerize, and deploy ML workloads using Docker and push images to ECR. Manage container deployment using Lambda, ECS, or AWS Batch for video inference jobs. Monitor and optimize cloud infrastructure using CloudWatch, Auto Scaling Groups, and Spot Instances to ensure cost efficiency. Set up and enforce IAM roles and permissions for secure access control across services. Collaborate with the AI/ML, annotation, and backend teams to streamline cloud-to-model pipelines. Automate cloud workflows and deployment pipelines using GitHub Actions, Jenkins, or similar CI/CD tools. Maintain logs, alerts, and system metrics for performance tuning and auditing. Required Skills: Cloud & Infrastructure: AWS Services: EC2 (GPU), S3, EBS, ECR, Lambda, Batch, CloudWatch, IAM. Data Management: Large file transfer, S3 multipart uploads, storage lifecycle configuration, archive policies (Glacier/IA). Security & Access: IAM policies, roles, access keys, VPC (preferred). DevOps & Automation: Tools: Docker, GitHub Actions, Jenkins, Terraform (bonus). Scripting: Python, shell scripting for automation and monitoring. CI/CD: Experience in building and managing pipelines for model and API deployments. ML/AI Environment Understanding: Familiarity with GPU-based ML workloads. Knowledge of model training and inference architecture (batch and real-time). Experience with containerized ML model execution is a plus. Preferred Qualifications: 2-5 years of experience in DevOps or Cloud Infrastructure roles. AWS Associate/Professional Certification (DevOps/Architect) is a plus. Experience in managing data-heavy pipelines, such as drones, surveillance, or video AI systems.
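The lifecycle-policy work above (archiving large video data to Glacier, expiring old versions) can be sketched with boto3's `put_bucket_lifecycle_configuration`. The prefix, day counts, and rule ID below are placeholder assumptions; only the pure config builder runs without AWS credentials:

```python
def lifecycle_config(prefix: str, glacier_after_days: int = 90,
                     expire_noncurrent_after_days: int = 30) -> dict:
    """S3 lifecycle rules: transition video objects under a prefix to Glacier
    and expire old noncurrent versions. The dict shape follows the boto3
    put_bucket_lifecycle_configuration API."""
    return {
        "Rules": [
            {
                "ID": f"archive-{prefix.rstrip('/')}",
                "Filter": {"Prefix": prefix},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": glacier_after_days, "StorageClass": "GLACIER"}
                ],
                "NoncurrentVersionExpiration": {
                    "NoncurrentDays": expire_noncurrent_after_days
                },
            }
        ]
    }

def apply_lifecycle(bucket: str, prefix: str) -> None:
    """Requires boto3 and AWS credentials; shown for completeness."""
    import boto3
    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket=bucket, LifecycleConfiguration=lifecycle_config(prefix)
    )
```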
Posted 1 month ago
12.0 - 17.0 years
45 - 50 Lacs
Hyderabad, Gurugram, Bengaluru
Work from Office
We are seeking a highly skilled and visionary Agentic AI Architect to lead the strategic design, development, and scalable implementation of autonomous AI systems within our organization. This role demands an individual with deep expertise in cutting-edge AI architectures, a strong commitment to ethical AI practices, and a proven ability to drive innovation. The ideal candidate will architect intelligent, self-directed decision-making systems that integrate seamlessly with enterprise workflows and propel our operational efficiency forward. Key Responsibilities As an Agentic AI Architect, you will: AI Architecture and System Design: Architect and design robust, scalable, and autonomous AI systems that seamlessly integrate with enterprise workflows, cloud platforms, and advanced LLM frameworks. Define blueprints for APIs, agents, and pipelines to enable dynamic, context-aware AI decision-making. Strategic AI Leadership: Provide technical leadership and strategic direction for AI initiatives focused on agentic systems. Guide cross-functional teams of AI engineers, data scientists, and developers in the adoption and implementation of advanced AI architectures. Framework and Platform Expertise: Evaluate, recommend, and implement leading AI tools and frameworks, with a strong focus on autonomous AI solutions (e.g., multi-agent frameworks, self-optimizing systems, LLM-driven decision engines). Drive the selection and utilization of cloud platforms (AWS SageMaker preferred, Azure ML, Google Cloud Vertex AI) for scalable AI deployments. Customization and Optimization: Design strategies for optimizing autonomous AI models for domain-specific tasks (e.g., real-time analytics, adaptive automation). Define methodologies for fine-tuning LLMs, multi-agent frameworks, and feedback loops to align with overarching business goals and architectural principles. 
Innovation and Research Integration: Spearhead the integration of R&D initiatives into production architectures, advancing agentic AI capabilities. Evaluate and prototype emerging frameworks (e.g., Autogen, AutoGPT, LangChain), neuro-symbolic architectures, and self-improving AI systems for architectural viability. Documentation and Architectural Blueprinting: Develop comprehensive technical white papers, architectural diagrams, and best practices for autonomous AI system design and deployment. Serve as a thought leader, sharing architectural insights at conferences and contributing to open-source AI communities. System Validation and Resilience: Design and oversee rigorous architectural testing of AI agents, including stress testing, adversarial scenario simulations, and bias mitigation strategies, ensuring alignment with compliance, ethical, and performance benchmarks for robust production systems. Stakeholder Collaboration & Advocacy: Collaborate with executives, product teams, and compliance officers to align AI architectural initiatives with strategic objectives. Advocate for AI-driven innovation and architectural best practices across the organization. Qualifications: Technical Expertise: 12+ years of progressive experience in AI/ML, with a strong track record as an AI Architect, ML Architect, or AI Solutions Lead. 7+ years specifically focused on designing and architecting autonomous/agentic AI systems (e.g., multi-agent frameworks, self-optimizing systems, or LLM-driven decision engines). Expertise in Python (mandatory) and familiarity with Node.js for architectural integrations. Extensive hands-on experience with autonomous AI tools and frameworks: LangChain, Autogen, CrewAI, or architecting custom agentic frameworks. Proficiency in cloud platforms for AI architecture: AWS SageMaker (most preferred), Azure ML, or Google Cloud Vertex AI, with a deep understanding of their AI service offerings.
Demonstrable experience with MLOps pipelines (e.g., Kubeflow, MLflow) and designing scalable deployment strategies for AI agents in production environments. Leadership & Strategic Acumen: Proven track record of leading the architectural direction of AI/ML teams, managing complex AI projects, and mentoring senior technical staff. Strong understanding and practical application of AI governance frameworks (e.g., EU AI Act, NIST AI RMF) and advanced bias mitigation techniques within AI architectures. Exceptional ability to translate complex technical AI concepts into clear, concise architectural plans and strategies for non-technical stakeholders and executive leadership. Ability to envision and articulate a long-term strategy for AI within the business, aligning AI initiatives with business objectives and market trends. Foster collaboration across various practices, including product management, engineering, and marketing, to ensure cohesive implementation of AI strategies that meet business goals.
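The autonomous, self-directed decision-making that agentic systems like these perform reduces to a perceive-decide-act loop: the model picks a tool, the tool's result is fed back, and the loop ends when the model emits a final answer. A minimal, framework-free sketch, where the decision format and tool registry are illustrative assumptions and `model` stands in for an LLM call:

```python
def run_agent(model, tools, task, max_steps=5):
    """Minimal agent loop. `model(history)` returns either
    ("tool", name, arg) to act, or ("final", answer) to stop.
    Observations from tool calls are appended to the history each step."""
    history = [("task", task)]
    for _ in range(max_steps):
        decision = model(history)
        if decision[0] == "final":
            return decision[1]
        _, name, arg = decision
        observation = tools[name](arg)  # execute the chosen tool
        history.append(("observation", observation))
    return None  # step budget exhausted without a final answer
```

Frameworks such as LangChain, Autogen, or CrewAI wrap this same loop with prompt templates, structured tool schemas, and multi-agent routing.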
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
Visionify is dedicated to leveraging the potential of Computer Vision and AI for various real-world applications. We are currently seeking a highly skilled, motivated, and enthusiastic Senior Computer Vision Engineer to play a crucial role in implementing our strategic plans. As a Senior Computer Vision Engineer at Visionify, you will be tasked with tackling cutting-edge challenges in the realm of Computer Vision by devising innovative algorithms and optimizations. The majority of our projects revolve around practical applications of Computer Vision, necessitating a strong grasp of contemporary model types such as classification, object detection, object recognition, OCR, LayoutLM, and GAN networks. Proficiency in PyTorch is essential for this role, as it serves as our primary deep learning framework. Familiarity with Azure and Azure ML Studio would be advantageous. Candidates applying for this position should remain abreast of the latest advancements and actively contribute to enhancing our PyTorch projects' performance and accuracy. Your expertise in PyTorch and its underlying mechanisms will be pivotal in resolving customer challenges and offering valuable insights into product improvements. Experience in optimizing and streamlining models for deployment on edge devices, as well as converting models to NVIDIA TensorRT, will be highly valued. A strong foundation in Python programming is indispensable, given its widespread use in our organization for developing training and inference pipelines. Effective communication and presentation skills are also crucial. The ideal candidate will exhibit a deep passion for artificial intelligence and a commitment to staying updated on industry trends. **Responsibilities:** - Understanding business objectives and devising Computer Vision solutions that align with these goals, including developing training and inference frameworks and leveraging various ML technologies.
- Building and optimizing Pytorch models for different runtime environments, including NVIDIA Jetson TensorRT. - Guiding the development team, addressing their queries, and facilitating the timely completion of their tasks. - Creating ML/Computer Vision algorithms to address specific challenges. - Analyzing and visualizing data to identify potential performance-affecting disparities in data distribution, especially when deploying models in real-world scenarios. - Establishing processes for core team operations, such as data acquisition, model training, and prototype development. - Identifying and utilizing open-source datasets for prototype building. - Developing pipelines for data processing, augmentation, training, inference, and active retraining. - Training models, fine-tuning hyperparameters, and devising strategies to address model errors. - Deploying models for production use. **Requirements:** - Bachelor's or Master's degree in Computer Science, Computer Engineering, IT, or a related field. - Minimum of 5 years of relevant experience; candidates with exceptional skills but less experience are encouraged to apply. - Industry experience in Image & Video Processing, including familiarity with OpenCV, GStreamer, TensorFlow, PyTorch, TensorRT, and various model training/inference techniques. - Proficiency in deep learning classification models (e.g., ResNet, Inception, VGG) and object detection models (e.g., MobileNetSSD, Yolo, FastRCNN, MaskRCNN). - Strong command of Pytorch, Torchvision, and the ability to develop training routines and update models effectively. - Familiarity with Colab, Jupyter Notebook, CUDA/GPU, and CNN visualization techniques like CAM and GradCAM. - Expertise in Computer Vision and real-time video processing methods. - Proficient in Python programming and adept at writing reusable code. 
- Experience with OpenCV, Scikit packages, NVIDIA platform tools (e.g., Deepstream, TensorRT), Python web frameworks (e.g., Flask, Django, FastAPI), and ML platforms (e.g., PyTorch, TensorFlow). - Knowledge of AWS SageMaker, various databases (e.g., Elasticsearch, SQL, NoSQL, Hive), cloud environments (preferably AWS) for software development, GPU-based training infrastructures, Docker, and DevOps and MLOps best practices for ML systems. **Desired Traits:** - Collaborative mindset and ability to thrive in a team environment. - Adaptability to evolving requirements. - Proclivity for innovative problem-solving. - Strong focus on work quality and developing robust code.
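The detection and tracking models listed above are evaluated and post-processed with overlap metrics, most commonly intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal sketch, assuming boxes in `(x1, y1, x2, y2)` corner format:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle; width/height clamp to zero when boxes are disjoint.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)
```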
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
Vola Finance is a rapidly expanding fintech company that is transforming the landscape of financial access and management. Our cutting-edge platform empowers individuals to enhance their financial well-being and take charge of their expenditures through a range of innovative tools and solutions. With the support of top-tier investors, we are dedicated to crafting products that have a significant positive impact on the lives of our users. Our founding team comprises enthusiastic leaders with extensive backgrounds in finance and technology. Drawing upon their vast experience from leading global corporations, they are committed to cultivating a culture of creativity, teamwork, and excellence within our organization. As a member of our team, your primary responsibilities will include: - Developing churn prediction models utilizing advanced machine learning algorithms based on user transactional and behavioral data - Constructing regression models to predict users' income and balances using transaction data - Creating customer segmentation and recommendation engines for cross-selling initiatives - Building natural language processing models to gauge customer sentiment - Developing propensity models and conducting lifetime value (LTV) analysis - Establishing modern data pipelines and processing systems using AWS PaaS components like Glue and SageMaker Studio - Utilizing API tools such as REST, Swagger, and Postman - Deploying models in the AWS environment and managing the production setup - Collaborating effectively with cross-functional teams to collect data and derive insights Essential Technical Skill Set: 1. Prior experience in Fintech product and growth strategy 2. Proficiency in Python 3. Strong grasp of linear regression, logistic regression, and tree-based machine learning algorithms 4. Sound knowledge of statistical analysis and A/B testing 5. Familiarity with AWS services such as SageMaker, S3, EC2, and Docker 6. Experience with REST API, Swagger, and Postman 7. Proficiency in Excel 8. Competence in SQL 9. Ability to work with visualization tools like Redash or Grafana 10. Familiarity with versioning tools like Bitbucket, GitHub, etc.
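At scoring time, the churn models described above reduce to applying a fitted function to a user's features. A minimal sketch of logistic (sigmoid) scoring with placeholder weights; a real model would learn the weights from transactional and behavioral data via logistic regression or a tree ensemble:

```python
import math

def churn_score(features, weights, bias=0.0):
    """Logistic churn probability: sigmoid of a weighted feature sum.
    Weights/bias here are placeholders for fitted model parameters."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def segment(prob, threshold=0.5):
    """Flag users for retention campaigns when churn risk crosses a threshold."""
    return "at-risk" if prob >= threshold else "retained"
```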
Posted 1 month ago
4.0 - 8.0 years
0 Lacs
maharashtra
On-site
At PwC, the focus in data and analytics revolves around leveraging data to drive insights and make informed business decisions. By utilizing advanced analytics techniques, our team helps clients optimize operations and achieve strategic goals. As a professional in data analysis at PwC, you will specialize in utilizing advanced analytical techniques to extract insights from large datasets, supporting data-driven decision-making. Your role will involve leveraging skills in data manipulation, visualization, and statistical modeling to assist clients in solving complex business problems. PwC US - Acceleration Center is currently seeking individuals with a strong analytical background to join our Analytics Consulting practice. As a Senior Associate, you will be an essential part of business analytics teams in India, collaborating with clients and consultants in the U.S. You will lead teams for high-end analytics consulting engagements and provide business recommendations to project teams. **Years of Experience:** Candidates should possess 4+ years of hands-on experience. **Must Have:** - Experience in building ML models in cloud environments (at least 1 of the 3: Azure ML, GCP's Vertex AI platform, AWS SageMaker) - Knowledge of predictive/prescriptive analytics, particularly in the usage of Log-Log, Log-Linear, and Bayesian Regression techniques, Machine Learning algorithms (supervised and unsupervised), deep learning algorithms, and Artificial Neural Networks - Good knowledge of statistics, including statistical tests & distributions - Experience in data analysis, such as data cleansing, standardization, and data preparation for machine learning use cases - Experience in machine learning frameworks and tools (e.g., scikit-learn, mlr, caret, H2O, TensorFlow, PyTorch, MLlib) - Advanced level programming in SQL or Python/PySpark - Expertise with visualization tools like Tableau, PowerBI, AWS QuickSight, etc.
**Nice To Have:** - Working knowledge of containerization (e.g., AWS EKS, Kubernetes), Docker, and data pipeline orchestration (e.g., Airflow) - Good communication and presentation skills **Roles And Responsibilities:** - Develop and execute project & analysis plans under the guidance of the Project Manager - Interact with and advise consultants/clients in the U.S. as a subject matter expert to formalize data sources, acquire datasets, and clarify data & use cases for a strong understanding of data and business problems - Drive and conduct analysis using advanced analytics tools and mentor junior team members - Implement quality control measures to ensure deliverable integrity - Validate analysis outcomes and recommendations with stakeholders, including the client team - Build storylines and deliver presentations to the client team and/or PwC project leadership team - Contribute to knowledge sharing and firm building activities **Professional And Educational Background:** - Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA
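The Log-Log regression technique required above fits a power-law relationship log(y) = a + b·log(x), where the slope b is directly interpretable as an elasticity. A closed-form least-squares sketch in plain Python (a real engagement would use statsmodels or scikit-learn):

```python
import math

def loglog_fit(x, y):
    """Fit log(y) = a + b*log(x) by ordinary least squares.
    The slope b estimates the elasticity of y with respect to x."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    # Standard OLS slope: covariance over variance of the regressor.
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = my - b * mx
    return a, b
```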
Posted 1 month ago
3.0 - 6.0 years
4 - 7 Lacs
Ahmedabad, Vadodara
Work from Office
AI/ML Engineer (2-3 positions) Job Summary: We are seeking a highly skilled and motivated AI/ML Engineer with a specialization in Computer Vision & Unsupervised Learning to join our growing team. You will be responsible for building, optimizing, and deploying advanced video analytics solutions for smart surveillance applications, including real-time detection, facial recognition, and activity analysis. This role combines the core competencies of AI/ML modelling with the practical skills required to deploy and scale models in real-world production environments, both in the cloud and on edge devices. Key Responsibilities: AI/ML Development & Computer Vision: Design, train, and evaluate models for: o Face detection and recognition o Object/person detection and tracking o Intrusion and anomaly detection o Human activity or pose recognition/estimation Work with models such as YOLOv8, DeepSORT, RetinaNet, Faster-RCNN, and InsightFace. Perform data preprocessing, augmentation, and annotation using tools like LabelImg, CVAT, or custom pipelines. Surveillance System Integration: Integrate computer vision models with live CCTV/RTSP streams for real-time analytics. Develop components for motion detection, zone-based event alerts, person re-identification, and multi-camera coordination. Optimize solutions for low-latency inference on edge devices (Jetson Nano, Xavier, Intel Movidius, Coral TPU). Model Optimization & Deployment: Convert and optimize trained models using ONNX, TensorRT, or OpenVINO for real-time inference. Build and deploy APIs using FastAPI, Flask, or TorchServe. Package applications using Docker and orchestrate deployments with Kubernetes. Automate model deployment workflows using CI/CD pipelines (GitHub Actions, Jenkins). Monitor model performance in production using Prometheus, Grafana, and log management tools. Manage model versioning, rollback strategies, and experiment tracking using MLflow or DVC.
As an AI/ML Engineer, you should be well-versed in AI agent development and have fine-tuning experience. Collaboration & Documentation: Work closely with backend developers, hardware engineers, and DevOps teams. Maintain clear documentation of ML pipelines, training results, and deployment practices. Stay current with emerging research and innovations in AI vision and MLOps. Required Qualifications: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field. 3-6 years of experience in AI/ML, with a strong portfolio in computer vision and machine learning. Hands-on experience with: o Deep learning frameworks: PyTorch, TensorFlow o Image/video processing: OpenCV, NumPy o Detection and tracking frameworks: YOLOv8, DeepSORT, RetinaNet. Solid understanding of deep learning architectures (CNNs, Transformers, Siamese Networks). Proven experience with real-time model deployment on cloud or edge environments. Strong Python programming skills and familiarity with Git, REST APIs, and DevOps tools. Preferred Qualifications: Experience with multi-camera synchronization and NVR/DVR systems. Familiarity with ONVIF protocols and camera SDKs. Experience deploying AI models on Jetson Nano/Xavier, Intel NCS2, or Coral Edge TPU. Background in face recognition systems (e.g., InsightFace, FaceNet, Dlib). Understanding of security protocols and compliance in surveillance systems. Tools & Technologies:
Languages & AI: Python, PyTorch, TensorFlow, OpenCV, NumPy, Scikit-learn
Model Serving: FastAPI, Flask, TorchServe, TensorFlow Serving, REST/gRPC APIs
Model Optimization: ONNX, TensorRT, OpenVINO, pruning, quantization
Deployment: Docker, Kubernetes, Gunicorn, MLflow, DVC
CI/CD & DevOps: GitHub Actions, Jenkins, GitLab CI
Cloud & Edge: AWS SageMaker, Azure ML, GCP AI Platform, Jetson, Movidius, Coral TPU
Monitoring: Prometheus, Grafana, ELK Stack, Sentry
Annotation Tools: LabelImg, CVAT, Supervisely
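Detectors like YOLOv8 emit many overlapping candidate boxes per object, so real-time pipelines like the ones described above apply non-max suppression (NMS) before tracking. A minimal greedy sketch; the IoU threshold and `(x1, y1, x2, y2)` box format are illustrative conventions:

```python
def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-max suppression: keep the highest-scoring box, drop
    boxes overlapping it above the IoU threshold, repeat.
    Returns indices of kept boxes in descending score order."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        if inter == 0.0:
            return 0.0
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter)

    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```

Production systems would usually call the batched, vectorized NMS shipped with their inference framework; the logic is the same.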
Posted 1 month ago
15.0 - 19.0 years
0 Lacs
hyderabad, telangana
On-site
House of Shipping is seeking a high-caliber Data Science Lead to join their team in Hyderabad. With a background of 15-18 years in data science, including at least 5 years in leadership roles, the ideal candidate will have a proven track record in building and scaling data science teams in logistics, e-commerce, or manufacturing. Strong expertise in statistical learning, ML architecture, productionizing models, and impact tracking is essential for this role. As the Data Science Lead, you will be responsible for leading enterprise-scale data science initiatives in supply chain optimization, forecasting, network analytics, and predictive maintenance. This position requires a blend of technical leadership and strategic alignment with various business units to deliver measurable business impact. Key responsibilities include defining and driving the data science roadmap across forecasting, route optimization, warehouse simulation, inventory management, and fraud detection. You will work closely with engineering teams to architect end-to-end pipelines, from data ingestion to API deployment. Proficiency in Python and MLOps tools like Scikit-Learn, XGBoost, PyTorch, MLflow, Vertex AI, or AWS SageMaker is crucial for success in this role. Collaboration with operations, product, and technology teams to prioritize AI use cases and define business metrics will be a key aspect of the job. Additionally, you will be responsible for managing experimentation frameworks, mentoring team members, ensuring model validation, and contributing to organizational data maturity. The ideal candidate will possess a Bachelor's, Master's, or Ph.D. degree in Computer Science, Mathematics, Statistics, or Operations Research. Certifications in Cloud ML stacks, MLOps, or Applied AI are preferred. 
To excel in this role, you should have a strategic vision in AI applications across the supply chain, strong team mentorship skills, expertise in statistical and ML frameworks, and experience in MLOps pipeline management. Excellent business alignment and executive communication skills are also essential for this position. If you are a data science leader looking to make a significant impact in the logistics industry, we encourage you to apply for this exciting opportunity with House of Shipping.,
Posted 1 month ago
4.0 - 7.0 years
9 - 13 Lacs
Tamil Nadu
Work from Office
Introduction to the Role: Are you passionate about building intelligent systems that learn, adapt, and deliver real-world value? Join our high-impact AI & Machine Learning Engineering team and be a key contributor in shaping the next generation of intelligent applications. As an AI/ML Engineer, you'll have the unique opportunity to develop, deploy, and scale advanced ML and Generative AI (GenAI) solutions in production environments, leveraging cutting-edge technologies, frameworks, and cloud platforms. In this role, you will collaborate with cross-functional teams including data engineers, product managers, MLOps engineers, and architects to design and implement production-grade AI solutions across domains. If you're looking to work at the intersection of deep learning, GenAI, cloud computing, and MLOps, this is the role for you. Accountabilities: Design, develop, train, and deploy production-grade ML and GenAI models across use cases including NLP, computer vision, and structured data modeling. Leverage frameworks such as TensorFlow, Keras, PyTorch, and LangChain to build scalable deep learning and LLM-based solutions. Develop and maintain end-to-end ML pipelines with reusable, modular components for data ingestion, feature engineering, model training, and deployment. Implement and manage models on cloud platforms such as AWS, GCP, or Azure using services like SageMaker, Vertex AI, or Azure ML. Apply MLOps best practices using tools like MLflow, Kubeflow, Weights & Biases, Airflow, DVC, and Prefect to ensure scalable and reliable ML delivery. Incorporate CI/CD pipelines (using Jenkins, GitHub Actions, or similar) to automate testing, packaging, and deployment of ML workloads. Containerize applications using Docker and orchestrate scalable deployments via Kubernetes. Integrate LLMs with APIs and external systems using LangChain, vector databases (e.g., FAISS, Pinecone), and prompt engineering best practices.
Collaborate closely with data engineers to access, prepare, and transform large-scale structured and unstructured datasets for ML pipelines. Build monitoring and retraining workflows to ensure models remain performant and robust in production. Evaluate and integrate third-party GenAI APIs or foundational models where appropriate to accelerate delivery. Maintain rigorous experiment tracking, hyperparameter tuning, and model versioning. Champion industry standards and evolving practices in ML lifecycle management, cloud-native AI architectures, and responsible AI. Work across global, multi-functional teams, including architects, principal engineers, and domain experts. Essential Skills / Experience: 4-7 years of hands-on experience in developing, training, and deploying ML/DL/GenAI models. Strong programming expertise in Python with proficiency in machine learning, data manipulation, and scripting. Demonstrated experience working with Generative AI models and Large Language Models (LLMs) such as GPT, LLaMA, Claude, or similar. Hands-on experience with deep learning frameworks like TensorFlow, Keras, or PyTorch. Experience in LangChain or similar frameworks for LLM-based app orchestration. Proven ability to implement and scale CI/CD pipelines for ML workflows using tools like Jenkins, GitHub, GitLab, or Bitbucket Pipelines. Familiarity with containerization (Docker) and orchestration tools like Kubernetes. Experience working with cloud platforms (AWS, Azure, GCP) and relevant AI/ML services such as SageMaker, Vertex AI, or Azure ML Studio. Knowledge of MLOps tools such as MLflow, Kubeflow, DVC, Weights & Biases, Airflow, and Prefect. Strong understanding of data engineering concepts, including batch/streaming pipelines, data lakes, and real-time processing (e.g., Kafka). Solid grasp of statistical modeling, machine learning algorithms, and evaluation metrics.
Experience with version control systems (Git) and collaborative development workflows. Ability to translate complex business needs into scalable ML architectures and systems. Desirable Skills / Experience: Working knowledge of vector databases (e.g., FAISS, Pinecone, Weaviate) and semantic search implementation. Hands-on experience with prompt engineering, fine-tuning LLMs, or using techniques like LoRA, PEFT, and RLHF. Familiarity with data governance, privacy, and responsible AI guidelines (bias detection, explainability, etc.). Certifications in AWS, Azure, GCP, or ML/AI specializations. Experience in high-compliance industries like pharma, banking, or healthcare. Familiarity with agile methodologies and working in iterative, sprint-based teams. Work Environment & Collaboration: You will be a key member of an agile, forward-thinking AI/ML team that values curiosity, excellence, and impact. Our hybrid work culture promotes flexibility while encouraging regular in-person collaboration to foster innovation and team synergy. You'll have access to the latest technologies, mentorship, and continuous learning opportunities through hands-on projects and professional development resources.
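The vector-database retrieval step behind the RAG and semantic-search work mentioned above reduces to ranking stored embeddings by similarity to a query embedding. A dependency-free sketch using cosine similarity; at scale a system would delegate this to FAISS, Pinecone, or Weaviate with approximate nearest-neighbor indexes:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, top_k=2):
    """Rank (doc_id, embedding) pairs by similarity to the query vector:
    the retrieval step a vector database performs inside a RAG pipeline."""
    ranked = sorted(corpus, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:top_k]]
```

The retrieved document IDs would then be resolved to text chunks and stuffed into the LLM prompt as context.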
Posted 1 month ago
4.0 - 9.0 years
0 - 3 Lacs
Bengaluru
Hybrid
Hi Connections, We have an immediate requirement for a Gen AI/ML role. Role: Technology Lead. Mandatory Skills: AWS SageMaker, LLMs, Gen AI. Location: Bengaluru. Work Mode: Hybrid. Job Description: We are looking for a well-versed Gen AI/ML engineer who can work with AWS SageMaker for LLM model training and can also develop infrastructure using Terraform.
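SageMaker training jobs like the LLM fine-tuning this role calls for are launched via the `CreateTrainingJob` API. A sketch of the request shape; the bucket paths, role ARN, image URI, and instance type below are placeholder assumptions (in practice Terraform or the SageMaker Python SDK would generate and submit this):

```python
def training_job_request(job_name, role_arn, image_uri, s3_train, s3_output,
                         instance_type="ml.g5.2xlarge"):
    """Build a request body in the shape of SageMaker's CreateTrainingJob API.
    All resource identifiers here are illustrative placeholders."""
    return {
        "TrainingJobName": job_name,
        "RoleArn": role_arn,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,
            "TrainingInputMode": "File",
        },
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": s3_train,
                "S3DataDistributionType": "FullyReplicated",
            }},
        }],
        "OutputDataConfig": {"S3OutputPath": s3_output},
        "ResourceConfig": {"InstanceType": instance_type,
                           "InstanceCount": 1,
                           "VolumeSizeInGB": 100},
        "StoppingCondition": {"MaxRuntimeInSeconds": 86400},
    }

def submit(request: dict) -> None:
    """Requires boto3 and AWS credentials; shown for completeness."""
    import boto3
    boto3.client("sagemaker").create_training_job(**request)
```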
Posted 1 month ago
8.0 - 13.0 years
11 - 16 Lacs
Hyderabad, Gurugram, Bengaluru
Work from Office
About the Role: Grade Level (for internal use): 12. Lead Agentic AI Developer. Location: Gurgaon, Hyderabad, and Bangalore. A Lead Agentic AI Developer will drive the design, development, and deployment of autonomous AI systems that enable intelligent, self-directed decision-making. Their day-to-day operations focus on advancing AI capabilities, leading teams, and ensuring ethical, scalable implementations. Responsibilities: AI System Design and Development: Architect and build autonomous AI systems that integrate with enterprise workflows, cloud platforms, and LLM frameworks. Develop APIs, agents, and pipelines to enable dynamic, context-aware AI decision-making. Team Leadership and Mentorship: Lead cross-functional teams of AI engineers, data scientists, and developers. Mentor junior staff in agentic AI principles, reinforcement learning, and ethical AI governance. Customization and Advancement: Optimize autonomous AI models for domain-specific tasks (e.g., real-time analytics, adaptive automation). Fine-tune LLMs, multi-agent frameworks, and feedback loops to align with business goals. Ethical AI Governance: Monitor AI behavior, audit decision-making processes, and implement safeguards to ensure transparency, fairness, and compliance with regulatory standards. Innovation and Research: Spearhead R&D initiatives to advance agentic AI capabilities. Experiment with emerging frameworks (e.g., Autogen, AutoGPT, LangChain), neuro-symbolic architectures, and self-improving AI systems. Documentation and Thought Leadership: Publish technical white papers, case studies, and best practices for autonomous AI. Share insights at conferences and contribute to open-source AI communities. System Validation: Oversee rigorous testing of AI agents, including stress testing, adversarial scenario simulations, and bias mitigation. Validate alignment with ethical and performance benchmarks.
Stakeholder Leadership: Collaborate with executives, product teams, and compliance officers to align AI initiatives with strategic objectives. Advocate for AI-driven innovation across the organization. What We're Looking For: Required Skills/Qualifications: Technical Expertise: 8+ years as a Senior AI Engineer, ML Architect, or AI Solutions Lead, with 5+ years focused on autonomous/agentic AI systems (e.g., multi-agent frameworks, self-optimizing systems, or LLM-driven decision engines). Expertise in Python (mandatory) and familiarity with Node.js. Hands-on experience with autonomous AI tools: LangChain, Autogen, CrewAI, or custom agentic frameworks. Proficiency in cloud platforms: AWS SageMaker (most preferred), Azure ML, or Google Cloud Vertex AI. Experience with MLOps pipelines (e.g., Kubeflow, MLflow) and scalable deployment of AI agents. Leadership: Proven track record of leading AI/ML teams, managing complex projects, and mentoring technical staff. Ethical AI: Familiarity with AI governance frameworks (e.g., EU AI Act, NIST AI RMF) and bias mitigation techniques. Communication: Exceptional ability to translate technical AI concepts for non-technical stakeholders. Nice to have: Contributions to AI research (published papers, patents) or open-source AI projects (e.g., TensorFlow Agents, AutoGen). Experience with DevOps/MLOps tools: Kubeflow, MLflow, Docker, or Terraform. Expertise in NLP, computer vision, or graph-based AI systems. Familiarity with quantum computing or neuromorphic architectures for AI. What's In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow.
At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People:

Our Values: Integrity, Discovery, Partnership

At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
Health & Wellness: Health care coverage designed for the mind and body.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
---- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ---- 10 - Officials or Managers (EEO-2 Job Categories-United States of America), IFTECH103.2 - Middle Management Tier II (EEO Job Group), SWP Priority Ratings - (Strategic Workforce Planning)
Posted 1 month ago
5.0 - 8.0 years
10 - 15 Lacs
Bengaluru
Hybrid
Job Title: Technology Lead
Experience: 5+ Years
Notice Period: Immediate
Location: Bangalore (Hybrid)

Detailed job description - Skill Set: We are looking for a well-versed Gen AI/ML engineer who can use AWS SageMaker for LLM model training and can also develop infrastructure using Terraform.

Mandatory Skills: AWS SageMaker, Terraform, LLM model training
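For a sense of what SageMaker LLM training work involves, the sketch below builds the request structure for a SageMaker training job. All names are placeholders (the job name, image URI, role ARN, and bucket are invented for illustration); in practice the resulting dict would be passed to boto3's `sagemaker_client.create_training_job(**request)`, or the equivalent resources would be declared in Terraform.

```python
def build_training_job_request(job_name: str, image_uri: str,
                               role_arn: str, bucket: str) -> dict:
    """Assemble a minimal SageMaker CreateTrainingJob request body."""
    return {
        "TrainingJobName": job_name,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,      # training container image
            "TrainingInputMode": "File",
        },
        "RoleArn": role_arn,                 # execution role SageMaker assumes
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": f"s3://{bucket}/train/",
            }},
        }],
        "OutputDataConfig": {"S3OutputPath": f"s3://{bucket}/output/"},
        "ResourceConfig": {
            "InstanceType": "ml.g5.2xlarge", # GPU instance for LLM training
            "InstanceCount": 1,
            "VolumeSizeInGB": 100,
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 86400},
    }
```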
Posted 1 month ago
3.0 - 6.0 years
0 - 3 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Role & responsibilities:
Implement and manage AIOps platforms for intelligent monitoring, alerting, anomaly detection, and root cause analysis (RCA).
Possess end-to-end knowledge of vLLM model hosting and inferencing.
Advanced knowledge of public cloud platforms such as AWS and Azure.
Build and maintain machine learning pipelines and models for predictive maintenance, anomaly detection, and noise reduction.
Experience in production support and real-time issue handling.
Design dashboards and visualizations to provide operational insights to stakeholders.
Working knowledge of Bedrock, SageMaker, EKS, Lambda, etc.
1 to 2 years of experience with Jenkins and GoCD for build/deploy pipelines.
Hands-on experience with open-source and self-hosted model APIs using SDKs.
Drive data-driven decisions by analyzing operational data and generating reports on system health, performance, and availability.
Basic knowledge of KServe and Ray Serve inferencing.
Good knowledge of high-level scaling using Karpenter, KEDA, and system-based vertical/horizontal scaling.
Strong knowledge of the Linux operating system, or Linux certified.
Previous experience with Helm chart deployments and Terraform template and module creation is highly recommended.

Secondary Responsibilities:
Proven experience in AIOps and DevOps, with a strong background in cloud technologies (AWS, Azure, Google Cloud).
Proficiency in tools such as Kubeflow, KServe, ONNX, and containerization technologies (Docker, Kubernetes).
Experience with enterprise-level infrastructure, including tools like Terraform, Helm, and on-prem server hosting.
Previous experience in fintech or AI-based tech companies is highly desirable.
Demonstrates the ability to manage workloads effectively in a production environment.
Possesses excellent communication and collaboration skills, with a strong focus on cross-functional teamwork.
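The anomaly detection responsibility above can be illustrated with a minimal statistical sketch: flag metric samples that fall more than a threshold number of standard deviations from the mean. A production AIOps pipeline would use rolling windows, seasonality handling, and learned models rather than a single global z-score, so treat this purely as a conceptual example.

```python
from statistics import mean, stdev

def detect_anomalies(samples, threshold=3.0):
    """Return indices of samples whose z-score exceeds the threshold."""
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []  # constant signal: nothing to flag
    return [i for i, x in enumerate(samples)
            if abs(x - mu) / sigma > threshold]
```

For example, twenty latency readings around 10 ms with one spike at 100 ms would flag only the spike.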
Posted 1 month ago
8.0 - 12.0 years
0 Lacs
karnataka
On-site
As a Site Reliability Engineering (SRE) Technical Leader on the Network Assurance Data Platform (NADP) team at Cisco ThousandEyes, you will be responsible for ensuring the reliability, scalability, and security of the cloud and big data platforms. Your role will involve representing the NADP SRE team, contributing to the technical roadmap, and collaborating with cross-functional teams to design, build, and maintain SaaS systems operating at multi-region scale. Your efforts will be crucial in supporting machine learning (ML) and AI initiatives by ensuring the platform infrastructure is robust, efficient, and aligned with operational excellence. You will be tasked with designing, building, and optimizing cloud and data infrastructure to guarantee high availability, reliability, and scalability of big-data and ML/AI systems. This will involve implementing SRE principles such as monitoring, alerting, error budgets, and fault analysis. Additionally, you will collaborate with various teams to create secure and scalable solutions, troubleshoot technical problems, lead the architectural vision, and shape the technical strategy and roadmap. Your role will also encompass mentoring and guiding teams, fostering a culture of engineering and operational excellence, engaging with customers and stakeholders to understand use cases and feedback, and utilizing your strong programming skills to integrate software and systems engineering. Furthermore, you will develop strategic roadmaps, processes, plans, and infrastructure to efficiently deploy new software components at an enterprise scale while enforcing engineering best practices. To be successful in this role, you should have relevant experience (8-12 yrs) and a bachelor's engineering degree in computer science or its equivalent. 
You should possess the ability to design and implement scalable solutions, hands-on experience in cloud (preferably AWS), Infrastructure as Code skills, experience with observability tools, proficiency in programming languages such as Python or Go, and a good understanding of Unix/Linux systems and client-server protocols. Experience in building Cloud, Big Data, and/or ML/AI infrastructure is essential, along with a sense of ownership and accountability in architecting software and infrastructure at scale. Additional qualifications that would be advantageous include experience with the Hadoop ecosystem, certifications in cloud and security domains, and experience in building/managing a cloud-based data platform. Cisco encourages individuals from diverse backgrounds to apply, as the company values perspectives and skills that emerge from employees with varied experiences. Cisco believes in unlocking potential and creating diverse teams that are better equipped to solve problems, innovate, and make a positive impact.
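The error budgets mentioned in the SRE responsibilities above come down to simple arithmetic: an availability SLO leaves a fixed amount of allowed downtime per window. A sketch of that calculation, assuming a conventional 30-day window:

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime in minutes for a given availability SLO.

    E.g. a 99.9% SLO over 30 days leaves roughly 43.2 minutes of
    budget; once incidents consume it, feature rollouts slow down.
    """
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)
```

Teams typically alert on burn rate (how fast the budget is being consumed) rather than on the raw remaining minutes.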
Posted 1 month ago