0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Delta Tech Hub: Delta Air Lines (NYSE: DAL) is the U.S. global airline leader in safety, innovation, reliability and customer experience. Powered by our employees around the world, Delta has for a decade led the airline industry in operational excellence while maintaining our reputation for award-winning customer service. With our mission of connecting the people and cultures of the globe, Delta strives to foster understanding across a diverse world and serve as a force for social good. Delta has fast emerged as a customer-oriented, innovation-led, technology-driven business. The Delta Technology Hub will contribute directly to these objectives. It will sustain our long-term aspirations of delivering niche, IP-intensive, high-value, and innovative solutions. It supports various teams and functions across Delta and is an integral part of our transformation agenda, working seamlessly with a global team to create memorable experiences for customers. Primary Functions: Responsible for the design, development and maintenance of ML / AI models and tools (e.g., forecasting, optimization, performance, etc.) Add explainability to model predictions & outcomes thereby making insights actionable Collaborates with COE team and business owners to take a data-driven approach in identifying key customer pain-points, uncover insights, and develop techniques to address these issues Detailed understanding of data science techniques, calibrating and enhancing existing models, and monitoring model performance Leverage emerging technologies and identify efficient and meaningful ways to deliver meaningful insights to the business Explore industry & academic publications, research papers & adopt the same for CX business use cases Hands-on experience with data modeling, design tools, and business case delivery Ensure alignment with business requirement & present analysis to business users in a digestible way Skills Required: 2 – 4 Overall years of data science experience delivering insights for business projects Bachelor’s degree in data science, statistics, mathematics, computer science or engineering discipline Demonstrated professional experience in statistics & relevant 2 – 3 yrs of ML algorithms on an Enterprise business use case. Proficient in Python and data-focused packages (e.g., Pandas, Numpy) Proficiency in SQL Embraces diverse people, thinking, and styles Consistently makes safety and security, of self and others, the priority What will give you a Competitive edge: Knowledge of Deep Learning Experience in designing & implementing ML/AI models for cloud-based solutions on leading cloud providers such as AWS, Azure, etc. Knowledge of ML lifecycle management platforms like ML Flow, AWS Sagemaker, etc. Ability to code with software engineering best practices of reusability, modularity, object oriented, etc. Exposure to version control (Git) and collaborative coding practices. Self-motivated and take pride in building great experiences for users, whether they are employees or customers. Resourceful in finding the data and tools you need to get the job done Not afraid to ask for help when you need it, or help teammates when they need a boost Intensely curious about finding a solution to the pain-points of our customers along the entire travel experience
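As an illustration of the explainability expectation in this posting, here is a minimal, hedged sketch (not Delta's actual stack; the dataset, feature names, and model choice are assumptions) of using scikit-learn's permutation importance to show which inputs drive a model's predictions:

```python
# Minimal sketch: surfacing feature importance so model outcomes are explainable.
# Dataset, feature names, and model choice are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "days_to_departure": rng.integers(1, 120, 500),
    "fare_paid": rng.uniform(50, 900, 500),
    "loyalty_tier": rng.integers(0, 4, 500),
})
df["satisfaction"] = 0.01 * df["fare_paid"] - 0.02 * df["days_to_departure"] + rng.normal(0, 1, 500)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="satisfaction"), df["satisfaction"], random_state=0
)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation importance: how much test error grows when each feature is shuffled.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X_test.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```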
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the Role: We are looking for a highly skilled and experienced Machine Learning / AI Engineer to join our team at Zenardy. The ideal candidate needs to have a proven track record of building, deploying, and optimizing machine learning models in real-world applications. You will be responsible for designing scalable ML systems, collaborating with cross-functional teams, and driving innovation through AI-powered solutions. Location: Chennai & Hyderabad Key Responsibilities: Design, develop, and deploy machine learning models to solve complex business problems Work across the full ML lifecycle: data collection, preprocessing, model training, evaluation, deployment, and monitoring Collaborate with data engineers, product managers, and software engineers to integrate ML models into production systems Conduct research and stay up-to-date with the latest ML/AI advancements, applying them where appropriate Optimize models for performance, scalability, and robustness Document methodologies, experiments, and findings clearly for both technical and non-technical audiences Mentor junior ML engineers or data scientists as needed Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Machine Learning, Data Science, or related field (Ph.D. is a plus) Minimum of 5 hands-on ML/AI projects , preferably in production or with real-world datasets Proficiency in Python and ML libraries/frameworks like TensorFlow, PyTorch, Scikit-learn, XGBoost Solid understanding of core ML concepts: supervised/unsupervised learning, neural networks, NLP, computer vision, etc. Experience with model deployment using APIs, containers (Docker), cloud platforms (AWS/GCP/Azure) Strong data manipulation and analysis skills using Pandas, NumPy , and SQL Knowledge of software engineering best practices: version control (Git), CI/CD, unit testing Preferred Skills: Experience with MLOps tools (MLflow, Kubeflow, SageMaker, etc.) Familiarity with big data technologies like Spark, Hadoop, or distributed training frameworks Experience working in Fintech environments would be a plus Strong problem-solving mindset with excellent communication skills Experience in working with vector database. Understanding of RAG vs Fine-tuning vs Prompt Engineering
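For the "RAG vs Fine-tuning vs Prompt Engineering" and vector-database items above, a minimal, library-agnostic sketch of the retrieval step that RAG adds before prompting an LLM. The embeddings and document names are toy values; a real system would call an embedding model and a vector database:

```python
# Minimal sketch of the retrieval step in RAG: rank stored documents by cosine
# similarity to a query embedding, then place the top hits into the prompt.
# Embeddings here are toy vectors; production systems would use an embedding
# model and a vector database instead.
import numpy as np

docs = {
    "refund_policy": np.array([0.9, 0.1, 0.0]),
    "kyc_checklist": np.array([0.1, 0.8, 0.3]),
    "loan_faq": np.array([0.2, 0.7, 0.6]),
}
query_vec = np.array([0.15, 0.75, 0.4])  # embedding of the user question (assumed)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

top = sorted(docs, key=lambda name: cosine(query_vec, docs[name]), reverse=True)[:2]
prompt = "Answer using only this context:\n" + "\n".join(top) + "\nQuestion: ..."
print(top)
print(prompt)
```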
Posted 1 week ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Key Responsibilities Hands-on Development: Develop and implement machine learning models and algorithms, including supervised, unsupervised, deep learning, and reinforcement learning techniques. Implement Generative AI solutions using technologies like RAG (Retrieval-Augmented Generation), Vector DBs, and frameworks such as LangChain, Hugging Face, and Agentic AI. Utilize popular AI/ML frameworks and libraries such as TensorFlow, PyTorch, and scikit-learn. Design and deploy NLP models and techniques, including text classification, RNNs, CNNs, and Transformer-based models like BERT. Ensure robust end-to-end AI/ML solutions, from data preprocessing and feature engineering to model deployment and monitoring. Technical Proficiency: Demonstrate strong programming skills in languages commonly used for data science and ML, particularly Python. Leverage cloud platforms and services for AI/ML, especially AWS, with knowledge of AWS SageMaker, Lambda, DynamoDB, S3, and other AWS resources. Mentorship: Mentor and coach a team of data scientists and machine learning engineers, fostering skill development and professional growth. Provide technical guidance and support, helping team members overcome challenges and achieve project goals. Set technical direction and strategy for AI/ML projects, ensuring alignment with business goals and objectives. Facilitate knowledge sharing and collaboration within the team, promoting best practices and continuous learning. Strategic Advisory: Collaborate with cross-functional teams to integrate AI/ML solutions into business processes and products. Provide strategic insights and recommendations to support decision-making processes. Communicate effectively with stakeholders at various levels, including technical and non-technical audiences. Qualifications Bachelor’s degree in a relevant field (e.g., Computer Science) or equivalent combination of education and experience. Typically 8-10 years of relevant work experience in AI/ML/GenAI and 15+ years of overall work experience, with a proven ability to manage projects and activities. Extensive experience with generative AI technologies, including RAG, Vector DBs, and frameworks such as LangChain, Hugging Face, and Agentic AI. Proficiency in machine learning algorithms and techniques, including supervised and unsupervised learning, deep learning, and reinforcement learning. Extensive experience with AI/ML frameworks and libraries such as TensorFlow, PyTorch, and scikit-learn. Strong knowledge of natural language processing (NLP) techniques and models, including Transformer-based models like BERT. Proficient programming skills in Python and experience with cloud platforms like AWS. Experience with AWS Cloud Resources, including AWS SageMaker, Lambda, DynamoDB, S3, etc., is a plus. Proven experience leading a team of data scientists or machine learning engineers on complex projects. Strong project management skills, with the ability to prioritize tasks, allocate resources, and meet deadlines. Excellent communication skills and the ability to convey complex technical concepts to diverse audiences. Preferred Qualifications Experience in setting technical direction and strategy for AI/ML projects. Experience in the Insurance domain. Ability to mentor and coach junior team members, fostering growth and development. Proven track record of successfully managing AI/ML projects from conception to deployment.
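As a small illustration of the Transformer-based text classification this role calls for, a hedged sketch using the Hugging Face pipeline API (the checkpoint and example texts are assumptions; a domain-specific model would normally be fine-tuned separately):

```python
# Minimal sketch: BERT-family text classification via the Hugging Face pipeline API.
# The checkpoint is an assumption; an insurance-domain model would usually be fine-tuned.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

claims = [
    "The adjuster resolved my claim quickly and kept me informed.",
    "I have been waiting three months with no response on my policy.",
]
for text, result in zip(claims, classifier(claims)):
    print(f"{result['label']} ({result['score']:.2f}): {text}")
```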
Posted 1 week ago
0 years
0 Lacs
Gandhinagar, Gujarat, India
On-site
Roles & Responsibilities : Annotate and label datasets accurately using specialized tools and guidelines Review and correct existing annotations to ensure data quality Collaborate with machine learning engineers and data scientists to understand annotation requirements Follow detailed instructions and apply judgment to edge cases and ambiguous data Meet project deadlines and maintain high levels of accuracy and efficiency Provide feedback to improve annotation guidelines and workflows Participate in training sessions to stay updated on evolving tools and techniques Requirements : BA, BBA, B.Com, B.Tech, BCA, and other Management streams Strong attention to detail and ability to follow complex instructions Basic computer skills and familiarity with data entry or annotation tools Good communication skills and the ability to work independently or in a team Freshers are eligible to apply for the role. Experience with data labeling tools (e.g., Labelbox, CVAT, Scale AI, Amazon SageMaker Ground Truth) is a plus Familiarity with AI/ML concepts is a bonus Perks and Benefits Salary: 2.5 LPA - 3.0 LPA Medicare Benefits Both side cab facilities. Medical Insurance Life Insurance
Posted 1 week ago
0.0 - 3.0 years
3 - 12 Lacs
Mohali, Punjab
On-site
We are seeking a skilled and experienced DevOps Engineer with expertise in architecting, implementing, and managing hybrid cloud infrastructure to enable seamless deployment and scaling of high-performance applications and machine learning workloads. Proven experience in cloud services, on-premises systems, container orchestration, automation, and multi-database management. Key Responsibilities & Experience : - Designed, implemented, and managed scalable AWS infrastructure leveraging services such as EC2, ECS, Lambda, S3, DynamoDB, Cognito, SageMaker, Amazon ECR, SES, Route 53, VPC Peering, and Site-to-Site VPN to support secure, high-performance, and resilient cloud environments. - Applied best practices in network security, including firewall configuration, IAM policy management. - Architected and maintained large-scale, multi-database systems integrating PostgreSQL, MongoDB, DynamoDB, and Elasticsearch to support millions of records, low-latency search, and real-time analytics. - Built and maintained CI/CD pipelines using GitHub Actions and Jenkins, enabling automated testing, Docker builds, and seamless deployments to production. - Managed containerized deployments using Docker, and orchestrated services using Amazon ECS for scalable and resilient application environments. - Implemented and maintained IaC frameworks using Terraform, AWS CloudFormation, and Ansible to ensure consistent, repeatable, and scalable infrastructure deployments. - Developed Ansible playbooks to automate system provisioning, OS-level configurations, and application deployments across hybrid environments. - Configured Amazon CloudWatch and Zabbix for proactive monitoring, health checks, and custom alerts to maintain system reliability and uptime. - Administered Linux-based servers, applied system hardening techniques, and maintained OS-level and network security best practices. - Managed SSL/TLS certificates, configured DNS records, and integrated email services using Amazon SES and SMTP tools. - Deployed and managed infrastructure for ML workloads using AWS SageMaker, optimizing model training, hosting, and resource utilization for cost-effective performance. Preferred Qualifications : - 3+ years of experience in DevOps, Cloud Infrastructure - Bachelors degree in Computer Science, Engineering - Experience deploying and managing machine learning models - Hands-on experience managing multi-node Elasticsearch clusters and designing scalable, high-performance search infrastructure. - Experience designing and operating hybrid cloud architectures, integrating on-premises and cloud-based systems Job Types: Full-time, Permanent Pay: ₹300,000.00 - ₹1,200,000.00 per year Benefits: Flexible schedule Paid sick time Paid time off Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Preferred) Education: Bachelor's (Preferred) Experience: DevOps: 3 years (Required) Work Location: In person Speak with the employer +91 8360518086
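As one small illustration of the monitoring work described above, a hedged boto3 sketch (namespace, metric names, region, and endpoint URL are assumptions) that publishes a custom health-check metric to Amazon CloudWatch:

```python
# Minimal sketch: pushing a custom health-check metric to CloudWatch with boto3.
# Namespace, metric names, and dimension values are illustrative assumptions;
# credentials and region are expected to come from the environment or an IAM role.
import time
import boto3
import requests

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

def report_health(url: str) -> None:
    start = time.time()
    try:
        ok = requests.get(url, timeout=5).status_code == 200
    except requests.RequestException:
        ok = False
    cloudwatch.put_metric_data(
        Namespace="Custom/AppHealth",
        MetricData=[
            {"MetricName": "Up", "Value": 1.0 if ok else 0.0, "Unit": "Count",
             "Dimensions": [{"Name": "Endpoint", "Value": url}]},
            {"MetricName": "LatencySeconds", "Value": time.time() - start, "Unit": "Seconds"},
        ],
    )

report_health("https://example.com/healthz")  # assumed endpoint
```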
Posted 1 week ago
6.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Role: Lead Python/AI Developer Experience: 6/6+ Years Location: Ahmedabad (Gujarat) Roles and Responsibilities: • Helping the Python/AI team in building Python/AI solutions architectures leveraging source technologies • Driving the technical discussions with clients along with Project Managers. • Creating Effort Estimation matrix of Solutions/Deliverables for Delivery Team • Implementing AI solutions and architectures, including data pre-processing, feature engineering, model deployment, compatibility with downstream tasks, edge/error handling. • Collaborating with cross-functional teams, such as machine learning engineers, software engineers, and product managers, to identify business needs and provide technical guidance. • Mentoring and coaching junior Python/AI/ML engineers. • Sharing knowledge through knowledge-sharing technical presentations. • Implement new Python/AI features with high quality coding standards. Must-To Have: • B.Tech/B.E. in computer science, IT, Data Science, ML or related field. • Strong proficiency in Python programming language. • Strong Verbal, Written Communication Skills with Analytics and Problem-Solving. • Proficient in Debugging and Exception Handling • Professional experience in developing and operating AI systems in production. • Hands-on, strong programming skills with experience in python, in particular modern ML & NLP frameworks (scikit-learn, pytorch, tensorflow, huggingface, SpaCy, Facebook AI XLM/mBERT etc.) • Hands-on experience with AWS services such as EC2, S3, Lambda, AWS SageMaker. • Experience with collaborative development workflow: version control (we use github), code reviews, DevOps (incl automated testing), CI/CD. • Comfort with essential tools & libraries: Git, Docker, GitHub, Postman, NumPy, SciPy, Matplotlib, Seaborn, or Plotly, Pandas. • Prior Experience in relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB). • Experience in working in Agile methodology Good-To Have: • A Master’s degree or Ph.D. in Computer Science, Machine Learning, or a related quantitative field. • Python framework (Django/Flask/Fast API) & API integration. • AI/ML/DL/MLOops certification done by AWS. • Experience with OpenAI API. • Good in Japanese Language
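For the Python framework and API-integration items listed here, a minimal FastAPI sketch (the route, payload shape, and model artifact name are assumptions) exposing a trained model behind a REST endpoint:

```python
# Minimal sketch: serving a pickled scikit-learn model over a FastAPI endpoint.
# The model path, feature layout, and route are illustrative assumptions.
import pickle
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
with open("model.pkl", "rb") as f:  # assumed artifact produced by a training job
    model = pickle.load(f)

class Features(BaseModel):
    values: list[float]

@app.post("/predict")
def predict(features: Features) -> dict:
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}

# Run with: uvicorn app:app --reload   (assuming this file is saved as app.py)
```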
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Key Responsibilities: Machine Learning Solution Development: Design, develop and deploy ML models, algorithms and agentic AI systems to address complex business challenges across a range of sectors. Cloud & MLOps Management: Lead the implementation of ML solutions on AWS cloud (with heavy use of Amazon SageMaker and related AWS services). Develop and maintain end-to-end CI/CD pipelines for ML projects, using infrastructure-as-code tools like AWS CloudFormation and Terraform to automate model deployment and system setup. Project Leadership: Oversee the ML lifecycle from data preparation to model training, validation, and deployment. Make high-level design decisions on model architecture and data pipelines. Mentor junior engineers and collaborate with data scientists, ML engineers, and Software Engineering teams to ensure successful delivery of ML projects. Client & Stakeholder Collaboration: Collaborate with project managers and stakeholders across a range of sectors to gather requirements and translate business needs into technical solutions. Present findings and ML model results to non-technical audiences in a clear manner, and refine solutions based on their feedback. Quality, Security & Compliance: Ensure that ML solutions meet quality and performance standards. Implement monitoring and logging for models in production, and proactively improve model accuracy and efficiency. Given the sensitive nature of our data, enforce data security best practices and compliance with relevant regulations (e.g. data privacy and confidentiality) in all ML workflows. Required Qualifications & Experience: Education: Bachelor’s or Master’s degree in Computer Science, Data Science, Machine Learning, or related field. Strong foundation in statistics and algorithms is expected. Experience: 5+ years of hands-on experience in machine learning or data science roles, with a track record of building and deploying ML models into production. Prior experience leading projects or teams is a plus for a lead role. Programming & ML Skills: Advanced programming skills in Python (including libraries such as pandas, scikit-learn, TensorFlow/PyTorch). Solid understanding of ML algorithms, model evaluation techniques, and optimisation. Experience with NLP techniques, generative AI or financial data modelling is advantageous. Cloud & DevOps: Proven experience with AWS cloud services relevant to data science – particularly Amazon SageMaker for model development and deployment. Familiarity with data storage and processing on AWS (S3, AWS Lambda, Athena/Redshift, etc.) is expected. Strong knowledge of DevOps/MLOps practices – candidates should have built or worked with CI/CD pipelines for ML, using tools like Docker and Jenkins, and infrastructure-as-code tools like CloudFormation or Terraform to automate deployments. Hybrid Work Skills: Ability to thrive in a hybrid work environment – should be self-motivated and communicative when working remotely, and effective at in-person collaboration during on-site days. (The role will be based in Chennai with a mix of remote and office work.) Soft Skills: Excellent problem-solving and analytical thinking. Strong communication skills to explain complex ML concepts to clients or management. Ability to work under tight deadlines and multitask across projects for different clients. A client-focused mindset is essential, as the role involves understanding and addressing the needs of large clients who come to us because they trust us.
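To ground the SageMaker deployment responsibilities above, a hedged boto3 sketch (the endpoint name and JSON payload schema are assumptions; they depend on how the model was packaged and deployed) of calling a model already hosted as a real-time SageMaker endpoint:

```python
# Minimal sketch: invoking an existing SageMaker real-time endpoint with boto3.
# Endpoint name and payload schema are assumptions tied to how the model was deployed.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {"instances": [[0.3, 1.2, 5.0, 0.7]]}
response = runtime.invoke_endpoint(
    EndpointName="demand-forecast-prod",   # assumed endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
print(json.loads(response["Body"].read()))
```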
Posted 1 week ago
8.0 years
0 Lacs
India
Remote
About Cloud202 Cloud202 is an AWS Advanced Services and Differentiated Software Partner specializing in Generative AI-driven modernization and cloud transformation. We serve startups and SMBs primarily in the UKI region, helping them leverage AWS Cloud capabilities through our proprietary AI accelerator platform, Our AI Delivery & Ops platform, and comprehensive consulting services. Role Overview We are seeking an experienced Delivery Head - Cloud & AI to lead technical project delivery for our growing portfolio of cloud migration, AI implementation, and modernization projects. This role requires a seasoned professional who can bridge the gap between complex technical requirements and successful project outcomes while managing distributed teams and client relationships. Key Responsibilities Project Delivery & Management Lead end-to-end delivery of complex cloud migration, AI implementation, and modernization projects for UKI startups and SMBs Oversee technical delivery of the key sales plays: WAFR with GenAI Lens, Migration & Modernization, GenAI Projects using Our AI Delivery & Ops platform Coordinate with cross-functional teams including sales, customer success, and technical teams to ensure seamless project execution Establish and maintain project governance frameworks, ensuring adherence to timelines, budgets, and quality standards Technical Leadership Design and architect cloud solutions leveraging AWS services (Bedrock, SageMaker, CloudFormation, EKS) aligned with customer requirements Drive Our AI Delivery & Ops platform adoption and customization for client-specific AI use cases and industry requirements Lead technical assessments and provide recommendations for cloud optimization, cost efficiency, and performance improvements Ensure compliance with AWS Well-Architected Framework principles and security best practices Mentor and guide junior technical team members and offshore development teams Client & Stakeholder Management Serve as primary technical point of contact for key enterprise clients during project delivery phases Conduct technical workshops and training sessions for client teams on AWS cloud services and AI implementations Manage stakeholder expectations and provide regular project updates to both internal leadership and external clients Drive customer satisfaction and ensure successful project outcomes that lead to long-term partnerships Strategic Initiatives Contribute to business development by providing technical input for proposals and solution design Develop reusable frameworks and best practices for rapid project delivery and scalability Stay current with emerging technologies and integrate relevant innovations into project delivery methodologies Required Qualifications Technical Expertise 8+ years of experience in cloud project delivery, with minimum 5 years focused on AWS cloud services Strong expertise in AWS services including EC2, S3, Lambda, CloudFormation, EKS, Bedrock, SageMaker, and other AI/ML services Proven experience with Generative AI implementation, LLM deployment, and AI model optimization Hands-on experience with cloud migration methodologies, modernization frameworks, and DevOps practices Proficiency in Infrastructure as Code (CloudFormation or Terraform) and containerization technologies Understanding of AI governance, model selection, and scalability considerations for enterprise environments Project Management & Leadership Demonstrated experience managing multiple complex technical projects simultaneously Strong leadership skills with experience managing distributed teams across different time zones Excellent stakeholder management abilities with both technical and business audiences Experience with Agile/Scrum methodologies and modern project management tools Industry & Compliance Knowledge AWS certifications required: Solutions Architect Professional, DevOps Engineer Professional, or equivalent Experience with compliance frameworks including SOC2, ISO27001, GDPR considerations for cloud implementations Understanding of cost optimization strategies and FinOps principles for cloud environments Knowledge of industry-specific requirements for startups and SMB cloud adoption challenges Preferred Qualifications Previous experience working with UKI market clients and understanding of regional business requirements Advanced degree in Computer Science, Engineering, or related technical field Key Performance Indicators Project delivery success rate (on-time, on-budget, meeting quality standards) Customer satisfaction scores (CSAT) maintaining >4.5/5.0 average Revenue delivery against assigned project portfolio targets Team efficiency metrics including resource utilization and delivery velocity Client retention and expansion rates for managed accounts What We Offer Competitive salary with performance-based bonuses Comprehensive benefits package including health insurance and professional development budget AWS training and certification support with full expense coverage Flexible working arrangements with remote/hybrid options Career growth opportunities in a rapidly expanding AWS partner organization Direct collaboration with AWS field teams and access to partner resources
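For the Bedrock-based GenAI delivery this role covers, a hedged sketch of a single text-generation call through the Bedrock runtime API. The model ID, region, and request schema are assumptions; available models and their payload formats differ by account, region, and provider:

```python
# Minimal sketch: one text-generation call through Amazon Bedrock's runtime API.
# Model ID, region, and request schema are assumptions and vary by provider/account.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="eu-west-2")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 300,
    "messages": [{"role": "user", "content": "Summarise the AWS Well-Architected pillars."}],
}
response = bedrock.invoke_model(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
    body=json.dumps(body),
)
result = json.loads(response["body"].read())
print(result["content"][0]["text"])
```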
Posted 1 week ago
8.0 years
0 Lacs
Sholinganallur, Tamil Nadu, India
On-site
Role: MLE + Vertex AI Mode: Permanent - Full time Exp: 4-8 years Job Description: The candidate should be a self-starter and be able to contribute independently in the absence of any guidance. Strong Vertex AI experience is a prerequisite, including moving multiple MLE workloads onto Vertex AI. The client is not looking to act as a guide or mentor. They are seeking an MLE with hands-on experience in delivering machine learning solutions using Vertex AI and strong Python skills. The person must have 5+ years of experience, with 3+ in MLE. Advanced knowledge of machine learning, engineering industry frameworks, and professional standards. Demonstrated proficiency using cloud technologies and integrating with ML services including GCP Vertex AI, DataRobot or AWS SageMaker in large and complex organisations, and experience with SQL and Python environments. Experience in technology delivery, waterfall and agile. Python and SQL skills. Experience with distributed programming (e.g., Apache Spark, PySpark). Software engineering experience/skills. Experience working with big data cloud platforms (Azure, Google Cloud Platform, AWS). DevOps experience. CI/CD experience. Experience with Unit Testing, TDD. Experience with Infrastructure as Code. Direct client interaction. Must Have Skills: Vertex AI, MLE, AWS, Python, SQL. Interested candidates can reach us @7338773388 or careers@w2ssolutions.com & hr@way2smile.com
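For the Vertex AI workloads described above, a hedged sketch using the google-cloud-aiplatform SDK (the project ID, region, endpoint ID, and instance schema are placeholders) of sending an online prediction request to a deployed Vertex AI endpoint:

```python
# Minimal sketch: online prediction against a deployed Vertex AI endpoint.
# Project ID, region, endpoint ID, and the instance schema are placeholders;
# the real schema depends on how the model was trained and uploaded.
from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="asia-south1")

endpoint = aiplatform.Endpoint(
    "projects/my-gcp-project/locations/asia-south1/endpoints/1234567890"
)
prediction = endpoint.predict(instances=[{"feature_a": 3.2, "feature_b": "retail"}])
print(prediction.predictions)
```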
Posted 1 week ago
6.0 years
5 - 15 Lacs
Jodhpur Char Rasta, Ahmedabad, Gujarat
On-site
Role: Lead Python/AI Developer Experience: 6/6+ Years Location: Ahmedabad (Gujarat) Roles and Responsibilities: Helping the Python/AI team in building Python/AI solutions architectures leveraging source technologies Driving the technical discussions with clients along with Project Managers. Creating Effort Estimation matrix of Solutions/Deliverables for Delivery Team Implementing AI solutions and architectures, including data pre-processing, feature engineering, model deployment, compatibility with downstream tasks, edge/error handling. Collaborating with cross-functional teams, such as machine learning engineers, software engineers, and product managers, to identify business needs and provide technical guidance. Mentoring and coaching junior Python/AI/ML engineers. Sharing knowledge through knowledge-sharing technical presentations. Implement new Python/AI features with high quality coding standards. Must-To Have: B.Tech/B.E. in computer science, IT, Data Science, ML or related field. Strong proficiency in Python programming language. Strong Verbal, Written Communication Skills with Analytics and Problem-Solving. Proficient in Debugging and Exception Handling Professional experience in developing and operating AI systems in production. Hands-on, strong programming skills with experience in python, in particular modern ML & NLP frameworks (scikit-learn, pytorch, tensorflow, huggingface, SpaCy, Facebook AI XLM/mBERT etc.) Hands-on experience with AWS services such as EC2, S3, Lambda, AWS SageMaker. Experience with collaborative development workflow: version control (we use github), code reviews, DevOps (incl automated testing), CI/CD. Comfort with essential tools & libraries: Git, Docker, GitHub, Postman, NumPy, SciPy, Matplotlib, Seaborn, or Plotly, Pandas. Prior Experience in relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB). Experience in working in Agile methodology Good-To Have: A Master’s degree or Ph.D. in Computer Science, Machine Learning, or a related quantitative field. Python framework (Django/Flask/Fast API) & API integration. AI/ML/DL/MLOops certification done by AWS. Experience with OpenAI API. Good in Japanese Language Job Types: Full-time, Permanent Pay: ₹500,000.00 - ₹1,500,000.00 per year Benefits: Provident Fund Work Location: In person Expected Start Date: 14/08/2025
Posted 1 week ago
2.0 - 5.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Job Title: Data Scientist Job Location: Jaipur Experience: 2 to 5 years Job Description: We are seeking a highly skilled and innovative Data Scientist to join our dynamic and forward-thinking team. This role is ideal for someone who is passionate about advancing the fields of Classical Machine Learning, Conversational AI, and Deep Learning Systems, and thrives on translating complex mathematical challenges into actionable machine learning models. The successful candidate will focus on developing, designing, and maintaining cutting-edge AI-based systems, ensuring seamless and engaging user experiences. Additionally, the role involves active participation in a wide variety of Natural Language Processing (NLP) tasks, including refining and optimizing prompts to enhance the performance of Large Language Models (LLMs). Key Responsibilities: • Generative AI Solutions: Develop innovative Generative AI solutions using machine learning and AI technologies, including building and fine-tuning models such as GANs, VAEs, and Transformers. • Classical ML Models: Design and develop machine learning models (regression, decision trees, SVMs, random forests, gradient boosting, clustering, dimensionality reduction) to address complex business challenges. • Deep Learning Systems: Train, fine-tune, and deploy deep learning models such as CNNs, RNNs, LSTMs, GANs, and Transformers to solve AI problems and optimize performance. • NLP and LLM Optimization: Participate in Natural Language Processing activities, refining and optimizing prompts to improve outcomes for Large Language Models (LLMs), such as GPT, BERT, and T5. • Data Management & Feature Engineering: Work with large datasets, perform data preprocessing, augmentation, and feature engineering to prepare data for machine learning and deep learning models. • Model Evaluation & Monitoring: Fine-tune models through hyperparameter optimization (grid search, random search, Bayesian optimization) to improve performance metrics (accuracy, precision, recall, F1-score). Monitor model performance to address drift, overfitting, and bias. • Code Review & Design Optimization: Participate in code and design reviews, ensuring quality and scalability in system architecture and development. Work closely with other engineers to review algorithms, validate models, and improve overall system efficiency. • Collaboration & Research: Collaborate with cross-functional teams including data scientists, engineers, and product managers to integrate machine learning solutions into production. Stay up to date with the latest AI/ML trends and research, applying cutting-edge techniques to projects. Qualifications: • Educational Background: Bachelor’s or Master’s degree in Computer Science, Mathematics, Statistics, Data Science, or any related field. • Experience in Machine Learning: Extensive experience in both classical machine learning techniques (e.g., regression, SVM, decision trees) and deep learning systems (e.g., neural networks, transformers). Experience with frameworks such as TensorFlow, PyTorch, or Keras. • Natural Language Processing Expertise: Proven experience in NLP, especially with Large Language Models (LLMs) like GPT, BERT, or T5. Experience in prompt engineering, fine-tuning, and optimizing model outcomes is a strong plus. • Programming Skills: Proficiency in Python and relevant libraries such as NumPy, Pandas, Scikit-learn, and natural language processing libraries (e.g., Hugging Face Transformers, NLTK, SpaCy). • Mathematical & Statistical Knowledge: Strong understanding of statistical modeling, probability theory, and mathematical optimization techniques used in machine learning. • Model Deployment & Automation: Experience with deploying machine learning models into production environments using platforms such as AWS SageMaker, Azure ML, GCP AI, or similar. Familiarity with MLOps practices is an advantage. • Code Review & System Design: Experience in code review, design optimization, and ensuring quality in large-scale AI/ML systems. Understanding of distributed computing and parallel processing is a plus. Soft Skills & Behavioural Qualifications: • Must be a good team player and self-motivated to achieve positive results • Must have excellent communication skills in English. • Exhibits strong presentation skills with attention to detail. • It’s essential to have a strong aptitude for learning new techniques. • Takes ownership for responsibilities • Demonstrates a high degree of reliability, integrity, and trustworthiness • Ability to manage time, displays appropriate sense of urgency and meet/exceed all deadlines • Ability to accurately process high volumes of work within established deadlines. Interested candidates can share their CV or a reference at sulabh.tailang@celebaltech.com
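A small sketch of the hyperparameter optimization mentioned under Model Evaluation & Monitoring, using scikit-learn's GridSearchCV scored on F1. The data and parameter grid are invented for illustration:

```python
# Minimal sketch: grid-search hyperparameter tuning scored on F1, one of the
# optimization approaches listed above. Data and the parameter grid are invented.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8, 0.2], random_state=42)

param_grid = {"n_estimators": [100, 300], "learning_rate": [0.05, 0.1], "max_depth": [2, 3]}
search = GridSearchCV(
    GradientBoostingClassifier(random_state=42),
    param_grid,
    scoring="f1",
    cv=5,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```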
Posted 1 week ago
10.0 - 15.0 years
6 - 11 Lacs
Bengaluru, Karnataka, India
On-site
Key Responsibilities Develop and train machine learning models using Python, scikit-learn, and other ML frameworks. Conduct data wrangling, analysis, and feature engineering using Pandas, SQL, and IPython/Jupyter Notebooks. Architect and implement reproducible data pipelines with Kedro. Build and integrate intelligent applications leveraging Langchain and LLM ecosystems. Develop and deploy scalable ML services on AWS, utilizing services like S3, EC2, Lambda, SageMaker, etc. Containerize applications using Docker and orchestrate deployments with Kubernetes. Manage infrastructure as code using Terraform for provisioning cloud resources. Set up CI/CD pipelines for ML workflows using Jenkins or similar tools. Collaborate using Git for version control and code reviews. Monitor and optimize ML models in production for performance, drift, and reliability. Document technical designs, processes, and results to ensure reproducibility and knowledge sharing. Required Skills and Experience 10+ years of experience in Machine Learning Engineering, Data Engineering, or a similar technical role. Proficiency in Python and solid experience with libraries like scikit-learn, Pandas, and IPython/Jupyter Notebook for data analysis and modelling. Experience developing and orchestrating ML pipelines using Kedro or similar tools. Hands-on expertise deploying applications and ML services on AWS. Strong understanding of Docker and container-based deployments. Practical experience with Kubernetes for orchestration and scaling of ML workloads. Knowledge of Terraform for managing infrastructure as code. Experience setting up and managing CI/CD pipelines using Jenkins or equivalent tools. Solid SQL skills for data extraction, transformation, and analysis. Experience working with Langchain and LLM-based solutions is a significant plus. Familiarity with Git-based development workflows. Strong problem-solving skills and the ability to work independently and collaboratively. Nice to Have Experience with MLOps best practices (e.g., model versioning, monitoring, feature stores). Exposure to additional cloud platforms (e.g., GCP, Azure). Familiarity with security and compliance considerations in ML deployments. Experience working in Agile teams and collaborating closely with cross-functional stakeholders. At DXC Technology, we believe strong connections and community are key to our success. Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We re committed to fostering an inclusive environment where everyone can thrive.
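To illustrate the Kedro-based reproducible pipelines this role mentions, a hedged sketch of a two-node pipeline. The node functions and dataset names are assumptions; in a real Kedro project the inputs and outputs would map to entries in the Data Catalog (conf/base/catalog.yml):

```python
# Minimal sketch: a two-node Kedro pipeline. Function bodies and dataset names
# are illustrative assumptions; inputs/outputs normally map to catalog entries.
import numpy as np
import pandas as pd
from kedro.pipeline import node, pipeline

def clean_transactions(raw: pd.DataFrame) -> pd.DataFrame:
    return raw.dropna(subset=["amount"])

def add_features(clean: pd.DataFrame) -> pd.DataFrame:
    return clean.assign(log_amount=np.log1p(clean["amount"].abs()))

feature_pipeline = pipeline([
    node(clean_transactions, inputs="raw_transactions", outputs="clean_transactions", name="clean"),
    node(add_features, inputs="clean_transactions", outputs="model_input_table", name="features"),
])
```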
Posted 1 week ago
1.0 - 9.0 years
0 - 2 Lacs
Bengaluru, Karnataka, India
On-site
Key Responsibilities Develop, train, and evaluate machine learning models using Python, scikit-learn, and related libraries. Design and build robust data pipelines and workflows leveraging Pandas, SQL, and Kedro. Create clear, reproducible analysis and reports in Jupyter Notebooks. Integrate machine learning models and data pipelines into production environments on AWS. Work with Langchain to build applications leveraging large language models and natural language processing workflows. Collaborate closely with data engineers, product managers, and business stakeholders to understand requirements and deliver impactful solutions. Optimize and monitor model performance in production and drive continuous improvement. Follow best practices for code quality, version control, and documentation. Required Skills and Experience 7+ years of professional experience in Data Science, Machine Learning, or a related field. Strong proficiency in Python and machine learning frameworks, especially scikit-learn. Deep experience working with data manipulation and analysis tools such as Pandas and SQL. Hands-on experience creating and sharing analyses in Jupyter Notebooks. Solid understanding of cloud services, particularly AWS (S3, EC2, Lambda, SageMaker, etc. ). Experience with Kedro for pipeline development and reproducibility. Familiarity with Langchain and building applications leveraging LLMs is a strong plus. Ability to communicate complex technical concepts clearly to non-technical audiences. Strong problem-solving skills and a collaborative mindset. Nice to Have Experience with MLOps tools and practices (model monitoring, CI/CD pipelines for ML). Exposure to other cloud platforms (GCP, Azure). Knowledge of data visualization libraries (e. g. , Matplotlib, Seaborn, Plotly). Familiarity with modern LLM ecosystems and prompt engineering. At DXC Technology, we believe strong connections and community are key to our success. Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We re committed to fostering an inclusive environment where everyone can thrive.
Posted 1 week ago
2.0 - 8.0 years
0 - 2 Lacs
Chennai, Tamil Nadu, India
On-site
Key Responsibilities Develop, train, and evaluate machine learning models using Python, scikit-learn, and related libraries. Design and build robust data pipelines and workflows leveraging Pandas, SQL, and Kedro. Create clear, reproducible analysis and reports in Jupyter Notebooks. Integrate machine learning models and data pipelines into production environments on AWS. Work with Langchain to build applications leveraging large language models and natural language processing workflows. Collaborate closely with data engineers, product managers, and business stakeholders to understand requirements and deliver impactful solutions. Optimize and monitor model performance in production and drive continuous improvement. Follow best practices for code quality, version control, and documentation. Required Skills and Experience 7+ years of professional experience in Data Science, Machine Learning, or a related field. Strong proficiency in Python and machine learning frameworks, especially scikit-learn. Deep experience working with data manipulation and analysis tools such as Pandas and SQL. Hands-on experience creating and sharing analyses in Jupyter Notebooks. Solid understanding of cloud services, particularly AWS (S3, EC2, Lambda, SageMaker, etc. ). Experience with Kedro for pipeline development and reproducibility. Familiarity with Langchain and building applications leveraging LLMs is a strong plus. Ability to communicate complex technical concepts clearly to non-technical audiences. Strong problem-solving skills and a collaborative mindset. Nice to Have Experience with MLOps tools and practices (model monitoring, CI/CD pipelines for ML). Exposure to other cloud platforms (GCP, Azure). Knowledge of data visualization libraries (e. g. , Matplotlib, Seaborn, Plotly). Familiarity with modern LLM ecosystems and prompt engineering. At DXC Technology, we believe strong connections and community are key to our success. Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We re committed to fostering an inclusive environment where everyone can thrive.
Posted 1 week ago
0.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Designation: Assistant Manager – Data Science Level: L3 Experience: 5 to 10 years Location: Chennai Job Description: We are seeking a highly skilled and motivated Senior Data Scientist who thrives in a dynamic environment and is ready to take on challenging, high-impact projects in a fast-growing, collaborative team working at the cutting edge of analytics in the Financial Services domain. Responsibilities: Partner with stakeholders to translate complex business requirements into Machine Learning problem statements. Work with cross-functional teams to ensure data availability, quality, and accessibility. Design, build, and deploy scalable Machine Learning and Deep Learning models, particularly for heavily imbalanced datasets. Collaborate with the MLOps team to develop robust data pipelines and ensure seamless model integration. Present findings and insights to senior stakeholders in a clear, business-friendly manner. Mentor and guide junior analysts, helping cultivate a strong data-driven culture within the team. Skills: 5 - 10 years of experience in data science and machine learning. Highly proficient in SQL for data preparation and manipulation. Highly proficient in Python and in developing and deploying Machine Learning and Deep Learning models. Ability to interpret model outputs and translate data into actionable business strategies. Excellent communication and interpersonal skills to work across functions and present to diverse stakeholders. Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, or a related quantitative field. Nice to Have Skills: Experience in using Pyspark for implementing and deploying the models. Experience in payments domain and fraud modeling using feedzai. Exposure to MLOps and model deployment in a production environment using AWS with experience in S3, Sagemaker, Feature Store, Cloudwatch, Event bridge, Athena. Experience working with Gitlab Job Snapshot Updated Date 23-07-2025 Job ID J_3908 Location Chennai, Tamil Nadu, India Experience 5 - 10 Years Employee Type Permanent
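For the heavily imbalanced datasets called out above, a small sketch (synthetic data) of one common mitigation: class weighting during training plus a threshold-free evaluation metric such as average precision:

```python
# Minimal sketch: handling class imbalance with class weighting and evaluating
# with average precision (PR-AUC), which is more informative than accuracy here.
# The data is synthetic; real fraud data would come from the feature pipeline.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20000, n_features=15, weights=[0.99, 0.01], random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=7)

model = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]
print(f"Average precision: {average_precision_score(y_te, scores):.3f}")
```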
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Skill required: Delivery - Advanced Analytics Designation: I&F Decision Science Practitioner Specialist Qualifications: Master of Engineering,Masters in Business Economics Years of Experience: 7 to 11 Years About Accenture Accenture is a global professional services company with leading capabilities in digital, cloud and security.Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song— all powered by the world’s largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. We embrace the power of change to create value and shared success for our clients, people, shareholders, partners and communities.Visit us at www.accenture.com What would you do? Data & AI. You will be a core member of Accenture Operations global Data & AI group, an energetic, strategic, high-visibility and high-impact team, to innovate and transform the Accenture Operations business using machine learning, advanced analytics to support data-driven decisioning. What are we looking for? Extensive experience in leading Data Science and Advanced Analytics delivery teams Strong statistical programming experience – Python or working knowledge on cloud native platforms like AWS Sagemaker is preferred Azure/ GCP Experience working with large data sets and big data tools like AWS, SQL, PySpark, etc. Solid knowledge in at least more than two of the following – Supervised and Unsupervised Learning, Classification, Regression, Clustering, Neural Networks, Ensemble Modelling (random forest, boosted tree, etc) Experience in working with Pricing models is a plus Experience in atleast one of these business domains: Energy, CPG, Retail, Marketing Analytics, Customer Analytics, Digital Marketing, eCommerce, Health, Supply Chain Extensive experience in client engagement and business development Ability to work in a global collaborative team environment Quick Learner and Independently deliver results.Qualifications: Masters / Ph.D. Computer science, Engineering, Statistics, Mathematics, Economics or related disciplines. Roles and Responsibilities: Leading team of data scientists to build and deploy data science models to uncover deeper insights, predict future outcomes, and optimize business processes for clients. Refining and improving data science models based on feedback, new data, and evolving business needs. Analyze available data to identify opportunities for enhancing brand equity, improving retail margins, achieving profitable growth, and expanding market share for clients. Data Scientists in Operations follow multiple approaches for project execution from adapting existing assets to Operations use cases, exploring third-party and open-source solutions for speed to execution and for specific use cases to engaging in fundamental research to develop novel solutions. Data Scientists are expected to collaborate with other data scientists, subject matter experts, sales, and delivery teams from Accenture locations around the globe to deliver strategic advanced machine learning / data-AI solutions from design to deployment.
Posted 1 week ago
3.0 years
0 Lacs
Sahibzada Ajit Singh Nagar, Punjab, India
On-site
We are seeking a skilled and experienced DevOps Engineer with expertise in architecting, implementing, and managing hybrid cloud infrastructure to enable seamless deployment and scaling of high-performance applications and machine learning workloads. Proven experience in cloud services, on-premises systems, container orchestration, automation, and multi-database management. Key Responsibilities & Experience Designed, implemented, and managed scalable AWS infrastructure leveraging services such as EC2, ECS, Lambda, S3, DynamoDB, Cognito, SageMaker, Amazon ECR, SES, Route 53, VPC Peering, and Site-to-Site VPN to support secure, high-performance, and resilient cloud environments. Applied best practices in network security, including firewall configuration, IAM policy management. Architected and maintained large-scale, multi-database systems integrating PostgreSQL, MongoDB, DynamoDB, and Elasticsearch to support millions of records, low-latency search, and real-time analytics. Built and maintained CI/CD pipelines using GitHub Actions and Jenkins, enabling automated testing, Docker builds, and seamless deployments to production. Managed containerized deployments using Docker, and orchestrated services using Amazon ECS for scalable and resilient application environments. Implemented and maintained IaC frameworks using Terraform, AWS CloudFormation, and Ansible to ensure consistent, repeatable, and scalable infrastructure deployments. Developed Ansible playbooks to automate system provisioning, OS-level configurations, and application deployments across hybrid environments. Configured Amazon CloudWatch and Zabbix for proactive monitoring, health checks, and custom alerts to maintain system reliability and uptime. Administered Linux-based servers, applied system hardening techniques, and maintained OS-level and network security best practices. Managed SSL/TLS certificates, configured DNS records, and integrated email services using Amazon SES and SMTP tools. Deployed and managed infrastructure for ML workloads using AWS SageMaker, optimizing model training, hosting, and resource utilization for cost-effective performance. Preferred Qualifications 3+ years of experience in DevOps, Cloud Infrastructure Bachelors degree in Computer Science, Engineering Experience deploying and managing machine learning models Hands-on experience managing multi-node Elasticsearch clusters and designing scalable, high-performance search infrastructure. Experience designing and operating hybrid cloud architectures, integrating on-premises and cloud-based systems (ref:hirist.tech)
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
About the Role: We are looking for a highly skilled and experienced Machine Learning / AI Engineer to join our team at Zenardy. The ideal candidate needs to have a proven track record of building, deploying, and optimizing machine learning models in real-world applications. You will be responsible for designing scalable ML systems, collaborating with cross-functional teams, and driving innovation through AI-powered solutions. Location: Chennai & Hyderabad Key Responsibilities: Design, develop, and deploy machine learning models to solve complex business problems Work across the full ML lifecycle: data collection, preprocessing, model training, evaluation, deployment, and monitoring Collaborate with data engineers, product managers, and software engineers to integrate ML models into production systems Conduct research and stay up-to-date with the latest ML/AI advancements, applying them where appropriate Optimize models for performance, scalability, and robustness Document methodologies, experiments, and findings clearly for both technical and non-technical audiences Mentor junior ML engineers or data scientists as needed Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Machine Learning, Data Science, or related field Minimum of 3 hands-on ML/AI projects, preferably in production or with real-world datasets Proficiency in Python and ML libraries/frameworks like TensorFlow, PyTorch, Scikit-learn, XGBoost Solid understanding of core ML concepts: supervised/unsupervised learning, neural networks, NLP, computer vision, etc. Experience with model deployment using APIs, containers (Docker), cloud platforms (AWS/GCP/Azure) Strong data manipulation and analysis skills using Pandas, NumPy, and SQL Knowledge of software engineering best practices: version control (Git), CI/CD, unit testing Preferred Skills: Experience with MLOps tools (MLflow, Kubeflow, SageMaker, etc.) Familiarity with big data technologies like Spark, Hadoop, or distributed training frameworks Experience working in Fintech environments would be a plus Strong problem-solving mindset with excellent communication skills Experience in working with vector database. Understanding of RAG vs Fine-tuning vs Prompt Engineering Why Join Us: Work on impactful, real-world AI challenges Collaborate with a passionate and innovative team Opportunities for career advancement and learning Flexible work environment (remote/hybrid options) Competitive compensation and benefits To Apply: Please send your resume, portfolio (if applicable), and a brief summary of your ML/AI projects to ranjana.g@zenardy.com
Posted 1 week ago
15.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position: AI Architect Location: Hyderabad Experience: 15+ Years (with 3–5 years in AI/ML-focused roles) Employment Type: Full-Time About the Role: We are looking for a visionary AI Architect to lead the design and deployment of advanced AI solutions. You’ll work closely with data scientists, engineers, and product teams to translate business needs into scalable, intelligent systems. Key Responsibilities: Design end-to-end AI architectures including data pipelines, ML/DL models, APIs, and deployment frameworks Evaluate AI technologies, frameworks, and platforms to meet business and technical needs Collaborate with cross-functional teams to gather requirements and translate them into AI use cases Build scalable AI systems with focus on performance, robustness, and cost-efficiency Implement MLOps pipelines to streamline model lifecycle Ensure AI governance, data privacy, fairness, and explainability across deployments Mentor and guide engineering and data science teams Required Skills: Expertise in Machine Learning / Deep Learning frameworks (e.g., TensorFlow, PyTorch, Scikit-learn) Strong proficiency in Python and one or more of: R, Java, Scala Deep understanding of cloud platforms (AWS, Azure, GCP) and tools like SageMaker, Bedrock, Vertex AI, Azure ML Familiarity with MLOps tools : MLflow, Kubeflow, Airflow, Docker, Kubernetes Solid understanding of data architecture , APIs , and microservices Knowledge of NLP, computer vision, and generative AI is a plus Excellent communication and leadership skills Preferred Qualifications: Bachelor’s or Master’s in Computer Science, Data Science, AI, or a related field AI/ML certifications (AWS, Azure, Coursera, etc.) are a bonus Experience working with LLMs, RAG architectures, or GenAI platforms is a plus
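As a small illustration of the MLOps pipeline responsibility above, a hedged MLflow tracking sketch (the experiment name, parameters, and backing store are assumptions; by default MLflow logs to a local ./mlruns directory) that records parameters, metrics, and the model artifact for later review or promotion:

```python
# Minimal sketch: tracking a run with MLflow so models are auditable and
# reproducible. Experiment name, params, and tracking store are assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlflow.set_experiment("demo-architecture-review")
with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 5}
    model = RandomForestRegressor(**params, random_state=0).fit(X_tr, y_tr)
    mlflow.log_params(params)
    mlflow.log_metric("r2", r2_score(y_te, model.predict(X_te)))
    mlflow.sklearn.log_model(model, "model")
```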
Posted 1 week ago
2.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
The Senior Statistical Data Analyst is responsible for designing unique analytic approaches to detect, assess, and recommend the optimal customer treatment to reduce frictions and enhance experience while properly managing fraud risk with data driven and statistical methods. You will analyze large amounts of account and transaction data to build customer level insights to derive the recommendations and methods to reduce friction and enhance experience on fund availability, transaction/fund hold time and more, and models while managing the customer experience. This role requires critical thinking and analytical savviness to work in a fast-paced environment but can be a rewarding opportunity to help bring a great banking experience and empower the customers to achieve their financial goals. Responsibilities Analyze large amounts of data/transactions to derive business insights and create innovative solutions/models/strategies. Aggregate and analyze internal and external risk datasets to understand performance of fraud risk at customer level. Analyze customer's banking/transaction behaviors, and be able to build predictive models (simple ones like logistic regression, linear regression) to predict churns or negative outcomes or running correlation analysis to understand the correlation. Develop personalized segmentations and micro-segmentation to identify customers based on their fraud risk, banking behaviorals, and value. Conduct analysis for data driven recommendations with reporting dashboard to optimize customer treatment regarding friction reduction and fund availability across the entire banking journey. Skillset Analytics professional preferably with experience in Fraud analytics. Minimum 2 years of experience in relevant domain - Data Analysis and building models/strategies. Strong knowledge and working experience in SQL and Python is a must. Experience analyzing data with statistical approaches with python (e.g. in Jupyter notebook): for example, clustering analysis, decision trees, linear regression, logistic regression, correlation analysis Knowledge of Tableau and BI tools Hands-on use of AWS (e.g. S3, EC2, EMR, Athena, SageMaker and more) is a plus Strong communication and interpersonal skills Strong knowledge of financial products, including debit cards, credit cards, lending products, and deposit accounts is a plus. Experience working at a FinTech or start-up is a plus. Notice period : Max 60 days. immediate joiners preferred. Education Bachelors or Masters in Quantitative field such as Economics, Statistics, Mathematics BTech/MTech/MBA from Tier 1 colleges (IIT, NIT, IIM)
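For the segmentation and micro-segmentation work described above, a small sketch (features are synthetic stand-ins for transaction and behavioural aggregates) of clustering customers with k-means and checking cluster quality with the silhouette score:

```python
# Minimal sketch: customer micro-segmentation with k-means plus a silhouette
# check for cluster quality. Features are synthetic stand-ins for real aggregates.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
features = np.column_stack([
    rng.gamma(2.0, 50.0, 3000),   # average transaction amount
    rng.poisson(12, 3000),        # monthly transaction count
    rng.uniform(0, 1, 3000),      # share of digital channel usage
])

X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(X)
print("silhouette:", round(silhouette_score(X, labels), 3))
```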
Posted 1 week ago
7.0 - 12.0 years
22 - 25 Lacs
India
On-site
TECHNICAL ARCHITECT

Key Responsibilities
1. Designing technology systems: Plan and design the structure of technology solutions, and work with design and development teams to assist with the process.
2. Communicating: Communicate system requirements to software development teams, and explain plans to developers and designers. Also communicate the value of a solution to stakeholders and clients.
3. Managing stakeholders: Work with clients and stakeholders to understand their vision for the systems, and manage stakeholder expectations.
4. Architectural oversight: Develop and implement robust architectures for AI/ML and data science solutions, ensuring scalability, security, and performance. Oversee architecture for data-driven web applications and data science projects, providing guidance on best practices in data processing, model deployment, and end-to-end workflows.
5. Problem solving: Identify and troubleshoot technical problems in existing or new systems, and assist with solving technical problems when they arise.
6. Ensuring quality: Ensure systems meet security and quality standards, and monitor systems to confirm they meet both user needs and business goals.
7. Project management: Break down project requirements into manageable pieces of work, and organise the workloads of technical teams.
8. Tool & framework expertise: Utilise relevant tools and technologies, including but not limited to LLMs, TensorFlow, PyTorch, Apache Spark, cloud platforms (AWS, Azure, GCP), web app development frameworks, and DevOps practices.
9. Continuous improvement: Stay current on emerging technologies and methods in AI, ML, data science, and web applications, bringing insights back to the team to foster continuous improvement.

Technical Skills
1. Proficiency in AI/ML frameworks such as TensorFlow, PyTorch, Keras, and scikit-learn for developing machine learning and deep learning models.
2. Knowledge of or experience working with self-hosted or managed LLMs.
3. Knowledge of or experience with NLP tools and libraries (e.g., SpaCy, NLTK, Hugging Face Transformers), and familiarity with computer vision frameworks such as OpenCV and related libraries for image processing and object recognition.
4. Experience or knowledge in back-end frameworks (e.g., Django, Spring Boot, Node.js, Express) and building RESTful and GraphQL APIs.
5. Familiarity with microservices, serverless, and event-driven architectures. Strong understanding of design patterns (e.g., Factory, Singleton, Observer) to ensure code scalability and reusability.
6. Proficiency in modern front-end frameworks such as React, Angular, or Vue.js, with an understanding of responsive design, UX/UI principles, and state management (e.g., Redux).
7. In-depth knowledge of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra), as well as caching solutions (e.g., Redis, Memcached).
8. Expertise in tools such as Apache Spark, Hadoop, Pandas, and Dask for large-scale data processing.
9. Understanding of data warehouses and ETL tools (e.g., Snowflake, BigQuery, Redshift, Airflow) to manage large datasets.
10. Familiarity with visualisation tools (e.g., Tableau, Power BI, Plotly) for building dashboards and conveying insights.
11. Knowledge of deploying models with TensorFlow Serving, Flask, FastAPI, or cloud-native services (e.g., AWS SageMaker, Google AI Platform).
12. Familiarity with MLOps tools and practices for versioning, monitoring, and scaling models (e.g., MLflow, Kubeflow, TFX).
13. Knowledge of or experience in CI/CD, IaC, and cloud-native toolchains.
14. Understanding of security principles, including firewalls, VPC, IAM, and TLS/SSL for secure communication.
15. Knowledge of API Gateway, service mesh (e.g., Istio), and NGINX for API security, rate limiting, and traffic management.

Experience Required: Technical Architect with 7-12 years of experience
Salary: 13-17 LPA
Job Types: Full-time, Permanent
Pay: ₹2,200,000.00 - ₹2,500,000.00 per year
Experience: total work: 1 year (Preferred)
Work Location: In person
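As a hedged illustration of the model-deployment stack named in technical skill 11, the sketch below serves a pre-trained scikit-learn model behind a FastAPI endpoint; the file model.pkl, the endpoint path, and the flat feature layout are assumptions for illustration, not a prescribed implementation.

    # Minimal sketch: exposing a pickled model as a REST prediction endpoint with FastAPI.
    import pickle
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    with open("model.pkl", "rb") as f:   # hypothetical pre-trained model artifact
        model = pickle.load(f)

    class Features(BaseModel):
        values: list[float]              # flat feature vector, order fixed by the training pipeline

    @app.post("/predict")
    def predict(features: Features):
        prediction = model.predict([features.values])   # model expects a 2-D array
        return {"prediction": prediction.tolist()}

    # Local run (assuming uvicorn is installed): uvicorn serve:app --port 8000

In a production setting this service would typically be containerized with Docker and fronted by an API gateway, in line with skills 13-15 above.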
Posted 1 week ago
0 years
0 Lacs
Hyderābād
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Assistant Vice President – Generative AI – Systems Architect

Role Overview:
We are looking for an experienced Systems Architect with extensive experience in designing and scaling Generative AI systems to production. This role requires deep expertise in system architecture, software engineering, data platforms, and AI infrastructure, and the ability to bridge the gap between data science, engineering, and business. You will be responsible for the end-to-end architecture of GenAI systems, including model lifecycle management, inference, orchestration, and pipelines.

Key Responsibilities:
Architect and design end-to-end systems for production-grade Generative AI applications (e.g., LLM-based chatbots, copilots, content generation tools).
Define and oversee system architecture covering data ingestion, model training/fine-tuning, inference, and deployment pipelines.
Establish architectural tenets such as modularity, scalability, reliability, observability, and maintainability.
Collaborate with data scientists, ML engineers, platform engineers, and product managers to align architecture with business and AI goals.
Choose and integrate foundation models (open source or proprietary) using APIs, model hubs, or fine-tuned versions.
Evaluate and design solutions based on architecture patterns such as Retrieval-Augmented Generation (RAG), agentic AI, multi-modal AI, and federated learning.
Design secure and compliant architecture for enterprise settings, including data governance, auditability, and access control.
Lead system design reviews and define non-functional requirements (NFRs), including latency, availability, throughput, and cost.
Work closely with MLOps teams to define the CI/CD processes for model and system updates.
Contribute to the creation of reference architectures, design templates, and reusable components.
Stay abreast of the latest advancements in GenAI, system design patterns, and AI platform tooling.

Qualifications we seek in you!
Minimum Qualifications
Proven experience designing and implementing distributed systems, cloud-native architectures, and microservices.
Deep understanding of Generative AI architectures, including LLMs, diffusion models, prompt engineering, and model fine-tuning.
Strong experience with at least one cloud platform (AWS, GCP, or Azure) and services such as SageMaker, Vertex AI, or Azure ML.
Experience with agentic AI systems or orchestrating multiple LLM agents.
Experience with multimodal systems (e.g., combining image, text, video, and speech models).
Knowledge of semantic search, vector databases, and retrieval techniques in RAG.
Familiarity with Zero Trust architecture and advanced enterprise security practices.
Experience in building developer platforms/toolkits for AI consumption.
Contributions to open-source AI system frameworks or thought leadership in GenAI architecture.
Hands-on experience with tools and frameworks such as LangChain, Hugging Face, Ray, Kubeflow, MLflow, or Weaviate/FAISS.
Knowledge of data pipelines, ETL/ELT, and data lakes/warehouses (e.g., Snowflake, BigQuery, Delta Lake).
Solid grasp of DevOps and MLOps principles, including containerization (Docker), orchestration (Kubernetes), CI/CD pipelines, and model monitoring.
Familiarity with system design tradeoffs in latency vs. cost vs. scale for GenAI workloads.

Preferred Qualifications:
Bachelor's or Master's degree in computer science, engineering, or a related field.
Experience in software/system architecture, with experience in GenAI/AI/ML.
Proven experience designing and implementing distributed systems, cloud-native architectures, and microservices.
Strong interpersonal and communication skills; ability to collaborate with and present to technical and executive stakeholders.
Certifications in cloud platforms (e.g., AWS Certified Solutions Architect, Microsoft Certified: Azure Solutions Architect Expert, Google Cloud Professional Data Engineer).
Familiarity with data governance and security best practices.

Why join Genpact?
Be a transformation leader – work at the cutting edge of AI, automation, and digital innovation.
Make an impact – drive change for global enterprises and solve business challenges that matter.
Accelerate your career – get hands-on experience, mentorship, and continuous learning opportunities.
Work with the best – join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
Thrive in a values-driven culture – our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Assistant Vice President
Primary Location: India-Hyderabad
Schedule: Full-time
Education Level: Master's / Equivalent
Job Posting: Jul 22, 2025, 12:35:15 AM
Unposting Date: Ongoing
Master Skills List: Digital
Job Category: Full Time
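As a hedged sketch of the RAG retrieval layer referenced in the qualifications, the Python snippet below builds a small FAISS index over sentence embeddings and retrieves the nearest passages for a query; the embedding model name and the example passages are assumptions, and the final LLM generation step is omitted.

    # Minimal sketch: dense retrieval for a RAG pipeline using FAISS + sentence-transformers.
    import faiss
    from sentence_transformers import SentenceTransformer

    passages = [
        "Refunds are processed within 5 business days.",
        "Chatbot sessions expire after 30 minutes of inactivity.",
        "Fine-tuned models are redeployed through the CI/CD pipeline.",
    ]  # hypothetical knowledge-base snippets

    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
    embeddings = encoder.encode(passages, convert_to_numpy=True).astype("float32")

    index = faiss.IndexFlatL2(embeddings.shape[1])     # exact L2 search over the embeddings
    index.add(embeddings)

    query = encoder.encode(["How long do refunds take?"], convert_to_numpy=True).astype("float32")
    _, ids = index.search(query, 2)                    # top-2 nearest passages
    context = [passages[i] for i in ids[0]]
    print(context)                                     # these passages would be placed in the LLM prompt

In a production architecture the in-memory index would typically be replaced by a managed vector database such as Weaviate, with the retrieved context passed to the chosen foundation model through an orchestration layer such as LangChain.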
Posted 1 week ago
7.0 years
24 Lacs
Bharūch
On-site
Role: Sr Data Scientist – Digital & Analytics
Experience: 7+ years | Industry: exposure to manufacturing, energy, supply chain, or similar
Location: On-site @ Bharuch, Gujarat (6 days/week, Mon-Sat working)
Perks: Work with the client directly and monthly remuneration for lodging

Mandatory Skills: Experience in full-scale implementation from requirement gathering through project delivery (end to end); EDA; ML techniques (supervised and unsupervised); Python (Pandas, Scikit-learn, Pyomo, XGBoost, etc.); cloud ML tooling (Azure ML, AWS SageMaker, etc.); plant control systems (DCS, SCADA, OPC UA); historian databases (PI, Aspen IP.21) and time-series data; optimization models (LP, MILP, MINLP).

We are seeking a highly capable and hands-on Sr Data Scientist to drive data science solution development for a chemicals manufacturing environment. This role is ideal for someone with a strong product mindset and a proven ability to work independently while mentoring a small team. You will play a pivotal role in developing advanced analytics and AI/ML solutions for operations, production, quality, energy optimization, and asset performance, delivering tangible business impact.

Responsibilities:
1. Data Science Solution Development
• Design and develop predictive and prescriptive models for manufacturing challenges such as process optimization, yield prediction, quality forecasting, downtime prevention, and energy usage minimization.
• Perform robust exploratory data analysis (EDA) and apply advanced statistical and machine learning techniques (supervised and unsupervised).
• Translate physical and chemical process knowledge into mathematical features or constraints in models.
• Deploy models into production environments (on-prem or cloud) with high robustness and monitoring.
2. Team Leadership & Management
• Lead a compact data science pod (2-3 members), assigning responsibilities, reviewing work, and mentoring junior data scientists or interns.
• Own the entire data science lifecycle: problem framing, model development and validation, deployment, monitoring, and retraining protocols.
3. Stakeholder Engagement & Collaboration
• Work directly with process engineers, plant operators, DCS system owners, and business heads to identify pain points and convert them into use cases.
• Collaborate with data engineers and IT to ensure data pipelines and model interfaces are robust, secure, and scalable.
• Act as a translator between manufacturing business units and technical teams to ensure alignment and impact.
4. Solution Ownership & Documentation
• Independently manage and maintain use cases through versioned model management, robust documentation, and logging.
• Define and monitor model KPIs (e.g., drift, accuracy, business impact) post-deployment and lead remediation efforts.

Required Skills:
1. 7+ years of experience in data science roles, with a strong portfolio of deployed use cases in manufacturing, energy, or process industries.
2. Proven track record of end-to-end model delivery (from data preparation to business value realization).
3. Master's or PhD in Data Science, Computer Science Engineering, Applied Mathematics, Chemical Engineering, Mechanical Engineering, or a related quantitative discipline.
4. Expertise in Python (Pandas, Scikit-learn, Pyomo, XGBoost, etc.) and experience with cloud ML tooling (Azure ML, AWS SageMaker, etc.).
5. Familiarity with plant control systems (DCS, SCADA, OPC UA), historian databases (PI, Aspen IP.21), and time-series data.
6. Experience in developing optimization models (LP, MILP, MINLP) for process or resource allocation problems is a strong plus.

Job Types: Full-time, Contractual / Temporary
Contract length: 6-12 months
Pay: Up to ₹200,000.00 per month
Work Location: In person
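Since the role calls for optimization models (LP/MILP) in Pyomo, here is a minimal, hedged sketch of a toy two-product blending LP; the product names, margins, and capacity figures are invented for illustration only and do not describe any real plant.

    # Minimal sketch: a toy two-product LP in Pyomo (illustrative numbers only).
    from pyomo.environ import (
        ConcreteModel, Var, Objective, Constraint, NonNegativeReals, maximize, SolverFactory
    )

    m = ConcreteModel()
    m.x = Var(domain=NonNegativeReals)   # tonnes of product A
    m.y = Var(domain=NonNegativeReals)   # tonnes of product B

    # Maximize contribution margin (hypothetical margins per tonne).
    m.profit = Objective(expr=40 * m.x + 30 * m.y, sense=maximize)

    # Hypothetical reactor-hour and steam constraints.
    m.reactor = Constraint(expr=2 * m.x + 1 * m.y <= 100)
    m.steam = Constraint(expr=1 * m.x + 3 * m.y <= 90)

    # Assumes a solver such as GLPK or CBC is installed locally.
    SolverFactory("glpk").solve(m)
    print(f"A = {m.x():.1f} t, B = {m.y():.1f} t, profit = {m.profit():.0f}")

Real plant models replace these toy constraints with mass and energy balances derived from process knowledge, and switch to MILP/MINLP formulations when discrete or nonlinear decisions appear.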
Posted 1 week ago
3.0 - 6.0 years
4 Lacs
India
On-site
About MostEdge
MostEdge empowers retailers with smart, trusted, and sustainable solutions to run their stores more efficiently. Through our Inventory Management Service, powered by the StockUPC app, we provide accurate, real-time insights that help stores track inventory, prevent shrink, and make smarter buying decisions. Our mission is to deliver trusted, profitable experiences, empowering retailers, partners, and employees to accelerate commerce in a sustainable manner.

Job Summary:
We are seeking a highly skilled and motivated AI/ML Engineer specializing in computer vision and unsupervised learning to join our growing team. You will be responsible for building, optimizing, and deploying advanced video analytics solutions for smart surveillance applications, including real-time detection, facial recognition, and activity analysis. This role combines the core competencies of AI/ML modelling with the practical skills required to deploy and scale models in real-world production environments, both in the cloud and on edge devices.

Key Responsibilities:
AI/ML Development & Computer Vision
Design, train, and evaluate models for face detection and recognition, object/person detection and tracking, intrusion and anomaly detection, and human activity or pose recognition/estimation.
Work with models such as YOLOv8, DeepSORT, RetinaNet, Faster R-CNN, and InsightFace.
Perform data preprocessing, augmentation, and annotation using tools like LabelImg, CVAT, or custom pipelines.

Surveillance System Integration
Integrate computer vision models with live CCTV/RTSP streams for real-time analytics.
Develop components for motion detection, zone-based event alerts, person re-identification, and multi-camera coordination.
Optimize solutions for low-latency inference on edge devices (Jetson Nano, Xavier, Intel Movidius, Coral TPU).

Model Optimization & Deployment
Convert and optimize trained models using ONNX, TensorRT, or OpenVINO for real-time inference.
Build and deploy APIs using FastAPI, Flask, or TorchServe.
Package applications using Docker and orchestrate deployments with Kubernetes.
Automate model deployment workflows using CI/CD pipelines (GitHub Actions, Jenkins).
Monitor model performance in production using Prometheus, Grafana, and log management tools.
Manage model versioning, rollback strategies, and experiment tracking using MLflow or DVC.
You should also be well-versed in AI agent development and have fine-tuning experience.

Collaboration & Documentation
Work closely with backend developers, hardware engineers, and DevOps teams.
Maintain clear documentation of ML pipelines, training results, and deployment practices.
Stay current with emerging research and innovations in AI vision and MLOps.

Required Qualifications:
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
3-6 years of experience in AI/ML, with a strong portfolio in computer vision and machine learning.
Hands-on experience with deep learning frameworks (PyTorch, TensorFlow), image/video processing (OpenCV, NumPy), and detection and tracking frameworks (YOLOv8, DeepSORT, RetinaNet).
Solid understanding of deep learning architectures (CNNs, Transformers, Siamese networks).
Proven experience with real-time model deployment in cloud or edge environments.
Strong Python programming skills and familiarity with Git, REST APIs, and DevOps tools.

Preferred Qualifications:
Experience with multi-camera synchronization and NVR/DVR systems.
Familiarity with ONVIF protocols and camera SDKs.
Experience deploying AI models on Jetson Nano/Xavier, Intel NCS2, or Coral Edge TPU.
Background in face recognition systems (e.g., InsightFace, FaceNet, Dlib).
Understanding of security protocols and compliance in surveillance systems.

Tools & Technologies:
Languages & AI: Python, PyTorch, TensorFlow, OpenCV, NumPy, Scikit-learn
Model Serving: FastAPI, Flask, TorchServe, TensorFlow Serving, REST/gRPC APIs
Model Optimization: ONNX, TensorRT, OpenVINO, pruning, quantization
Deployment: Docker, Kubernetes, Gunicorn, MLflow, DVC
CI/CD & DevOps: GitHub Actions, Jenkins, GitLab CI
Cloud & Edge: AWS SageMaker, Azure ML, GCP AI Platform, Jetson, Movidius, Coral TPU
Monitoring: Prometheus, Grafana, ELK Stack, Sentry
Annotation Tools: LabelImg, CVAT, Supervisely

Benefits:
Competitive compensation and performance-linked incentives.
Work on cutting-edge surveillance and AI projects.
Friendly and innovative work culture.

Job Types: Full-time, Permanent
Pay: From ₹400,000.00 per year
Benefits: Health insurance, Life insurance, Paid sick time, Paid time off, Provident Fund
Schedule: Evening shift, Monday to Friday, Morning shift, Night shift, Rotational shift, US shift, Weekend availability
Supplemental Pay: Performance bonus, Quarterly bonus
Work Location: In person
Application Deadline: 25/07/2025
Expected Start Date: 01/08/2025
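As a hedged illustration of the real-time detection work described above, the sketch below runs a pre-trained YOLOv8 model over an RTSP stream with the ultralytics package; the stream URL is a placeholder, and the tracking and recognition layers (e.g., DeepSORT, InsightFace) are omitted.

    # Minimal sketch: person detection on a live RTSP feed with a pre-trained YOLOv8 model.
    import cv2
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")                          # small pre-trained COCO model
    cap = cv2.VideoCapture("rtsp://camera-ip/stream")   # placeholder stream URL

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # classes=[0] keeps only the COCO "person" class; conf filters weak boxes.
        results = model(frame, classes=[0], conf=0.4, verbose=False)
        annotated = results[0].plot()                   # draw boxes on the frame
        cv2.imshow("detections", annotated)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()

For edge deployment, the same model would typically be exported to ONNX or TensorRT and the display loop replaced by zone-based event logic and an alerting API.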
Posted 1 week ago
4.0 - 7.0 years
0 Lacs
Andhra Pradesh
On-site
Expertise in AWS services such as EC2, CloudFormation, S3, IAM, ECS/EKS, EMR, QuickSight, SageMaker, Athena, Glue, etc.
Expertise in Hadoop platform administration, with good debugging skills to resolve Hive- and Spark-related issues.
Experience in designing, developing, configuring, testing, and deploying cloud automation, preferably in AWS.
Experience in infrastructure provisioning using CloudFormation, Terraform, Ansible, etc.
Experience in Python and Spark.
Working knowledge of CI/CD tools and containers.

Key Responsibilities
Interpret and analyze business requirements and convert them into high- and low-level designs.
Design, develop, configure, test, and deploy cloud automation for the Finance business unit using tools such as CloudFormation, Terraform, and Ansible, following the capability domain's engineering standards in an Agile environment.
Take end-to-end ownership of developing, configuring, unit testing, and deploying code with quality and minimal supervision.
Work closely with customers, business analysts, and technology and project teams to understand business requirements; drive the analysis and design of quality technical solutions that are aligned with business and technology strategies and comply with the organization's architectural standards.
Understand and follow change management procedures to implement project deliverables.
Coordinate with support groups such as Enterprise Cloud Engineering, DevSecOps, and Monitoring to get issues resolved with a quick turnaround time.
Work with the data science user community to address issues in the ML (machine learning) development life cycle.

Required Qualifications
Bachelor's or Master's degree in Computer Science or a similar field.
4 to 7 years of experience in automation on a major cloud (AWS, Azure, or GCP).
Experience in infrastructure provisioning using Ansible, AWS CloudFormation or Terraform, and Python or PowerShell.
Working knowledge of AWS services such as EC2, CloudFormation, IAM, S3, EMR, ECS/EKS, etc.
Working knowledge of CI/CD tools and containers.
Experience in Hadoop administration and resolving Hive/Spark-related issues.
Proven understanding of common development tools, patterns, and practices for the cloud.
Experience writing automated unit tests in a major programming language.
Proven ability to write quality code by following best practices and guidelines.
Strong problem-solving, multi-tasking, and organizational skills.
Good written and verbal communication skills.
Demonstrable experience working on a geographically dispersed team.

Preferred Qualifications
Experience managing a Hadoop platform and debugging Hive/Spark-related issues.
Cloud certification (AWS, Azure, or GCP).
Knowledge of UNIX/Linux shell scripting.

About Our Company
Ameriprise India LLP has been providing client-based financial solutions to help clients plan and achieve their financial objectives for 125 years. We are a U.S.-based financial planning company headquartered in Minneapolis with a global presence. The firm's focus areas include asset management and advice, retirement planning, and insurance protection. Be part of an inclusive, collaborative culture that rewards you for your contributions, and work with other talented individuals who share your passion for doing great work. You'll also have plenty of opportunities to make your mark at the office and a difference in your community. So if you're talented, driven, and want to work for a strong ethical company that cares, take the next step and create a career at Ameriprise India LLP.

Ameriprise India LLP is an equal opportunity employer. We consider all qualified applicants without regard to race, color, religion, sex, genetic information, age, sexual orientation, gender identity, disability, veteran status, marital status, family status, or any other basis prohibited by law.

Full-Time/Part-Time: Full time
Timings: (2:00p-10:30p) India
Business Unit: AWMPO AWMP&S President's Office
Job Family Group: Technology
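To give a hedged flavour of the cloud-automation work described above, the sketch below creates a CloudFormation stack from Python with boto3; the stack name, template body, bucket name, and region are placeholders, and a real pipeline would wrap this in the team's CI/CD and change-management process.

    # Minimal sketch: launching a CloudFormation stack (here, a single S3 bucket) via boto3.
    import boto3

    TEMPLATE = """
    AWSTemplateFormatVersion: '2010-09-09'
    Resources:
      DataBucket:
        Type: AWS::S3::Bucket
        Properties:
          BucketName: example-finance-data-bucket   # placeholder name
    """

    cfn = boto3.client("cloudformation", region_name="us-east-1")   # assumed region
    cfn.create_stack(
        StackName="finance-data-bucket-stack",                      # placeholder stack name
        TemplateBody=TEMPLATE,
        OnFailure="ROLLBACK",
    )

    # Block until the stack reaches CREATE_COMPLETE (or the call fails).
    waiter = cfn.get_waiter("stack_create_complete")
    waiter.wait(StackName="finance-data-bucket-stack")
    print("Stack created")

Terraform or Ansible would achieve the same provisioning declaratively; the boto3 route is shown only because Python is the scripting language named in the posting.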
Posted 1 week ago