Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
0 years
0 Lacs
India
Remote
Design, provision, and document a production-grade AWS microservice platform for an Apache-powered ERP implementation, hitting our 90-day "go-live" target while embedding DevSecOps guard-rails the team can run without you.

Key Responsibilities
Cloud Architecture & IaC
- Author Terraform modules for VPC, EKS (Graviton), RDS (MariaDB Multi-AZ), MSK, ElastiCache, S3 lifecycle, API Gateway, WAF, and Route 53.
- Implement node pools (App, Spot Analytics, Cache, GPU) with Karpenter autoscaling.
CI/CD & GitOps
- Set up GitHub Actions pipelines (lint, unit tests, container scan, Terraform plan).
- Deploy Argo CD for Helm-based application roll-outs (ERP, Bot, Superset, etc.).
DevSecOps Controls
- Enforce OPA Gatekeeper policies, IAM IRSA, Secrets Manager, AWS WAF rules, and ECR image scanning.
- Build CloudWatch/X-Ray dashboards; wire alerting to Slack/email.
Automation & DR
- Define backup plans (RDS PITR, EBS, S3 Standard-IA → Glacier).
- Document the cross-Region fail-over run-book (Route 53 health checks).
Standard Operating Procedures
- Draft SOPs for patching, scaling, on-call, incident triage, and budget monitoring.
Knowledge Transfer (KT)
- Run 3× 2-hour remote workshops (infra deep-dive, CI/CD hand-over, DR drill).
- Produce a "Day-2" wiki: diagrams (Mermaid), run-books, FAQ.

Required Skill Set
- 8+ years designing AWS microservice/Kubernetes architectures (ideally EKS on Graviton).
- Expert in Terraform, Helm, GitHub Actions, and Argo CD.
- Hands-on with RDS MariaDB, Kafka (MSK), Redis, and SageMaker endpoints.
- Proven DevSecOps background: OPA, IAM least privilege, vulnerability scanning.
- Comfortable translating infra diagrams into plain-language SOPs for non-cloud staff.
- Nice-to-have: prior ERP deployment experience; WhatsApp Business API integration; EPC or construction IT domain knowledge.

How Success Is Measured
- Go-live readiness: the production cluster passes load, fail-over, and security tests by Day 75 (see the readiness-check sketch below).
- Zero critical CVEs exposed in the final Trivy scan.
- 99% IaC coverage: manual console changes are not permitted.
- Team self-sufficiency: internal staff can recreate the stack from scratch using docs and KT alone.
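As a hedged illustration of the go-live readiness checks mentioned above, here is a minimal Python sketch using boto3; AWS credentials and region are assumed to be configured in the environment, and the specific checks are illustrative rather than part of the role definition:

```python
import boto3

# Minimal go-live readiness spot-check: verify every RDS instance is
# Multi-AZ and has point-in-time recovery (non-zero backup retention).
rds = boto3.client("rds")

def audit_rds_readiness() -> list[str]:
    findings = []
    paginator = rds.get_paginator("describe_db_instances")
    for page in paginator.paginate():
        for db in page["DBInstances"]:
            name = db["DBInstanceIdentifier"]
            if not db.get("MultiAZ"):
                findings.append(f"{name}: not Multi-AZ")
            if db.get("BackupRetentionPeriod", 0) == 0:
                findings.append(f"{name}: PITR disabled (retention = 0)")
    return findings

if __name__ == "__main__":
    for finding in audit_rds_readiness():
        print(finding)
```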
Posted 1 month ago
0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
You will lead the development of predictive machine learning models for Revenue Cycle Management (RCM) analytics, along the lines of:
1. Payer Propensity Modeling - predicting payer behavior and reimbursement likelihood
2. Claim Denials Prediction - identifying high-risk claims before submission (see the sketch below)
3. Payment Amount Prediction - forecasting expected reimbursement amounts
4. Cash Flow Forecasting - predicting revenue timing and patterns
5. Patient-Related Models - enhancing the patient financial experience and outcomes
6. Claim Processing Time Prediction - optimizing workflow and resource allocation

Additionally, we will work on emerging areas and integration opportunities, for example denial prediction plus appeal success probability, or prior authorization prediction plus approval likelihood models. You will reimagine how providers, patients, and payers interact within the healthcare ecosystem through intelligent automation and predictive insights, ensuring that providers can focus on delivering the highest-quality patient care.

VHT Technical Environment
1. Cloud Platform: AWS (SageMaker, S3, Redshift, EC2)
2. Development Tools: Jupyter Notebooks, Git, Docker
3. Programming: Python, SQL, R (optional)
4. ML/AI Stack: Scikit-learn, TensorFlow/PyTorch, MLflow, Airflow
5. Data Processing: Spark, Pandas, NumPy
6. Visualization: Matplotlib, Seaborn, Plotly, Tableau
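To make the claim-denials use case concrete, here is a minimal scikit-learn sketch; the feature set and synthetic data are hypothetical stand-ins for real claim attributes, not the team's actual model:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical claim features: payer id, CPT-code bucket, billed amount,
# days from service to submission, prior denial rate for the provider.
rng = np.random.default_rng(0)
X = rng.random((5000, 5))
y = (X[:, 4] + 0.3 * X[:, 2] + rng.normal(0, 0.2, 5000) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Rank claims by denial risk so high-risk claims can be reviewed pre-submission.
risk = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, risk), 3))
```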
Posted 1 month ago
3.0 years
0 Lacs
Mohali, Punjab
On-site
Company: Chicmic Studios
Job Role: Python Machine Learning & AI Developer
Experience Required: 3+ years

We are looking for a highly skilled and experienced Python developer to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential.

Key Responsibilities
- Develop and maintain web applications using the Django and Flask frameworks.
- Design and implement RESTful APIs using Django Rest Framework (DRF).
- Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation.
- Build and integrate APIs for AI/ML models into existing systems (see the serving sketch below).
- Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
- Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases.
- Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization.
- Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.
- Ensure the scalability, performance, and reliability of applications and deployed models.
- Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions.
- Write clean, maintainable, and efficient code following best practices.
- Conduct code reviews and provide constructive feedback to peers.
- Stay up to date with the latest industry trends and technologies, particularly in AI/ML.

Required Skills and Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3+ years of professional experience as a Python developer.
- Proficient in Python with a strong understanding of its ecosystem.
- Extensive experience with the Django and Flask frameworks.
- Hands-on experience with AWS services for application deployment and management.
- Strong knowledge of Django Rest Framework (DRF) for building APIs.
- Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
- Experience with transformer architectures for NLP and advanced AI solutions.
- Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
- Familiarity with MLOps practices for managing the machine learning lifecycle.
- Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
- Excellent problem-solving skills and the ability to work independently and as part of a team.
- Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.

Contact: 9875952836
Office Location: F273, Phase 8B Industrial Area, Mohali, Punjab
Job Type: Full-time
Schedule: Day shift, Monday to Friday
Work Location: In person
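A minimal sketch of one way to expose a PyTorch model behind a Flask API, in the spirit of the responsibilities above; the model, route, and payload shape are illustrative assumptions, not this team's actual service:

```python
import torch
import torch.nn as nn
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for a trained model loaded from disk, e.g. torch.load("model.pt").
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"features": [0.1, 0.2, 0.3, 0.4]}
    features = request.get_json(force=True)["features"]
    with torch.no_grad():
        logits = model(torch.tensor([features], dtype=torch.float32))
        probs = torch.softmax(logits, dim=1).squeeze().tolist()
    return jsonify({"probabilities": probs})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```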
Posted 1 month ago
3.0 years
0 Lacs
Bengaluru, Karnataka
On-site
- Bachelor's degree in a quantitative field (e.g., Business, Economics, Statistics, Mathematics, Computer Science) or equivalent practical experience
- Proficiency in Microsoft Excel and SQL
- Experience with data analysis and creating visualizations
- Knowledge of AI/ML tools and platforms (e.g., Amazon SageMaker, TensorFlow, Python libraries for data analysis)
- Understanding of basic machine learning concepts and their business applications
- Strong problem-solving and analytical skills
- Excellent written and verbal communication skills
- Ability to work effectively in a team environment

At Amazon, we're working to be the most customer-centric company on Earth. To get there, we need talented, bright, and data-driven people. If you'd like to help us build the place to find and buy anything online, this is your chance to make history. Within Amazon's Workplace Health & Safety team, the Employee Safety Experience (ESE) team is seeking an analytical and detail-oriented candidate. This is an exciting opportunity to join a team in a huge growth area for Amazon. The vision of this team is to build an Amazon safety experience that is responsive to our employees' needs and actionable by our leaders. One of the verticals of ESE is the Business and Program Analysis team, which focuses on providing technical guidance and thought leadership on all ESE programs from ideation through execution. The ideal candidate will have a strong analytical mindset, attention to detail, and the ability to communicate complex information clearly and concisely. You should be comfortable working with data, have basic SQL skills, and be eager to learn and grow in a fast-paced environment.

Key job responsibilities
- Provide analytics solutions to solve business problems
- Extract available data and apply basic transformations using spreadsheets, SQL, and other relevant tools
- Use descriptive analysis techniques to understand impact and identify drivers (see the sketch below)
- Generate findings that inform team decisions by providing visibility into business performance, highlighting anomalies, and revealing new opportunities
- Document your analytical approach, including metric definitions, applied logic, assumptions, and data sources
- Work with technology teams to contribute to the development and improvement of portals, dashboards, and online tools, including logic validation
- Share findings with your team through written and verbal communication channels
- Learn and apply analytical best practices to improve team and process efficiency
- Help train new peers and contribute to the development of your team

Preferred qualifications
- 3+ years of working experience in a data analysis or business intelligence role
- Experience with business intelligence tools (e.g., Tableau, QuickSight)
- Exposure to generative AI tools and their applications in business analytics
- Strong attention to detail and ability to manage multiple priorities
- Demonstrated ability to learn quickly and adapt to new technologies and processes

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
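As a small illustration of descriptive analysis with anomaly highlighting, here is a hedged pandas sketch; the dataset, site names, and two-standard-deviation threshold are invented for demonstration:

```python
import numpy as np
import pandas as pd

# Hypothetical safety-incident extract: site, week, incident counts.
df = pd.DataFrame({
    "site": np.repeat(["BLR1", "HYD2", "DEL3"], 12),
    "week": list(range(1, 13)) * 3,
    "incidents": np.random.default_rng(1).poisson(4, 36),
})

# Descriptive summary per site, then flag weeks more than two standard
# deviations above that site's mean as anomalies worth surfacing.
summary = df.groupby("site")["incidents"].agg(["mean", "std"])
df = df.join(summary, on="site")
df["anomaly"] = df["incidents"] > df["mean"] + 2 * df["std"]
print(df[df["anomaly"]])
```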
Posted 1 month ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Organization Snapshot:
Birdeye is the leading all-in-one Experience Marketing platform, trusted by more than 100,000 businesses worldwide to power customer acquisition, engagement, and retention through AI-driven automation and reputation intelligence. From local businesses to global enterprises, Birdeye enables brands to deliver exceptional customer experiences across every digital touchpoint. As we enter our next phase of global scale and product-led growth, AI is no longer an add-on; it is at the very heart of our innovation strategy. Our future is being built on Large Language Models (LLMs), Generative AI, Conversational AI, and intelligent automation that can personalize and enhance every customer interaction in real time.

Job Overview:
Birdeye is seeking a Senior Data Scientist - NLP & Generative AI to help reimagine how businesses interact with customers at scale through production-grade, LLM-powered AI systems. If you're passionate about building autonomous, intelligent, and conversational systems, this role offers the perfect platform to shape the next generation of agentic AI technologies. As part of our core AI/ML team, you'll design, deploy, and optimize end-to-end intelligent systems spanning LLM fine-tuning, Conversational AI, Natural Language Understanding (NLU), Retrieval-Augmented Generation (RAG), and autonomous agent frameworks. This is a high-impact IC role ideal for technologists who thrive at the intersection of deep NLP research and scalable engineering.

Key Responsibilities:
LLM, GenAI & Agentic AI Systems
- Architect and deploy LLM-based frameworks using GPT, LLaMA, Claude, Mistral, and open-source models.
- Implement fine-tuning, LoRA, PEFT, instruction tuning, and prompt tuning strategies for production-grade performance.
- Build autonomous AI agents with tool use, short/long-term memory, planning, and multi-agent orchestration (using LangChain Agents, Semantic Kernel, Haystack, or custom frameworks).
- Design RAG pipelines with vector databases (Pinecone, FAISS, Weaviate) for domain-specific contextualization (see the sketch below).
Conversational AI & NLP Engineering
- Build Transformer-based Conversational AI systems for dynamic, goal-oriented dialog, leveraging orchestration tools like LangChain, Rasa, and LLMFlow.
- Implement NLP solutions for semantic search, NER, summarization, intent detection, text classification, and knowledge extraction.
- Integrate modern NLP toolkits: SpaCy, BERT/RoBERTa, GloVe, Word2Vec, NLTK, and HuggingFace Transformers.
- Handle multilingual NLP, contextual embeddings, and dialogue state tracking for real-time systems.
Scalable AI/ML Engineering
- Build and serve models using Python, FastAPI, gRPC, and REST APIs.
- Containerize applications with Docker, deploy using Kubernetes, and orchestrate with CI/CD workflows.
- Ensure production-grade reliability, latency optimization, observability, and failover mechanisms.
Cloud & MLOps Infrastructure
- Deploy on AWS SageMaker, Azure ML Studio, or Google Vertex AI, integrating with serverless and auto-scaling services.
- Own end-to-end MLOps pipelines: model training, versioning, monitoring, and retraining using MLflow, Kubeflow, or TFX.
Cross-Functional Collaboration
- Partner with Product, Engineering, and Design teams to define AI-first experiences.
- Translate ambiguous business problems into structured ML/AI projects with measurable ROI.
- Contribute to roadmap planning, POCs, technical whitepapers, and architectural reviews.
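One possible shape of the RAG retrieval step mentioned above, sketched with sentence-transformers and FAISS; the toy corpus, embedding model name, and top-k value are assumptions for illustration, and the model downloads on first use:

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

# Toy corpus standing in for customer-experience documents.
docs = [
    "Refund policy: refunds are processed within 5 business days.",
    "Store hours are 9am to 6pm on weekdays.",
    "Negative reviews should receive a response within 24 hours.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
emb = model.encode(docs, normalize_embeddings=True)

# Inner product on normalized vectors equals cosine similarity.
index = faiss.IndexFlatIP(emb.shape[1])
index.add(np.asarray(emb, dtype="float32"))

query = model.encode(["How fast are refunds handled?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query, dtype="float32"), 2)
context = "\n".join(docs[i] for i in ids[0])
print(context)  # would be passed to the LLM prompt as grounding context
```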
Technical Skillset Required
- Programming: Expert in Python, with strong OOP and data structure fundamentals.
- Frameworks: Proficient in PyTorch, TensorFlow, Hugging Face Transformers, LangChain, and OpenAI/Anthropic APIs.
- NLP/LLM: Strong grasp of the Transformer architecture, attention mechanisms, self-supervised learning, and LLM evaluation techniques.
- MLOps: Skilled in CI/CD tools, FastAPI, Docker, Kubernetes, and deployment automation on AWS/Azure/GCP.
- Databases: Hands-on with SQL/NoSQL databases, vector DBs, and retrieval systems.
- Tooling: Familiarity with Haystack, Rasa, Semantic Kernel, LangChain Agents, and memory-based orchestration for agents.
- Applied Research: Experience integrating recent GenAI research (AutoGPT-style agents, Toolformer, etc.) into production systems.

Bonus Points
- Contributions to open-source NLP or LLM projects.
- Publications in AI/NLP/ML conferences or journals.
- Experience in Online Reputation Management (ORM), martech, or CX platforms.
- Familiarity with reinforcement learning, multi-modal AI, or few-shot learning at scale.
Posted 1 month ago
3.0 years
0 Lacs
Kondapur, Telangana, India
On-site
What You'll Do
- Design and build backend components of our MLOps platform in Python on AWS (see the sketch below).
- Collaborate with geographically distributed cross-functional teams.
- Participate in the on-call rotation with the rest of the team to handle production incidents.

What You Know
- At least 3+ years of professional backend development experience with Python.
- Experience with web development frameworks such as Flask or FastAPI.
- Experience working with WSGI and ASGI web servers such as Gunicorn, Uvicorn, etc.
- Experience with concurrent programming designs such as AsyncIO.
- Experience with containers (Docker) and container platforms like AWS ECS or AWS EKS.
- Experience with unit and functional testing frameworks.
- Experience with public cloud platforms like AWS.
- Experience with CI/CD practices, tools, and frameworks.

Nice-to-have skills
- Experience with Apache Kafka and developing Kafka client applications in Python.
- Experience with MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow.
- Experience with big data processing frameworks, preferably Apache Spark.
- Experience with DevOps and IaC tools such as Terraform, Jenkins, etc.
- Experience with various Python packaging options such as Wheel, PEX, or Conda.
- Experience with metaprogramming techniques in Python.

Education
- Bachelor's degree in Computer Science, Information Systems, Engineering, Computer Applications, or a related field.

Benefits
In addition to competitive salaries and benefits packages, Nisum India offers its employees some unique and fun extras:
- Continuous Learning: year-round training sessions are offered as part of skill-enhancement certifications sponsored by the company on an as-needed basis. We support our team to excel in their field.
- Parental Medical Insurance: Nisum believes our team is the heart of our business, and we want to make sure to take care of the heart of theirs. We offer opt-in parental medical insurance in addition to our medical benefits.
- Activities: from the Nisum Premier League's cricket tournaments to hosted hackathons, Nisum employees can participate in a variety of team-building activities such as skits and dance performances, in addition to festival celebrations.
- Free Meals: free snacks and dinner are provided daily, in addition to subsidized lunch.
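A minimal FastAPI/AsyncIO sketch of the kind of backend component described above; the endpoint and helper names are hypothetical, and the sleeps stand in for real I/O:

```python
import asyncio
from fastapi import FastAPI

app = FastAPI()

async def fetch_model_metadata(model_id: str) -> dict:
    await asyncio.sleep(0.05)  # stand-in for an async registry/DB call
    return {"id": model_id, "stage": "production"}

async def fetch_latest_metrics(model_id: str) -> dict:
    await asyncio.sleep(0.05)  # stand-in for a metrics-store call
    return {"auc": 0.91}

@app.get("/models/{model_id}")
async def get_model(model_id: str):
    # Run the two I/O-bound lookups concurrently instead of sequentially.
    meta, metrics = await asyncio.gather(
        fetch_model_metadata(model_id), fetch_latest_metrics(model_id)
    )
    return {**meta, "metrics": metrics}

# Run with an ASGI server, e.g.: uvicorn main:app --workers 2
```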
Posted 1 month ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Candidates ready to join immediately can share their details via email for quick processing.
📌 CCTC | ECTC | Notice Period | Location Preference
nitin.patil@ust.com
Act fast for immediate attention! ⏳📩

Roles and Responsibilities:
Architecture & Infrastructure Design
- Architect scalable, resilient, and secure AI/ML infrastructure on AWS using services like EC2, SageMaker, Bedrock, VPC, RDS, DynamoDB, and CloudWatch.
- Develop Infrastructure as Code (IaC) using Terraform, and automate deployments with CI/CD pipelines.
- Optimize cost and performance of cloud resources used for AI workloads.
AI Project Leadership
- Translate business objectives into actionable AI strategies and solutions.
- Oversee the entire AI lifecycle, from data ingestion, model training, and evaluation to deployment and monitoring.
- Drive roadmap planning, delivery timelines, and project success metrics.
Model Development & Deployment
- Lead selection and development of AI/ML models, particularly for NLP, GenAI, and AIOps use cases.
- Implement frameworks for bias detection, explainability, and responsible AI.
- Enhance model performance through tuning and efficient resource utilization.
Security & Compliance
- Ensure data privacy, security best practices, and compliance with IAM policies, encryption standards, and regulatory frameworks.
- Perform regular audits and vulnerability assessments to ensure system integrity.
Team Leadership & Collaboration
- Lead and mentor a team of cloud engineers, ML practitioners, software developers, and data analysts.
- Promote cross-functional collaboration with business and technical stakeholders.
- Conduct technical reviews and ensure delivery of production-grade solutions.
Monitoring & Maintenance
- Establish robust model monitoring, alerting, and feedback loops to detect drift and maintain model reliability (see the drift-check sketch below).
- Ensure ongoing optimization of infrastructure and ML pipelines.

Must-Have Skills:
- 10+ years of experience in IT, with 4+ years in AI/ML leadership roles.
- Strong hands-on experience with AWS services: EC2, SageMaker, Bedrock, RDS, VPC, DynamoDB, CloudWatch.
- Expertise in Python for ML development and automation.
- Solid understanding of Terraform, Docker, Git, and CI/CD pipelines.
- Proven track record of delivering AI/ML projects into production environments.
- Deep understanding of MLOps, model versioning, monitoring, and retraining pipelines.
- Experience implementing Responsible AI practices, including fairness, explainability, and bias mitigation.
- Knowledge of cloud security best practices and IAM role configuration.
- Excellent leadership, communication, and stakeholder management skills.

Good-to-Have Skills:
- AWS certifications such as AWS Certified Machine Learning - Specialty or AWS Certified Solutions Architect.
- Familiarity with data privacy laws and frameworks (GDPR, HIPAA).
- Experience with AI governance and ethical AI frameworks.
- Expertise in cost optimization and performance tuning for AI on the cloud.
- Exposure to LangChain, LLMs, Kubeflow, or GCP-based AI services.

Skills: Enterprise Architecture, Enterprise Architect, AWS, Python
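One common way to implement the drift detection mentioned above is a two-sample statistical test; this hedged sketch uses SciPy's Kolmogorov-Smirnov test on synthetic data, with an illustrative significance threshold:

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_drift(train_col: np.ndarray, live_col: np.ndarray,
                  alpha: float = 0.01) -> bool:
    """Flag drift when the live feature distribution differs
    significantly from the training distribution."""
    stat, p_value = ks_2samp(train_col, live_col)
    return p_value < alpha

rng = np.random.default_rng(42)
train = rng.normal(0.0, 1.0, 10_000)  # feature values at training time
live = rng.normal(0.4, 1.0, 2_000)    # recent production traffic, shifted

if feature_drift(train, live):
    print("Drift detected: trigger alert / retraining pipeline")
```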
Posted 1 month ago
10.0 years
0 Lacs
Thiruvananthapuram, Kerala, India
On-site
Candidates ready to join immediately can share their details via email for quick processing.
📌 CCTC | ECTC | Notice Period | Location Preference
nitin.patil@ust.com
Act fast for immediate attention! ⏳📩

Roles and Responsibilities:
Architecture & Infrastructure Design
- Architect scalable, resilient, and secure AI/ML infrastructure on AWS using services like EC2, SageMaker, Bedrock, VPC, RDS, DynamoDB, and CloudWatch.
- Develop Infrastructure as Code (IaC) using Terraform, and automate deployments with CI/CD pipelines.
- Optimize cost and performance of cloud resources used for AI workloads.
AI Project Leadership
- Translate business objectives into actionable AI strategies and solutions.
- Oversee the entire AI lifecycle, from data ingestion, model training, and evaluation to deployment and monitoring.
- Drive roadmap planning, delivery timelines, and project success metrics.
Model Development & Deployment
- Lead selection and development of AI/ML models, particularly for NLP, GenAI, and AIOps use cases.
- Implement frameworks for bias detection, explainability, and responsible AI.
- Enhance model performance through tuning and efficient resource utilization.
Security & Compliance
- Ensure data privacy, security best practices, and compliance with IAM policies, encryption standards, and regulatory frameworks.
- Perform regular audits and vulnerability assessments to ensure system integrity.
Team Leadership & Collaboration
- Lead and mentor a team of cloud engineers, ML practitioners, software developers, and data analysts.
- Promote cross-functional collaboration with business and technical stakeholders.
- Conduct technical reviews and ensure delivery of production-grade solutions.
Monitoring & Maintenance
- Establish robust model monitoring, alerting, and feedback loops to detect drift and maintain model reliability.
- Ensure ongoing optimization of infrastructure and ML pipelines.

Must-Have Skills:
- 10+ years of experience in IT, with 4+ years in AI/ML leadership roles.
- Strong hands-on experience with AWS services: EC2, SageMaker, Bedrock, RDS, VPC, DynamoDB, CloudWatch.
- Expertise in Python for ML development and automation.
- Solid understanding of Terraform, Docker, Git, and CI/CD pipelines.
- Proven track record of delivering AI/ML projects into production environments.
- Deep understanding of MLOps, model versioning, monitoring, and retraining pipelines.
- Experience implementing Responsible AI practices, including fairness, explainability, and bias mitigation.
- Knowledge of cloud security best practices and IAM role configuration.
- Excellent leadership, communication, and stakeholder management skills.

Good-to-Have Skills:
- AWS certifications such as AWS Certified Machine Learning - Specialty or AWS Certified Solutions Architect.
- Familiarity with data privacy laws and frameworks (GDPR, HIPAA).
- Experience with AI governance and ethical AI frameworks.
- Expertise in cost optimization and performance tuning for AI on the cloud.
- Exposure to LangChain, LLMs, Kubeflow, or GCP-based AI services.

Skills: Enterprise Architecture, Enterprise Architect, AWS, Python
Posted 1 month ago
0.0 - 2.0 years
0 Lacs
Hyderabad, Telangana
On-site
General Information
Country: India | State: Telangana | City: Hyderabad | Job ID: 44779 | Department: Development | Experience Level: Executive | Employment Status: Full-time | Workplace Type: On-site

Description & Requirements
As an Associate Machine Learning Engineer / Data Scientist, you will contribute to the advancement of research projects in artificial intelligence and machine learning. Your responsibilities will encompass areas such as large language models, image processing, and sentiment analysis. You will work collaboratively with development partners to incorporate AI research into products such as Digital Assistant and Document Capture.

Essential Duties:
- Model Development: assist in designing and implementing AI/ML models. Contribute to building innovative models and integrating them into existing systems.
- Fine-tuning Models: support the fine-tuning of pre-trained models for specific tasks and domains. Ensure models are optimized for accuracy and efficiency.
- Data Clean-up: conduct data analysis and pre-processing to ensure the quality and relevance of training datasets. Implement data-cleaning techniques.
- Natural Language Processing (NLP): assist in the development of NLP tasks like sentiment analysis, text classification, and language understanding (see the sketch below).
- Large Language Models (LLMs): work with state-of-the-art LLMs and explore their applications in various domains. Support continuous improvement and adaptation of LLMs.
- Research and Innovation: stay updated with advancements in AI/ML, NLP, and LLMs. Experiment with new approaches to solve complex problems and improve methodologies.
- Deployment and Monitoring: collaborate with DevOps teams to deploy AI/ML models. Implement monitoring mechanisms to track model performance.
- Documentation: maintain clear documentation of AI/ML processes, models, and improvements to ensure knowledge sharing and collaboration.

Basic Qualifications:
- 1-2 years of total industry experience, with a minimum of 6 months in ML and data science.
- Bachelor's or Master's degree in Computer Science, Electrical Engineering, Mathematics, Statistics, or a related field. Specialization or coursework in AI, ML, Statistics & Probability, DL, Computer Vision, Signal Processing, or NLP/NLU is a plus.
- Proficiency in programming languages commonly used in AI and ML, such as Python or R, and querying languages like SQL.
- Experience with cloud computing infrastructures like AWS SageMaker or Azure ML for implementing ML solutions is highly preferred.
- Experience with relevant libraries and frameworks, such as scikit-learn, Keras, TensorFlow, PyTorch, or NLTK, is a plus.
- Strong problem-solving and analytical skills.
- Good oral and written communication skills.

This role offers a great opportunity to work with cutting-edge AI/ML technologies and contribute to innovative projects in a collaborative environment.
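A small, hedged example of the sentiment-analysis duty using the Hugging Face pipeline API; it pulls a default pre-trained checkpoint on first run, whereas production work would use a domain-tuned model:

```python
from transformers import pipeline

# Default sentiment model downloads on first use; a fine-tuned,
# domain-specific checkpoint would replace it in practice.
classifier = pipeline("sentiment-analysis")

reviews = [
    "The document capture flow is fast and accurate.",
    "The digital assistant keeps misunderstanding my requests.",
]
for review, result in zip(reviews, classifier(reviews)):
    print(f"{result['label']:>8} ({result['score']:.2f})  {review}")
```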
About Infor
Infor is a global leader in business cloud software products for companies in industry-specific markets. Infor builds complete industry suites in the cloud and efficiently deploys technology that puts the user experience first, leverages data science, and integrates easily into existing systems. Over 60,000 organizations worldwide rely on Infor to help overcome market disruptions and achieve business-wide digital transformation. For more information, visit www.infor.com.

Our Values
At Infor, we strive for an environment that is founded on a business philosophy called Principle Based Management™ (PBM™) and eight Guiding Principles: integrity, stewardship & compliance, transformation, principled entrepreneurship, knowledge, humility, respect, and self-actualization. Increasing diversity is important to reflect our markets, customers, partners, and the communities we serve, now and in the future. We have a relentless commitment to a culture based on PBM. Informed by the principles that allow a free and open society to flourish, PBM™ prepares individuals to innovate, improve, and transform while fostering a healthy, growing organization that creates long-term value for its clients and supporters and fulfillment for its employees.

Infor is an Equal Opportunity Employer. We are committed to creating a diverse and inclusive work environment. Infor does not discriminate against candidates or employees because of their sex, race, gender identity, disability, age, sexual orientation, religion, national origin, veteran status, or any other protected status under the law. If you require accommodation or assistance at any time during the application or selection processes, please submit a request by following the directions located in the FAQ section at the bottom of the infor.com/about/careers webpage.
Posted 1 month ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Want to be on a team full of results-driven individuals who are constantly seeking to innovate? Want to make an impact? At SailPoint, our Engineering team does just that. Our engineering organization is where high-quality professional engineering meets individual impact. Our team creates products built on a mature, cloud-native, event-driven microservices architecture hosted in AWS.

SailPoint is seeking a Backend Software Engineer to help build a new cloud-based SaaS identity analytics product. We are looking for well-rounded backend or full-stack engineers who are passionate about building and delivering reliable, scalable microservices and infrastructure for SaaS products. As one of the first members on the team, you will be integral in building this product and will be part of an agile team that is in startup mode. This is a unique opportunity to build something from scratch while having the backing of an organization that has the muscle to take it to market quickly, with a very satisfied customer base.

Responsibilities
- Deliver efficient, maintainable data pipelines (see the pipeline sketch below).
- Deliver robust, bug-free code for Java-based microservices.
- Build and maintain data analytics and machine learning features.
- Produce designs and rough estimates, and implement features based on product requirements.
- Collaborate with peers on designs, code reviews, and testing.
- Produce unit and end-to-end tests to improve code quality and maximize code coverage for new and existing features.
- Participate in on-call production support.

Requirements
- 4+ years of professional software development experience
- Strong Python, SQL, and Java experience
- Great communication skills
- BS in Computer Science or a related field
- Comprehensive experience with object-oriented analysis and design
- Experience with workflow engines
- Experience with continuous delivery and source control
- Experience with observability platforms for performance-metrics collection and monitoring

Preferred
- Strong experience with Airflow, Snowflake, and dbt
- Experience with ML pipelines (SageMaker)
- Experience with continuous delivery
- Experience working on a Big Data / Machine Learning product

Compensation and benefits
- Experience a small-company atmosphere with big-company benefits.
- Recharge your batteries with a flexible vacation policy and paid holidays.
- Grow with us with both technical and career growth opportunities.
- Enjoy a healthy work-life balance with flexible hours, family-friendly company events, and charitable work.

SailPoint is an equal opportunity employer and we welcome all qualified candidates to apply to join our team. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other category protected by applicable law. Alternative methods of applying for employment are available to individuals unable to submit an application through this site because of a disability. Contact hr@sailpoint.com or mail to 11120 Four Points Dr, Suite 100, Austin, TX 78726, to discuss reasonable accommodations.
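To ground the data-pipeline responsibility, here is a minimal Airflow DAG sketch; the task names and schedule are illustrative, and the `schedule` argument assumes Airflow 2.4+ (older releases use `schedule_interval`):

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull identity events from the warehouse")

def score():
    print("run the anomaly-scoring model")

# Daily pipeline: extract -> score.
with DAG(
    dag_id="identity_analytics_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    score_task = PythonOperator(task_id="score", python_callable=score)
    extract_task >> score_task
```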
Posted 1 month ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Full Stack AI Architect
Experience: 10+ years
Location: Hyderabad/Chennai

Summary: We are seeking a highly skilled Full Stack AI Architect to join our team and work under the guidance of the Principal Architect to develop and deploy LLM agents and multi-agent frameworks. The ideal candidate will have 10+ years of experience in software engineering, with a strong focus on AI/ML. This role requires excellent problem-solving capabilities and a strong execution mindset to drive the development and deployment of AI-powered solutions.

Responsibilities
- Collaborate with the Principal Architect to design and implement AI agents and multi-agent frameworks (see the agent-loop sketch below).
- Develop and maintain robust, scalable, and maintainable microservices architectures.
- Ensure seamless integration of AI agents with core systems and databases.
- Develop APIs and SDKs for internal and external consumption.
- Work closely with data scientists to fine-tune and optimize LLMs for specific tasks and domains.
- Implement MLOps practices, including CI/CD pipelines, model versioning, and experiment tracking.
- Design and implement comprehensive monitoring and observability solutions to track model performance, identify anomalies, and ensure system stability.
- Utilize containerization technologies such as Docker and Kubernetes for efficient deployment and scaling of applications.
- Leverage cloud platforms such as AWS, Azure, or GCP for infrastructure and services.
- Design and implement data pipelines for efficient data ingestion, transformation, and storage.
- Ensure data quality and security throughout the data lifecycle.
- Mentor junior engineers and foster a culture of innovation, collaboration, and continuous learning.

Qualifications
- 10+ years of experience in software engineering with a strong focus on AI/ML.
- Proficiency in frontend frameworks like React, Angular, or Vue.js.
- Strong hands-on experience with backend technologies like Node.js, Python (with frameworks like Flask, Django, or FastAPI), or Java.
- Experience with cloud platforms such as AWS, Azure, or GCP.
- Proven ability to design and implement complex, scalable, and maintainable architectures.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.
- Passion for continuous learning and staying up to date with the latest advancements in AI/ML.
- End-to-end experience with at least one full AI stack on Azure, AWS, or GCP, including components such as Azure Machine Learning, AWS SageMaker, or Google AI Platform.
- Hands-on experience with agent frameworks like Autogen, AWS Agent Framework, LangGraph, etc.
- Experience with databases such as MongoDB, PostgreSQL, or similar technologies for efficient data management and integration.

Illustrative Projects You May Have Worked On
- Successfully led the development and deployment of an AI-powered recommendation system using AWS SageMaker, integrating it with a Node.js backend and a React frontend.
- Designed and implemented a real-time fraud detection system on Azure, utilizing Azure Machine Learning for model training and Kubernetes for container orchestration.
- Developed a chatbot using Google AI Platform, integrating it with a Django backend and deploying it on GCP, ensuring seamless interaction with MongoDB for data storage.
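A framework-agnostic sketch of the loop that agent frameworks like those named above implement; the LLM call is stubbed with a canned decision function, and the tool registry and message format are assumptions for illustration:

```python
import json

# Tool registry: name -> callable. Real agents would expose many tools.
TOOLS = {
    "lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"},
}

def fake_llm(messages: list[dict]) -> dict:
    # Stand-in for a real LLM deciding between answering and tool use.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "lookup_order", "args": {"order_id": "A-123"}}
    return {"answer": "Your order A-123 has shipped."}

def run_agent(user_msg: str) -> str:
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(5):  # hard cap on reasoning/tool steps
        decision = fake_llm(messages)
        if "answer" in decision:
            return decision["answer"]
        result = TOOLS[decision["tool"]](**decision["args"])
        messages.append({"role": "tool", "content": json.dumps(result)})
    return "Step limit reached."

print(run_agent("Where is my order?"))
```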
Posted 1 month ago
3.0 years
0 Lacs
Sahibzada Ajit Singh Nagar, Punjab, India
On-site
Job Description: AI/ML Specialist

We are looking for a highly skilled and experienced AI/ML Specialist to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential.

Key Responsibilities
- Develop and maintain web applications using the Django and Flask frameworks.
- Design and implement RESTful APIs using Django Rest Framework (DRF).
- Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation.
- Build and integrate APIs for AI/ML models into existing systems.
- Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
- Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases.
- Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization (see the quantization sketch below).
- Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.
- Ensure the scalability, performance, and reliability of applications and deployed models.
- Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions.
- Write clean, maintainable, and efficient code following best practices.
- Conduct code reviews and provide constructive feedback to peers.
- Stay up to date with the latest industry trends and technologies, particularly in AI/ML.

Required Skills and Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3+ years of professional experience as an AI/ML Specialist.
- Proficient in Python with a strong understanding of its ecosystem.
- Extensive experience with the Django and Flask frameworks.
- Hands-on experience with AWS services for application deployment and management.
- Strong knowledge of Django Rest Framework (DRF) for building APIs.
- Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
- Experience with transformer architectures for NLP and advanced AI solutions.
- Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
- Familiarity with MLOps practices for managing the machine learning lifecycle.
- Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
- Excellent problem-solving skills and the ability to work independently and as part of a team.
- Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.

Skills: Artificial Intelligence (AI), pandas, Natural Language Processing (NLP), NumPy, Machine Learning (ML), TensorFlow, PyTorch, and Python
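As a concrete instance of the quantization technique listed above, here is a hedged PyTorch dynamic-quantization sketch; the toy model stands in for a real trained network:

```python
import torch
import torch.nn as nn

# Toy model standing in for a trained network. Dynamic quantization
# converts Linear layers to int8 at inference time, shrinking the model
# and often speeding up CPU serving with little accuracy loss.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))
model.eval()

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
with torch.no_grad():
    print(quantized(x))
```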
Posted 1 month ago
3.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Pre-Sales Engineer - Cloud (AWS)
Location: Noida, India (Hybrid)
Department: Sales/Engineering
Reports To: Head of Sales

Company Description
Forasoftware, a trusted Microsoft and AWS partner, delivers comprehensive technology solutions that empower businesses across Ireland, the UK, India, and Türkiye. Our expertise spans Microsoft Azure, Microsoft 365, business intelligence, modern work, advanced security, and AWS, helping organizations modernize IT, enhance collaboration, and drive innovation. Forasoftware provides secure, scalable, and compliance-ready solutions tailored to your needs, ensuring you maximize your technology investments for growth and operational efficiency.

Position Overview
The Pre-Sales Engineer will be the technical bridge between our sales teams and their pre-sales customers. We are seeking a highly skilled and motivated Pre-Sales Engineer with expertise in Amazon Web Services (AWS) to join our dynamic team. The ideal candidate will have a strong technical background, excellent communication skills, and the ability to understand and address customer needs. Knowledge of Microsoft Azure products and relevant certifications will be considered a significant advantage.

Experience
Rich experience in delivering the highest-quality pre-sales support and solutions, bringing unique value to the table for customers. Strong understanding and knowledge of AWS, including:
- Amazon EC2, AWS Lambda, Amazon Elastic Kubernetes Service (EKS)
- Amazon S3, Amazon EFS, Amazon Elastic Block Store (EBS)
- Amazon RDS, Amazon DynamoDB, Amazon Aurora
- Amazon VPC, Elastic Load Balancing (ELB), AWS Transit Gateway
- Amazon SageMaker, AWS AI Services (e.g., Amazon Rekognition, Amazon Lex)
- Amazon Redshift, Amazon Kinesis, AWS Glue
- Amazon CloudWatch, AWS Trusted Advisor, AWS Systems Manager

Responsibilities
- Technical Expertise: provide in-depth technical knowledge and support for AWS services, including but not limited to EC2, S3, RDS, and Lambda.
- Customer Engagement: collaborate with the sales team to understand customer requirements and develop tailored solutions that address their needs.
- Solution Design: design and present AWS-based solutions to customers, ensuring they meet both technical and business requirements.
- Demonstrations and POCs: conduct product demonstrations and proof-of-concepts (POCs) to showcase the capabilities and benefits of AWS solutions.
- Documentation: create and maintain technical documentation, including solution architectures, proposals, and presentations.
- Training and Enablement: provide training and enablement sessions for customers and internal teams on AWS products and solutions.
- Competitive Analysis: stay updated on industry trends, competitor products, and emerging technologies to provide insights and recommendations.

Qualifications:
- Education: Bachelor's degree in Computer Science, Information Technology, or a related field.
- Experience: minimum of 3-5 years of experience in a pre-sales or technical consulting role, with a focus on AWS.
- Certifications: AWS Certified Solutions Architect, AWS Certified DevOps Engineer, or other relevant AWS certifications.
- Bonus Points: knowledge of Microsoft Azure products and certifications such as Azure Solutions Architect Expert or Azure DevOps Engineer Expert.
- Technical Skills: proficiency in cloud architecture, networking, security, and automation. Experience with scripting languages such as Python or PowerShell is a plus.
- Soft Skills: excellent communication, presentation, and interpersonal skills.
Ability to work collaboratively in a team environment and manage multiple priorities.
Posted 1 month ago
0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Company Description
Rayblaze Global Private Limited is a technology-driven solution provider that empowers businesses to streamline operations, enhance customer experiences, and achieve digital transformation. With a focus on tailored IT solutions since 2016, we offer web and mobile app development, custom CMS, e-commerce platforms, ERP systems, IoT solutions, SEO, and digital marketing. Our approach is guided by understanding our clients' business goals, challenges, and budgets to deliver impactful solutions across industries.

Role Description
This is a full-time on-site role for a Machine Learning Engineer located in Trivandrum. The Machine Learning Engineer will be responsible for developing algorithms, implementing neural networks, applying statistics, and utilizing pattern recognition techniques. The role will involve working on projects that optimize and innovate various business processes using machine learning technologies.

Qualifications
- Skills in pattern recognition and neural networks
- Proficiency in computer science and algorithms
- Knowledge of statistics and its application in machine learning
- Strong analytical and problem-solving skills
- Experience in developing machine learning models and algorithms
- Ability to work collaboratively in a team environment
- Master's degree in Computer Science, Statistics, or a related field

Job Description
- Collect, clean, and prepare data (see the pipeline sketch below).
- Build and optimize machine learning models.
- Use tools like TensorFlow and Scikit-learn.
- Conduct data analysis and statistical assessments.
- Deploy models and ensure performance.
- Proficient in Python, SQL, and machine learning concepts.
- Experience with cloud platforms and large datasets.
- Familiarity with AWS SageMaker (preferred, not mandatory).
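A minimal sketch of the collect-clean-prepare-model flow using pandas and a scikit-learn Pipeline; the columns and values are invented for illustration:

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical raw extract with missing values and mixed scales.
df = pd.DataFrame({
    "feature_a": [1.0, 2.5, np.nan, 4.0, 5.5, 2.0],
    "feature_b": [100, 250, 300, np.nan, 550, 180],
    "label": [0, 1, 1, 0, 1, 0],
})

# Bundling cleaning (imputation), scaling, and the model ensures the
# exact same preparation is applied at training and inference time.
pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
    ("clf", LogisticRegression()),
])
pipe.fit(df[["feature_a", "feature_b"]], df["label"])
print(pipe.predict([[3.0, 400]]))
```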
Posted 1 month ago
7.0 years
3 - 9 Lacs
Bengaluru
On-site
Bangalore, Karnataka, India | Job ID 766481

Join our Team

About this Opportunity
The complexity of running and optimizing the next generation of wireless networks, such as 5G with distributed edge compute, will require Machine Learning (ML) and Artificial Intelligence (AI) technologies. Ericsson is setting up an AI Accelerator Hub in India to fast-track our strategy execution, using Machine Intelligence (MI) to drive thought leadership and to automate and transform Ericsson's offerings and operations. We collaborate with academia and industry to develop state-of-the-art solutions that simplify and automate processes, creating new value through data insights.

What you will do
As a Senior Data Scientist, you will apply your knowledge of data science and ML tools, backed by strong programming skills, to solve real-world problems. Responsibilities:
1. Lead AI/ML features and capabilities in product/business areas
2. Define business metrics of success for AI/ML projects and translate them into model metrics
3. Lead end-to-end development and deployment of Generative AI solutions for enterprise use cases
4. Design and implement architectures for vector search, embedding models, and RAG systems
5. Fine-tune and evaluate large language models (LLMs) for domain-specific tasks
6. Collaborate with stakeholders to translate vague problems into concrete Generative AI use cases
7. Develop and deploy generative AI solutions using AWS services such as SageMaker, Bedrock, and other AWS AI tools; provide technical expertise and guidance on implementing GenAI models and best practices within the AWS ecosystem
8. Develop secure, scalable, and production-grade AI pipelines
9. Ensure ethical and responsible AI practices
10. Mentor junior team members in GenAI frameworks and best practices
11. Stay current with research and industry trends in Generative AI and apply cutting-edge techniques
12. Contribute to internal AI governance, tooling frameworks, and reusable components
13. Work with large datasets, including petabytes of 4G/5G network and IoT data
14. Propose, select, and test predictive models and other ML systems (see the Spark ML sketch below)
15. Define visualization and dashboarding requirements with business stakeholders
16. Build proofs-of-concept for business opportunities using AI/ML
17. Lead functional and technical analysis to define AI/ML-driven business opportunities
18. Work with multiple data sources and apply the right feature engineering to AI models
19. Lead studies and creative usage of new/existing data sources

What you will bring
Required experience: minimum 7 years
1. Bachelor's/Master's/Ph.D. in Computer Science, Data Science, AI, ML, Electrical Engineering, or related disciplines from reputed institutes
2. 3+ years of applied ML/AI production-level experience
3. Strong programming skills (R/Python)
4. Proven ability to lead AI/ML projects end-to-end
5. Strong grounding in mathematics, probability, and statistics
6. Hands-on experience with data analysis, visualization techniques, and ML frameworks (Python, R, H2O, Keras, TensorFlow, Spark ML)
7. Experience with semi-structured and unstructured data for AI/ML models
8. Strong understanding of building AI models using deep neural networks
9. Experience with Big Data technologies (Hadoop, Cassandra)
10. Ability to source and combine data from multiple sources for ML models

Preferred Qualifications:
1. Good communication skills in English
2. MI MOOC certifications, a plus
3. Domain knowledge in Telecommunication/IoT, a plus
4. Experience with data visualization and dashboard creation, a plus
5. Knowledge of cognitive models, a plus
6. Experience in partnering and collaborative co-creation in a global matrix organization
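To ground the Spark ML requirement, here is a minimal PySpark pipeline sketch; the KPI columns and tiny in-memory dataset are hypothetical stand-ins for real network data:

```python
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kpi-demo").getOrCreate()

# Hypothetical cell-level KPIs: throughput, latency, loss -> degraded flag.
df = spark.createDataFrame(
    [(95.0, 12.0, 0.1, 0), (40.0, 80.0, 2.5, 1),
     (88.0, 20.0, 0.3, 0), (35.0, 95.0, 3.1, 1)],
    ["throughput", "latency", "loss", "degraded"],
)

assembler = VectorAssembler(
    inputCols=["throughput", "latency", "loss"], outputCol="features"
)
lr = LogisticRegression(featuresCol="features", labelCol="degraded")
model = Pipeline(stages=[assembler, lr]).fit(df)
model.transform(df).select("degraded", "prediction").show()
```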
Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible, and to build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
Remote
Software Engineer - Privacy and AI Governance

Role purpose
As a Senior Software Engineer (Privacy and AI Governance), you will help design, develop, and operationalize the technical foundations of Accelya's Privacy by Design and AI governance programs. You will report directly to Accelya's Data Protection Officer / AI Governance Lead, supporting the DPO in enforcing data protection, AI accountability, and privacy-by-design principles. The role also has a dotted line to Accelya's Chief Technology Officer to ensure alignment with technical standards, priorities, and DevOps workflows. You will develop effective relationships within the technology division, working closely with technology leadership and embedding controls effectively without slowing innovation. You will spearhead efforts to ensure that privacy is embedded into our software products' DNA and that Accelya's AI initiatives meet the requirements of applicable regulations (e.g., GDPR, the EU AI Act), AI best practice (such as the NIST AI RMF), Accelya trust principles, and customer expectations.

Duties & Responsibilities
- Use automated data discovery tools to identify personal or sensitive data flows.
- Design and develop internal tools, APIs, and automation scripts to support data privacy workflows (e.g., DSARs, data lineage, consent management) and AI/ML governance frameworks (e.g., model cards, audit logging, explainability checks).
- Review technical controls for data minimization, purpose limitation, access control, and retention policies (see the pseudonymization sketch below).
- Build integrations with privacy and cloud compliance platforms (e.g., OneTrust, AWS Macie, and SageMaker governance tools).
- Collaborate with the AI/ML teams to establish responsible AI development patterns, including bias detection, transparency, and model lifecycle governance.
- Contribute to privacy impact assessments (PIAs) and AI risk assessments by providing technical insights.
- Create dashboards and monitoring systems to flag potential policy or governance violations in pipelines.
- Support the DPO with technical implementation of GDPR, CCPA, and other data protection regulations.
- Collaborate with legal, privacy, and engineering teams to prioritize risks and translate findings into clear, actionable remediation plans.

Knowledge, Experience & Skills
Must-Haves:
- Proven software engineering experience, ideally in backend or systems engineering roles
- Strong programming skills (e.g., Python, Java, or TypeScript)
- Familiarity with data privacy and protection concepts (e.g., pseudonymization, access logging, encryption)
- Understanding of the AI/ML lifecycle
- Experience working with cloud environments (especially AWS)
- Ability to translate legal/policy requirements into technical designs

Nice-to-Haves:
- Experience with privacy or GRC tools (e.g., OneTrust or BigID)
- Knowledge of machine learning fairness, explainability, and AI risk frameworks
- Exposure to data governance frameworks (e.g., NIST AI RMF, ISO/IEC 42001)
- Prior work with privacy-enhancing technologies (PETs), e.g., differential privacy or federated learning
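A hedged sketch of keyed pseudonymization, one of the privacy concepts named above; the field names are illustrative, and in practice the key would live in a secrets manager rather than an environment variable:

```python
import hashlib
import hmac
import os

# Keyed pseudonymization: HMAC-SHA256 turns direct identifiers into
# stable pseudonyms that cannot be reversed without the secret key,
# supporting data minimization while keeping records joinable.
SECRET_KEY = os.environ.get("PSEUDO_KEY", "change-me").encode()

def pseudonymize(value: str) -> str:
    return hmac.new(SECRET_KEY, value.lower().encode(),
                    hashlib.sha256).hexdigest()[:16]

record = {"email": "jane.doe@example.com", "booking_ref": "XY123"}
safe = {k: pseudonymize(v) for k, v in record.items()}
print(safe)
```

What do we offer?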
- Open culture and challenging opportunities to satisfy intellectual needs
- Flexible working hours
- Smart working: hybrid remote/office working environment
- Work-life balance
- Excellent, dynamic, and multicultural environment

About Accelya
Accelya is a leading global software provider to the airline industry, powering 200+ airlines with an open, modular software platform that enables innovative airlines to drive growth, delight their customers, and take control of their retailing. Owned by Vista Equity Partners' long-term perennial fund and with 2,000+ employees based across 10 global offices, Accelya are trusted by industry leaders to deliver now and deliver for the future. The company's passenger, cargo, and industry platforms support airline retailing from offer to settlement, both above and below the wing. Accelya are proud to deliver leading-edge technologies to our customers, including through our partnership with AWS and through the pioneering NDC expertise of our global product teams. We are proud to enable innovation-led growth for the airline industry and put control back in the hands of airlines. For more information, please visit www.accelya.com
Posted 1 month ago
4.5 - 6.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Roles & Responsibilities
- Hands-on experience with real-time ML models/projects.
- Coding in Python; machine learning, basic SQL, Git, and MS Excel.
- Experience using IDEs like Jupyter Notebook, Spyder, and PyCharm.
- Hands-on with AWS services like S3, EC2, SageMaker, and Step Functions.
- Engage with clients/consultants to understand requirements.
- Take ownership of delivering ML models with high-precision outcomes.
- Accountable for high-quality and timely completion of specified work deliverables.
- Write code that is well structured, well documented, and compute-efficient.

Experience
4.5-6 years

Skills
Primary Skill: AI/ML Development
Sub Skill(s): AI/ML Development
Additional Skill(s): AI/ML Development, TensorFlow, NLP, PyTorch

About the Company
Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. We engineer business outcomes for Fortune 500 companies and digital natives in the technology, healthcare, insurance, travel, telecom, and retail & CPG industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. We accelerate experience-led transformation in the delivery of digital platforms. Infogain is also a Microsoft (NASDAQ: MSFT) Gold Partner and Azure Expert Managed Services Provider (MSP). Infogain, an Apax Funds portfolio company, has offices in California, Washington, Texas, the UK, the UAE, and Singapore, with delivery centers in Seattle, Houston, Austin, Kraków, Noida, Gurgaon, Mumbai, Pune, and Bengaluru.
Posted 1 month ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description

Responsibilities
- Design and implement CI/CD pipelines for AI and ML model training, evaluation, and RAG system deployment (including LLMs, vector DBs, embedding and reranking models, governance and observability systems, and guardrails).
- Provision and manage AI infrastructure across cloud hyperscalers (AWS/GCP) using infrastructure-as-code tools, with a strong preference for Terraform.
- Maintain containerized environments (Docker, Kubernetes) optimized for GPU workloads and distributed compute.
- Support vector database, feature store, and embedding store deployments (e.g., pgvector, Pinecone, Redis, Featureform, MongoDB Atlas).
- Monitor and optimize performance, availability, and cost of AI workloads using observability tools (e.g., Prometheus, Grafana, Datadog, or managed cloud offerings); see the metrics sketch below.
- Collaborate with data scientists, AI/ML engineers, and other members of the platform team to ensure smooth transitions from experimentation to production.
- Implement security best practices, including secrets management, model access control, data encryption, and audit logging for AI pipelines.
- Help support the deployment and orchestration of agentic AI systems (LangChain, LangGraph, CrewAI, Copilot Studio, AgentSpace, etc.).

Must-Haves:
- 4+ years of DevOps, MLOps, or infrastructure engineering experience, preferably with 2+ years in AI/ML environments.
- Hands-on experience with cloud-native services (AWS Bedrock/SageMaker, GCP Vertex AI, or Azure ML) and GPU infrastructure management.
- Strong skills in CI/CD tools (GitHub Actions, ArgoCD, Jenkins) and configuration management (Ansible, Helm, etc.).
- Proficient in scripting languages like Python and Bash; Go or similar is a nice plus.
- Experience with monitoring, logging, and alerting systems for AI/ML workloads.
- Deep understanding of Kubernetes and container lifecycle management.

Bonus Attributes:
- Exposure to MLOps tooling such as MLflow, Kubeflow, SageMaker Pipelines, or Vertex Pipelines.
- Familiarity with prompt engineering, model fine-tuning, and inference serving.
- Experience with secure AI deployment and compliance frameworks.
- Knowledge of model versioning, drift detection, and scalable rollback strategies.

Abilities:
- Ability to work with a high level of initiative, accuracy, and attention to detail.
- Ability to prioritize multiple assignments effectively.
- Ability to meet established deadlines.
- Ability to interact successfully, efficiently, and professionally with staff and customers.
- Excellent organization skills.
- Critical thinking ability, ranging from moderately to highly complex problems.
- Flexibility in meeting the business needs of the customer and the company.
- Ability to work creatively and independently with latitude and minimal supervision.
- Ability to use experience and judgment in accomplishing assigned goals.
- Experience navigating organizational structure.
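A small sketch of exposing inference metrics with the Prometheus Python client, as referenced in the monitoring bullet above; metric names and the port are illustrative:

```python
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

# Exposes a /metrics endpoint for Prometheus to scrape; dashboards
# and alert rules are built on top of these series.
INFER_LATENCY = Histogram("model_inference_seconds", "Inference latency")
INFER_ERRORS = Counter("model_inference_errors_total", "Failed inferences")

@INFER_LATENCY.time()
def predict(x: float) -> float:
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for real model work
    return x * 2

if __name__ == "__main__":
    start_http_server(9100)  # metrics served on :9100
    while True:
        try:
            predict(random.random())
        except Exception:
            INFER_ERRORS.inc()
```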
Posted 1 month ago
3.0 - 4.0 years
0 Lacs
Mumbai Metropolitan Region
Remote
Job Title: Data Scientist - Computer Vision & Generative AI
Location: Mumbai
Experience Level: 3 to 4 years
Employment Type: Full-time
Industry: Renewable Energy / Solar Services

Job Overview
We are seeking a talented and motivated Data Scientist with a strong focus on computer vision, generative AI, and machine learning to join our growing team in the solar services sector. You will play a pivotal role in building AI-driven solutions that transform how solar infrastructure is analyzed, monitored, and optimized using image-based intelligence. From drone and satellite imagery to on-ground inspection photos, your work will enable intelligent automation, predictive analytics, and visual understanding in critical areas like fault detection, panel degradation, site monitoring, and more. If you're passionate about working at the cutting edge of AI for real-world sustainability impact, we'd love to hear from you.

Key Responsibilities
- Design, develop, and deploy computer vision models for tasks such as object detection, classification, segmentation, and anomaly detection (see the detection sketch below).
- Work with generative AI techniques (e.g., GANs, diffusion models) to simulate environmental conditions, enhance datasets, or create synthetic training data.
- Build ML pipelines for end-to-end model training, validation, and deployment using Python and modern ML frameworks.
- Analyze drone, satellite, and on-site images to extract meaningful insights for solar panel performance, wear-and-tear detection, and layout optimization.
- Collaborate with cross-functional teams (engineering, field ops, product) to understand business needs and translate them into scalable AI solutions.
- Continuously experiment with the latest models, frameworks, and techniques to improve model performance and robustness.
- Optimize image pipelines for performance, scalability, and edge/cloud deployment.

Key Requirements
- 3-4 years of hands-on experience in data science, with a strong portfolio of computer vision and ML projects.
- Proven expertise in Python and common data science libraries: NumPy, Pandas, Scikit-learn, etc.
- Proficiency with image-based AI frameworks: OpenCV, PyTorch or TensorFlow, Detectron2, YOLOv5/v8, MMDetection, etc.
- Experience with generative AI models like GANs, Stable Diffusion, or ControlNet for image generation or augmentation.
- Experience building and deploying ML models using MLflow, TorchServe, or TensorFlow Serving.
- Familiarity with image annotation tools (e.g., CVAT, Labelbox) and data versioning tools (e.g., DVC).
- Experience with cloud platforms (AWS, GCP, or Azure) for storage, training, or model deployment.
- Experience with Docker, Git, and CI/CD pipelines for reproducible ML workflows.
- Ability to write clean, modular code and a solid understanding of software engineering best practices in AI/ML projects.
- Strong problem-solving skills, curiosity, and the ability to work independently in a fast-paced environment.

Bonus / Preferred Skills
- Experience with remote sensing and working with satellite or drone imagery.
- Exposure to MLOps practices and tools like Kubeflow, Airflow, or SageMaker Pipelines.
- Knowledge of solar technologies, photovoltaic systems, or renewable energy is a plus.
- Familiarity with edge computing for vision applications on IoT devices or drones.
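As a hedged starting point for the object-detection work described above, here is a torchvision inference sketch; the image path is hypothetical, and weights="DEFAULT" assumes torchvision 0.13 or newer:

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

# Pre-trained detector as a starting point; a real solar use case
# would fine-tune it on labeled panel-defect imagery.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

img = convert_image_dtype(read_image("panel.jpg"), torch.float)  # hypothetical drone image
with torch.no_grad():
    out = model([img])[0]

# Keep confident detections only.
keep = out["scores"] > 0.7
print(out["boxes"][keep], out["labels"][keep])
```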
Posted 1 month ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Position Title: AI/ML Engineer
Company: Cyfuture India Pvt. Ltd.
Industry: IT Services and IT Consulting
Location: Sector 81, NSEZ, Noida (5 Days Work From Office)

About Cyfuture
Cyfuture is a trusted name in IT services and cloud infrastructure, offering state-of-the-art data center solutions and managed services across platforms like AWS, Azure, and VMware. We are expanding rapidly in system integration and managed services, building strong alliances with global OEMs like VMware, AWS, Azure, HP, Dell, Lenovo, and Palo Alto.

Position Overview
We are hiring an experienced AI/ML Engineer to lead and shape our AI/ML initiatives. The ideal candidate has hands-on experience in machine learning and artificial intelligence, strong leadership capabilities, and a passion for delivering production-ready solutions. This role involves end-to-end ownership of AI/ML projects, from strategy development to deployment and optimization of large-scale systems.

Key Responsibilities
Lead and mentor a high-performing AI/ML team.
Design and execute AI/ML strategies aligned with business goals.
Collaborate with product and engineering teams to identify impactful AI opportunities.
Build, train, fine-tune, and deploy ML models in production environments.
Manage operations of LLMs and other AI models using modern cloud and MLOps tools.
Implement scalable and automated ML pipelines (e.g., with Kubeflow or MLRun).
Handle containerization and orchestration using Docker and Kubernetes.
Optimize GPU/TPU resources for training and inference tasks.
Develop efficient RAG pipelines with low latency and high retrieval accuracy.
Automate CI/CD workflows for continuous integration and delivery of ML systems.

Key Skills & Expertise

Cloud Computing & Deployment
Proficiency in AWS, Google Cloud, or Azure for scalable model deployment.
Familiarity with cloud-native services like AWS SageMaker, Google Vertex AI, or Azure ML.
Expertise in Docker and Kubernetes for containerized deployments.
Experience with Infrastructure as Code (IaC) using tools like Terraform or CloudFormation.

Machine Learning & Deep Learning
Strong command of frameworks: TensorFlow, PyTorch, Scikit-learn, XGBoost.
Experience with MLOps tools for integration, monitoring, and automation.
Expertise in pre-trained models, transfer learning, and designing custom architectures.

Programming & Software Engineering
Strong skills in Python (NumPy, Pandas, Matplotlib, SciPy) for ML development.
Backend/API development with FastAPI, Flask, or Django (a serving sketch follows this posting).
Database handling with SQL and NoSQL (PostgreSQL, MongoDB, BigQuery).
Familiarity with CI/CD pipelines (GitHub Actions, Jenkins).

Scalable AI Systems
Proven ability to build AI-driven applications at scale.
Handling of large datasets, high-throughput requests, and real-time inference.
Knowledge of distributed computing: Apache Spark, Dask, Ray.

Model Monitoring & Optimization
Hands-on experience with model compression, quantization, and pruning.
A/B testing and performance tracking in production.
Knowledge of model retraining pipelines for continuous learning.

Resource Optimization
Efficient use of compute resources: GPUs, TPUs, CPUs.
Experience with serverless architectures to reduce cost.
Auto-scaling and load balancing for high-traffic systems.

Problem-Solving & Collaboration
Translate complex ML models into user-friendly applications.
Work effectively with data scientists, engineers, and product teams.
Write clear technical documentation and architecture reports.
(ref:hirist.tech)
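A minimal sketch of the serving pattern referenced above: a FastAPI endpoint wrapping a scikit-learn model. The model artifact path and feature schema are hypothetical, not Cyfuture's actual API.

```python
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Inference API")

# Hypothetical artifact produced by an upstream training pipeline.
model = joblib.load("artifacts/churn_model.joblib")

class Features(BaseModel):
    tenure_months: float
    monthly_charges: float
    support_tickets: int

@app.post("/predict")
def predict(features: Features) -> dict:
    """Score a single record and return the positive-class probability."""
    row = [[features.tenure_months, features.monthly_charges, features.support_tickets]]
    proba = model.predict_proba(row)[0][1]
    return {"probability": float(proba)}

# Run with: uvicorn main:app --host 0.0.0.0 --port 8000
```

Containerizing this service with Docker and fronting it with Kubernetes auto-scaling is the deployment path the responsibilities above outline.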
Posted 1 month ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.

Job Description
DATA SCIENTIST

Job Summary
This Data Scientist creates and implements advanced analytics models and solutions to yield predictive and prescriptive insights from large volumes of structured and unstructured data. The position works with a team responsible for the research and implementation of predictive requirements, leveraging industry-standard machine learning and data visualization tools to draw insights that empower confident decisions and product creation. It leverages emerging tools and technologies available in on-prem and cloud environments, transforms data and analytics requirements into predictive solutions, and provides data literacy on a range of machine learning systems at UPS. The position also identifies opportunities for moving from descriptive to predictive and prescriptive solutions, which become inputs to department and project teams for decisions supporting their projects.

Responsibilities
Defines key data sources from UPS and external sources to deliver models.
Develops and implements pipelines that facilitate data cleansing, transformation, and enrichment from multiple sources (internal and external) to serve as inputs for data and analytics systems (a compact pipeline sketch follows this posting).
For larger teams, works with data engineering teams to validate and test data and model pipelines identified during proofs of concept.
Develops data designs based on exploratory analysis of large amounts of data to discover trends and patterns that meet stated business needs.
Defines model key performance indicator (KPI) expectations, and handles validation, testing, and re-training of existing models to meet business objectives.
Reviews and creates repeatable solutions through written project documentation, process flowcharts, logs, and commented clean code to produce datasets usable in analytics and/or predictive modeling.
Synthesizes insights and documents findings through clear and concise presentations and reports to stakeholders; presents operationalized analytic findings and provides recommendations.
Incorporates best practices in statistical modeling, machine learning algorithms, distributed computing, cloud-based AI technologies, and run-time performance tuning, with the goal of deployment and market introduction.
Leverages emerging tools and technologies, together with open-source and vendor products, to create and deliver insights that support predictive and prescriptive solutions.

Qualifications
Expertise in R, SQL, Python, and/or other high-level languages.
Exploratory data analysis (EDA), data engineering, and development of advanced analytics models.
Experience developing AI and ML on platforms like Vertex AI, Databricks, or SageMaker, and familiarity with frameworks like PyTorch, TensorFlow, and Keras.
Experience applying models to small- to medium-scale problems.
Strong analytical skills and attention to detail.
Able to engage key business and executive-level stakeholders to translate business problems into a high-level analytics solution approach.
Expertise in statistical techniques, machine learning, and/or operations research and their application in business.
Deep understanding of data management pipelines and experience launching moderate-scale advanced analytics projects in production.
Demonstrated experience in cloud-AI technologies and knowledge of both Linux/Unix and Windows environments.
Experience implementing open-source technologies and cloud services, with or without the use of enterprise data science platforms.
Core AI/machine learning knowledge and its application in supervised and unsupervised learning domains.
Familiarity with Java or C++ is a plus.
Solid oral and written communication skills, especially around analytical concepts and methods.
Ability to communicate data through a story framework to convey data-driven results to technical and non-technical audiences.
Master's Degree in a quantitative field such as mathematics, computer science, physics, economics, engineering, or statistics (operations research, quantitative social science, etc.), an international equivalent, or equivalent job experience.

Last Day Posted - 2/25/2024
Employee Type: Permanent
UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
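To make the pipeline-and-KPI workflow concrete, here is a compact, generic sketch of an EDA-to-model pipeline in scikit-learn; the dataset, columns, and KPI are hypothetical.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical delivery dataset with a binary "late" label.
df = pd.read_csv("deliveries.csv")
numeric = ["distance_km", "package_weight_kg"]
categorical = ["service_level", "origin_hub"]

X_train, X_test, y_train, y_test = train_test_split(
    df[numeric + categorical], df["late"], test_size=0.2, random_state=42
)

# One pipeline object bundles cleansing/transformation with the model,
# so the same steps run identically at training and scoring time.
pipeline = Pipeline([
    ("prep", ColumnTransformer([
        ("num", StandardScaler(), numeric),
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ])),
    ("model", GradientBoostingClassifier()),
])

pipeline.fit(X_train, y_train)
auc = roc_auc_score(y_test, pipeline.predict_proba(X_test)[:, 1])
print(f"Holdout AUC: {auc:.3f}")  # a model KPI to validate before deployment
```

The holdout metric at the end is exactly the kind of KPI expectation the responsibilities describe defining and re-checking when a model is retrained.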
Posted 1 month ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Join us as a Machine Learning Engineer

In this role, you'll drive and embed the deployment, automation, maintenance, and monitoring of machine learning models and algorithms. Day-to-day, you'll make sure that models and algorithms work effectively in a production environment while promoting data literacy education with business stakeholders. If you see opportunities where others see challenges, you'll find that this solutions-driven role is your chance to solve new problems and enjoy excellent career development.

What you'll do
Your daily responsibilities will include collaborating with colleagues to design and develop advanced machine learning products that power our group for our customers. You'll also codify and automate complex machine learning model productions, including pipeline optimisation. We'll expect you to transform advanced data science prototypes and apply machine learning algorithms and tools. You'll also plan, manage, and deliver larger or complex projects involving a variety of colleagues and teams across our business.

You'll also be responsible for:
Understanding the complex requirements and needs of business stakeholders, developing good relationships, and understanding how machine learning solutions can support our business strategy.
Working with colleagues to productionise machine learning models, including pipeline design, development, testing, and deployment, so the original intent is carried over to production.
Creating frameworks to ensure robust monitoring of machine learning models within a production environment, making sure they deliver quality and performance (a drift-monitoring sketch follows this posting).
Understanding and addressing any shortfalls, for instance through retraining.
Leading direct reports and wider teams in an Agile way within multi-disciplinary data and analytics teams to achieve agreed project and Scrum outcomes.

The skills you'll need
To be successful in this role, you'll need a good academic background in a STEM discipline, such as Mathematics, Physics, Engineering, or Computer Science, and the ability to use data to solve business problems, from hypotheses through to resolution. We'll look to you to have at least twelve years of experience with machine learning on large datasets, as well as experience building, testing, supporting, and deploying advanced machine learning models into a production environment using modern CI/CD tools, including Git, TeamCity, and CodeDeploy.

You'll also need:
A good understanding of machine learning approaches and algorithms, such as supervised and unsupervised learning, deep learning, and NLP, with a strong focus on model development, deployment, and optimisation.
Experience using Python with libraries such as NumPy, Pandas, Scikit-learn, and TensorFlow or PyTorch.
An understanding of PySpark for distributed data processing and manipulation, along with AWS (Amazon Web Services) services including EC2, S3, Lambda, SageMaker, and other cloud tools.
Experience with data processing frameworks such as Apache Kafka and Apache Airflow, containerization technologies such as Docker, and orchestration tools such as Kubernetes.
Experience building GenAI solutions that automate workflows to improve productivity and efficiency.
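As a concrete slice of the model-monitoring frameworks mentioned above, here is a generic population stability index (PSI) check for input drift; the 0.2 alert threshold is a common rule of thumb, not this team's standard.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a training-time feature distribution and live traffic.

    Rule of thumb: < 0.1 stable, 0.1-0.2 moderate shift, > 0.2 investigate/retrain.
    """
    # Bin edges come from the expected (training) distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)

    # Avoid division by zero and log(0) in empty bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)

    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# Example: drift in one feature between training data and this week's traffic.
rng = np.random.default_rng(0)
training = rng.normal(0.0, 1.0, 10_000)
live = rng.normal(0.4, 1.1, 2_000)  # deliberately shifted distribution
psi = population_stability_index(training, live)
print(f"PSI = {psi:.3f}" + ("  -> flag for retraining" if psi > 0.2 else ""))
```

Running a check like this per feature on a schedule, and alerting when the threshold is crossed, is one way to operationalise the "address shortfalls through retraining" responsibility.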
Posted 1 month ago
12.0 - 18.0 years
0 Lacs
Tamil Nadu, India
Remote
Join us as we work to create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all.

This position requires expertise in designing, developing, debugging, and maintaining AI-powered applications and data engineering workflows for both local and cloud environments. The role involves working on large-scale projects, optimizing AI/ML pipelines, and ensuring scalable data infrastructure. As a PMTS, you will be responsible for integrating Generative AI (GenAI) capabilities, building data pipelines for AI model training, and deploying scalable AI-powered microservices. You will collaborate with AI/ML, Data Engineering, DevOps, and Product teams to deliver impactful solutions that enhance our products and services. Additionally, experience in retrieval-augmented generation (RAG), fine-tuning pre-trained LLMs, AI model evaluation, data pipeline automation, and optimizing cloud-based AI deployments is desirable.

Responsibilities

AI-Powered Software Development & API Integration
Develop AI-driven applications, microservices, and automation workflows using FastAPI, Flask, or Django, ensuring cloud-native deployment and performance optimization.
Integrate OpenAI APIs (GPT models, Embeddings, Function Calling) and retrieval-augmented generation (RAG) techniques to enhance AI-powered document retrieval, classification, and decision-making (a retrieval sketch follows this posting).

Data Engineering & AI Model Performance Optimization
Design, build, and optimize scalable data pipelines for AI/ML workflows using Pandas, PySpark, and Dask, integrating data sources such as Kafka, AWS S3, Azure Data Lake, and Snowflake.
Enhance AI model inference efficiency by implementing vector retrieval using FAISS, Pinecone, or ChromaDB, and optimize API latency with tuning techniques (temperature, top-k sampling, max-token settings).

Microservices, APIs & Security
Develop scalable RESTful APIs for AI models and data services, ensuring integration with internal and external systems while securing API endpoints using OAuth, JWT, and API-key authentication.
Implement AI-powered logging, observability, and monitoring to track data pipelines, model drift, and inference accuracy, ensuring compliance with AI governance and security best practices.

AI & Data Engineering Collaboration
Work with AI/ML, Data Engineering, and DevOps teams to optimize AI model deployments, data pipelines, and real-time/batch processing for AI-driven solutions.
Engage in Agile ceremonies, backlog refinement, and collaborative problem-solving to scale AI-powered workflows in areas like fraud detection, claims processing, and intelligent automation.

Cross-Functional Coordination and Communication
Collaborate with Product, UX, and Compliance teams to align AI-powered features with user needs, security policies, and regulatory frameworks (HIPAA, GDPR, SOC 2).
Ensure seamless integration of structured and unstructured data sources (SQL, NoSQL, vector databases) to improve AI model accuracy and retrieval efficiency.

Mentorship & Knowledge Sharing
Mentor junior engineers on AI model integration, API development, and scalable data engineering best practices, and conduct knowledge-sharing sessions.

Education & Experience Required
12-18 years of experience in software engineering or AI/ML development, preferably in AI-driven solutions.
Hands-on experience with Agile development, SDLC, CI/CD pipelines, and AI model deployment lifecycles.
Bachelor's Degree or equivalent in Computer Science, Engineering, Data Science, or a related field.
Proficiency in full-stack development, with expertise in Python (preferred for AI) and Java.
Experience with structured and unstructured data stores: SQL (PostgreSQL, MySQL, SQL Server), NoSQL (OpenSearch, Redis, Elasticsearch), and vector databases (FAISS, Pinecone, ChromaDB).
Cloud & AI infrastructure: AWS (Lambda, SageMaker, ECS, S3) and Azure (Azure OpenAI, ML Studio).
GenAI frameworks and tools: OpenAI API, Hugging Face Transformers, LangChain, LlamaIndex, AutoGPT, CrewAI.
Experience in LLM deployment, retrieval-augmented generation (RAG), and AI search optimization.
Proficiency in AI model evaluation (BLEU, ROUGE, BERTScore, cosine similarity) and responsible AI deployment.
Strong problem-solving skills, AI ethics awareness, and the ability to collaborate across AI, DevOps, and data engineering teams.
Curiosity and eagerness to explore new AI models, tools, and best practices for scalable GenAI adoption.

About athenahealth
Here's our vision: To create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all.

What's unique about our locations?
From an historic, 19th-century arsenal to a converted, landmark power plant, all of athenahealth's offices were carefully chosen to represent our innovative spirit and promote the most positive and productive work environment for our teams. Our 10 offices across the United States and India — plus numerous remote employees — all work to modernize the healthcare experience, together.

Our company culture might be our best feature.
We don't take ourselves too seriously. But our work? That's another story. athenahealth develops and implements products and services that support US healthcare: it's our chance to create healthier futures for ourselves, for our family and friends, for everyone. Our vibrant and talented employees — or athenistas, as we call ourselves — spark the innovation and passion needed to accomplish our goal. We continue to expand our workforce with amazing people who bring diverse backgrounds, experiences, and perspectives at every level, and foster an environment where every athenista feels comfortable bringing their best selves to work. Our size makes a difference, too: we are small enough that your individual contributions will stand out — but large enough to grow your career with our resources and established business stability.

Giving back is integral to our culture. Our athenaGives platform strives to support food security, expand access to high-quality healthcare for all, and support STEM education to develop providers and technologists who will provide access to high-quality healthcare for all in the future. As part of the evolution of athenahealth's Corporate Social Responsibility (CSR) program, we've selected nonprofit partners that align with our purpose and let us foster long-term partnerships for charitable giving, employee volunteerism, insight sharing, collaboration, and cross-team engagement.

What can we do for you?
Along with health and financial benefits, athenistas enjoy perks specific to each location, including commuter support, employee assistance programs, tuition assistance, employee resource groups, and collaborative workspaces — some offices even welcome dogs. In addition to our traditional benefits and perks, we sponsor events throughout the year, including book clubs, external speakers, and hackathons. And we provide athenistas with a company culture based on learning, the support of an engaged team, and an inclusive environment where all employees are valued.

We also encourage a better work-life balance for athenistas with our flexibility. While we know in-office collaboration is critical to our vision, we recognize that not all work needs to be done within an office environment, full-time. With consistent communication and digital collaboration tools, athenahealth enables employees to find a balance that feels fulfilling and productive for each individual situation.
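To make the vector-retrieval responsibilities concrete, here is a minimal RAG retrieval sketch pairing OpenAI embeddings with a FAISS index; the corpus, model choice, and scoring setup are illustrative rather than athenahealth's implementation.

```python
import faiss
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

documents = [  # placeholder corpus
    "Claim denials spiked for payer A in Q3.",
    "Prior authorization is required for procedure code 97110.",
    "Appeals succeed most often when filed within 30 days.",
]

def embed(texts: list[str]) -> np.ndarray:
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    vectors = np.array([item.embedding for item in response.data], dtype="float32")
    faiss.normalize_L2(vectors)  # unit vectors, so inner product == cosine similarity
    return vectors

# Build the index once at ingestion time, then search per query.
doc_vectors = embed(documents)
index = faiss.IndexFlatIP(doc_vectors.shape[1])
index.add(doc_vectors)

query = embed(["Which claims need prior authorization?"])
scores, ids = index.search(query, 2)  # top-2 nearest documents
for score, doc_id in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {documents[doc_id]}")
```

The retrieved snippets would then be passed to an LLM as context; cosine similarity here doubles as one of the evaluation signals named in the qualifications.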
Posted 1 month ago
3.0 years
0 Lacs
Greater Chennai Area
On-site
Chennai / Bangalore / Hyderabad

Who We Are
Tiger Analytics is a global leader in AI and analytics, helping Fortune 1000 companies solve their toughest challenges. We offer full-stack AI and analytics services & solutions to empower businesses to achieve real outcomes and value at scale. We are on a mission to push the boundaries of what AI and analytics can do to help enterprises navigate uncertainty and move forward decisively. Our purpose is to provide certainty to shape a better tomorrow.

Our team of 4000+ technologists and consultants is based in the US, Canada, the UK, India, Singapore, and Australia, working closely with clients across CPG, Retail, Insurance, BFS, Manufacturing, Life Sciences, and Healthcare. Many of our team leaders rank in Top 10 and 40 Under 40 lists, exemplifying our dedication to innovation and excellence. We are a Great Place to Work-Certified™ (2022-24) company, recognized by analyst firms such as Forrester, Gartner, HFS, Everest, ISG, and others. We have been ranked among the 'Best' and 'Fastest Growing' analytics firms by Inc., Financial Times, Economic Times, and Analytics India Magazine.

Curious about the role? What would your typical day look like?
We are looking for a Senior Analyst or Machine Learning Engineer who will work on a broad range of cutting-edge data analytics and machine learning problems across a variety of industries. More specifically, you will:
Engage with clients to understand their business context.
Translate business problems and technical constraints into technical requirements for the desired analytics solution.
Collaborate with a team of data scientists and engineers to embed AI and analytics into the business decision processes.

What do we expect?
3+ years of experience, with at least 1+ years of relevant data science experience.
Proficiency in structured Python, PySpark, and machine learning, with experience productionizing models.
Proficiency in AWS cloud technologies is mandatory.
Experience with, and a good understanding of, SageMaker/Databricks.
Experience with MLOps frameworks (e.g., MLflow or Kubeflow); a sketch of MLflow run tracking follows this posting.
Follows good software engineering practices and has an interest in building reliable and robust software.
Good understanding of data science concepts and the data science model lifecycle.
Working knowledge of Linux or Unix environments, ideally in a cloud environment.
Model deployment / model monitoring experience (preferably in the banking domain).
CI/CD pipeline creation is good to have.
Excellent written and verbal communication skills.
B.Tech from a Tier-1 college; M.S. or M.Tech is preferred.

You are important to us, let's stay connected!
Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply, as there might be a suitable/unique role for you tomorrow. We are an equal-opportunity employer. Our diverse and inclusive culture and values guide us to listen, trust, respect, and encourage people to grow the way they desire.

Note: The designation will be commensurate with expertise and experience. Compensation packages are among the best in the industry.

Additional Benefits: Health insurance (self & family), virtual wellness platform, and knowledge communities.
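For reference, a small sketch of MLflow run tracking, one of the MLOps frameworks named above; the experiment name, parameters, and metric are illustrative.

```python
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("credit-risk-poc")  # illustrative experiment name

X, y = make_classification(n_samples=5_000, n_features=20, random_state=7)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=7)

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=7).fit(X_train, y_train)

    # Parameters, metrics, and the fitted model are versioned together,
    # so any logged run can be reproduced or promoted to deployment.
    mlflow.log_params(params)
    mlflow.log_metric("f1", f1_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")
```

Logging runs this way is the usual first step toward the model-deployment and model-monitoring lifecycle the expectations list describes.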
Posted 1 month ago