8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position: Solution Architect
Location: Chennai / Bangalore / Kuala Lumpur
Experience: 8+ years
Employment Type: Full-time

Job Overview
Join Moving Walls, a trailblazer in the Out-of-Home (OOH) advertising and AdTech ecosystem, as a Solution Architect. This pivotal role places you at the heart of our innovation journey, designing and implementing scalable, efficient, and transformative solutions for our award-winning platforms such as LMX and MAX. With a focus on automating and enhancing media transactions, you will enable seamless connections between media buyers and sellers in a rapidly evolving digital-first landscape. As a Solution Architect, you will bridge the gap between business objectives and technical execution, working in an Agile environment with POD-based execution models to ensure ownership and accountability. You will drive initiatives that revolutionize the way data and technology shape OOH advertising.

Why Join Us?
● Innovative Vision: Be part of a team committed to "Creating the Future of Outernet Media", where every solution impacts global markets across Asia, ANZ, Africa, and more.
● Cutting-edge Projects: Work on features like programmatic deal automation, data-driven audience insights, and dynamic campaign management for platforms connecting billions of ad impressions.
● Collaborative Culture: Collaborate with multidisciplinary teams, including Sales, Product Management, and Engineering, to craft solutions that are customized and impactful.

What You'll Do:
● Architect scalable and innovative solutions for AdTech products, ensuring alignment with organizational goals and market needs.
● Collaborate with cross-functional teams to gather, analyze, and translate business requirements into technical designs.
● Lead the development of programmatic solutions, dynamic audience segmentation tools, and integrations for global markets.
● Enhance existing products by integrating advanced features like dynamic rate cards, bid management, and inventory mapping.
● Advocate for best practices in system design, ensuring the highest standards of security, reliability, and performance.

What You Bring:
● A strong technical background with hands-on experience in cloud-based architectures, API integrations, and data analytics.
● Proven expertise in working within an Agile environment and leading POD-based teams to deliver high-impact results.
● Passion for AdTech innovation and the ability to navigate complex, fast-paced environments.
● Excellent problem-solving skills, creativity, and a customer-centric mindset.

Key Responsibilities
1. Solution Design:
○ Develop end-to-end solution architectures for web, mobile, and cloud-based platforms using the specified tech stack.
○ Translate business requirements into scalable and reliable technical solutions.
2. Agile POD-Based Execution:
○ Collaborate with cross-functional POD teams (Product, Engineering, QA, and Operations) to deliver iterative and focused solutions.
○ Ensure clear ownership of deliverables within the POD, fostering accountability and streamlined execution.
○ Contribute to defining and refining the POD stages to ensure alignment with organizational goals.
3. Collaboration and Stakeholder Management:
○ Work closely with product, engineering, and business teams to define technical requirements.
○ Lead technical discussions with internal and external stakeholders.
4. Technical Expertise:
○ Provide architectural guidance and best practices for system integrations, APIs, and microservices.
○ Ensure solutions meet non-functional requirements like scalability, reliability, and security.
5. Documentation:
○ Prepare and maintain architectural documentation, including solution blueprints and workflows.
○ Create technical roadmaps and detailed design documentation.
6. Mentorship:
○ Guide and mentor engineering teams during development and deployment phases.
○ Review code and provide technical insights to improve quality and performance.
7. Innovation and Optimization:
○ Identify areas for technical improvement and drive innovation in solutions.
○ Evaluate emerging technologies to recommend the best tools and frameworks.

Required Skills and Qualifications
● Bachelor's/Master's degree in Computer Science, Information Technology, or a related field.
● Proven experience as a Solution Architect or a similar role.
● Expertise in programming languages and frameworks: Java, Angular, Python, C++.
● Proficiency in AI/ML frameworks and libraries such as TensorFlow, PyTorch, Scikit-learn, or Keras.
● Experience in deploying AI models in production, including optimizing for performance and scalability.
● Understanding of deep learning, NLP, computer vision, or generative AI techniques.
● Hands-on experience with model fine-tuning, transfer learning, and hyperparameter optimization.
● Strong knowledge of enterprise architecture frameworks (TOGAF, Zachman, etc.).
● Expertise in distributed systems, microservices, and cloud-native architectures.
● Experience in API design, data pipelines, and integration of AI services within existing systems.
● Strong knowledge of databases: MongoDB, SQL, NoSQL.
● Proficiency in working with large-scale datasets, data wrangling, and ETL pipelines.
● Hands-on experience with CI/CD pipelines for AI development.
● Version control systems like Git and experience with ML lifecycle tools such as MLflow or DVC.
● Proven track record of leading AI-driven projects from ideation to deployment.
● Hands-on experience with cloud platforms (AWS, Azure, GCP) for deploying AI solutions.
● Familiarity with Agile methodologies, especially POD-based execution models.
● Strong problem-solving skills and ability to design scalable solutions.
● Excellent communication skills to articulate technical solutions to stakeholders.

Preferred Qualifications
● Experience in e-commerce, AdTech, or OOH (Out-of-Home) advertising technology.
● Knowledge of tools like Jira, Confluence, and Agile frameworks like Scrum or Kanban.
● Certification in cloud technologies (e.g., AWS Solutions Architect).

Tech Stack
● Programming Languages: Java, Python, or C++
● Frontend Framework: Angular
● Database Technologies: MongoDB, SQL, NoSQL
● Cloud Platform: AWS
● Familiarity with data processing tools like Pandas, NumPy, and big data frameworks (e.g., Hadoop, Spark).
● Experience with cloud platforms for AI (AWS SageMaker, Azure ML, Google Vertex AI).
● Understanding of APIs, microservices, and containerization tools like Docker and Kubernetes.

Share your profile to kushpu@movingwalls.com
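The "dynamic rate cards" feature mentioned under product enhancements is, at its core, demand-sensitive pricing logic. A minimal illustrative sketch in Python; the class name, occupancy signal, and multiplier rules are all invented for this example, not Moving Walls' actual pricing:

```python
from dataclasses import dataclass

@dataclass
class RateCard:
    base_cpm: float  # base cost per thousand impressions

def dynamic_price(card: RateCard, occupancy: float, peak_hour: bool) -> float:
    """Adjust the base CPM by demand signals (illustrative rules only)."""
    multiplier = 1.0 + 0.5 * occupancy  # scarcity raises the price
    if peak_hour:
        multiplier *= 1.2               # hypothetical peak-hour premium
    return round(card.base_cpm * multiplier, 2)

# 80% sold-out inventory during a peak hour
print(dynamic_price(RateCard(base_cpm=100.0), occupancy=0.8, peak_hour=True))
```

A real rate card would draw its multipliers from audience data and bid history rather than fixed constants.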
Posted 18 hours ago
0 years
0 Lacs
India
Remote
About Us
Evangelist Apps is a UK-based custom software development company specializing in full-stack web and mobile app development, CRM/ERP solutions, workflow automation, and AI-powered platforms. Trusted by global brands like British Airways, Third Bridge, Hästens Beds, and Duxiana, we help clients solve complex business problems with technology. We're now expanding into AI-driven services and are looking for our first Junior AI Developer to join the team. This is an exciting opportunity to help lay the groundwork for our AI capabilities.

Role Overview
As our first Junior AI Developer, you'll work closely with our senior engineers and product teams to research, prototype, and implement AI-powered features across client solutions. You'll contribute to machine learning models, LLM integrations, and intelligent automation systems that enhance user experiences and internal workflows.

Key Responsibilities
● Assist in building and fine-tuning ML models for tasks like classification, clustering, or NLP
● Integrate AI services (e.g., OpenAI, Hugging Face, AWS, or Vertex AI) into applications
● Develop proof-of-concept projects and deploy lightweight models into production
● Preprocess datasets, annotate data, and evaluate model performance
● Collaborate with product, frontend, and backend teams to deliver end-to-end solutions
● Keep up to date with new trends in machine learning and generative AI

Must-Have Skills
● Solid understanding of Python and popular AI/ML libraries (e.g., scikit-learn, pandas, TensorFlow, or PyTorch)
● Familiarity with foundational ML concepts (e.g., supervised/unsupervised learning, overfitting, model evaluation)
● Experience with REST APIs and working with JSON-based data
● Exposure to LLMs or prompt engineering is a plus
● Strong problem-solving attitude and eagerness to learn
● Good communication and documentation skills

Nice-to-Haves (Good to Learn On the Job)
● Experience with cloud-based ML tools (AWS SageMaker, Google Vertex AI, or Azure ML)
● Basic knowledge of MLOps and deployment practices
● Prior internship or personal projects involving AI or automation
● Contributions to open-source or Kaggle competitions

What We Offer
● Mentorship from experienced engineers and a high-learning environment
● Opportunity to work on real-world client projects from day one
● Exposure to multiple industry domains including expert networks, fintech, healthtech, and e-commerce
● Flexible working hours and remote-friendly culture
● Rapid growth potential as our AI practice scales
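The "overfitting, model evaluation" concepts listed under Must-Have Skills come down to comparing training performance against held-out performance. A minimal scikit-learn sketch on synthetic data; the dataset and model choice are illustrative only:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data stands in for a real client dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# A large gap between train and test accuracy is the basic overfitting signal.
train_acc = accuracy_score(y_tr, model.predict(X_tr))
test_acc = accuracy_score(y_te, model.predict(X_te))
print(f"train={train_acc:.2f} test={test_acc:.2f}")
```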
Posted 18 hours ago
3.0 - 7.0 years
7 - 16 Lacs
Hyderābād
On-site
AI Specialist / Machine Learning Engineer
Location: On-site (Hyderabad)
Department: Data Science & AI Innovation
Experience Level: Mid–Senior
Reports To: Director of AI / CTO
Employment Type: Full-time

Job Summary
We are seeking a skilled and forward-thinking AI Specialist to join our advanced technology team. In this role, you will lead the design, development, and deployment of cutting-edge AI/ML solutions, including large language models (LLMs), multimodal systems, and generative AI. You will collaborate with cross-functional teams to develop intelligent systems, automate complex workflows, and unlock insights from data at scale.

Key Responsibilities
● Design and implement machine learning models for natural language processing (NLP), computer vision, predictive analytics, and generative AI.
● Fine-tune and deploy LLMs using frameworks such as Hugging Face Transformers, OpenAI APIs, and Anthropic Claude.
● Develop Retrieval-Augmented Generation (RAG) pipelines using tools like LangChain, LlamaIndex, and vector databases (e.g., Pinecone, Weaviate, Qdrant).
● Productionize ML workflows using MLflow, TensorFlow Extended (TFX), or AWS SageMaker Pipelines.
● Integrate generative AI with business applications, including Copilot-style features, chat interfaces, and workflow automation.
● Collaborate with data scientists, software engineers, and product managers to build and scale AI-powered products.
● Monitor, evaluate, and optimize model performance, focusing on fairness, explainability (e.g., SHAP, LIME), and data/model drift.
● Stay informed on cutting-edge AI research (e.g., NeurIPS, ICLR, arXiv) and evaluate its applicability to business challenges.

Tools & Technologies
● Languages & Frameworks: Python, PyTorch, TensorFlow, JAX; FastAPI, LangChain, LlamaIndex
● ML & AI Platforms: OpenAI (GPT-4/4o), Anthropic Claude, Mistral, Cohere; Hugging Face Hub & Transformers; Google Vertex AI, AWS SageMaker, Azure ML
● Data & Deployment: MLflow, DVC, Apache Airflow, Ray; Docker, Kubernetes, RESTful APIs, GraphQL; Snowflake, BigQuery, Delta Lake
● Vector Databases & RAG Tools: Pinecone, Weaviate, Qdrant, FAISS, ChromaDB, Milvus
● Generative & Multimodal AI: DALL·E, Sora, Midjourney, Runway; Whisper, CLIP, SAM (Segment Anything Model)

Qualifications
● Bachelor's or Master's in Computer Science, AI, Data Science, or related discipline
● 3–7 years of experience in machine learning or applied AI
● Hands-on experience deploying ML models to production environments
● Familiarity with LLM prompt engineering and fine-tuning
● Strong analytical thinking, problem-solving ability, and communication skills

Preferred Qualifications
● Contributions to open-source AI projects or academic publications
● Experience with multi-agent frameworks (e.g., AutoGPT, OpenDevin)
● Knowledge of synthetic data generation and augmentation techniques

Job Type: Permanent
Pay: ₹734,802.74 - ₹1,663,085.14 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift
Work Location: In person
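The retrieval step of the RAG pipelines described in this listing reduces to nearest-neighbour search over embeddings. A toy NumPy sketch in which hand-made three-dimensional vectors stand in for real embedding-model output and for a vector database such as Pinecone or Qdrant:

```python
import numpy as np

# Toy corpus and invented embeddings, purely for illustration.
docs = ["refund policy", "shipping times", "model drift monitoring"]
doc_vecs = np.array([[0.9, 0.1, 0.0],
                     [0.1, 0.9, 0.0],
                     [0.0, 0.1, 0.9]])

def retrieve(query_vec, k=1):
    """Rank documents by cosine similarity to the query vector."""
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec))
    top = np.argsort(sims)[::-1][:k]
    return [docs[i] for i in top]

# A query vector "about drift" retrieves the monitoring doc; a real RAG
# system would then stuff that text into the LLM prompt as context.
print(retrieve(np.array([0.0, 0.2, 0.8])))
```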
Posted 18 hours ago
1.0 - 2.0 years
0 Lacs
Hyderābād
On-site
General Information
Country: India
State: Telangana
City: Hyderabad
Job ID: 44779
Department: Development
Experience Level: Executive
Employment Status: Full-time
Workplace Type: On-site

Description & Requirements
As an Associate Machine Learning Engineer / Data Scientist, you will contribute to the advancement of research projects in artificial intelligence and machine learning. Your responsibilities will encompass areas such as large language models, image processing, and sentiment analysis. You will work collaboratively with development partners to incorporate AI research into products such as Digital Assistant and Document Capture.

Essential Duties:
● Model Development: Assist in designing and implementing AI/ML models. Contribute to building innovative models and integrating them into existing systems.
● Fine-tuning Models: Support the fine-tuning of pre-trained models for specific tasks and domains. Ensure models are optimized for accuracy and efficiency.
● Data Clean-up: Conduct data analysis and pre-processing to ensure the quality and relevance of training datasets. Implement data cleaning techniques.
● Natural Language Processing (NLP): Assist in the development of NLP tasks like sentiment analysis, text classification, and language understanding.
● Large Language Models (LLMs): Work with state-of-the-art LLMs and explore their applications in various domains. Support continuous improvement and adaptation of LLMs.
● Research and Innovation: Stay updated with advancements in AI/ML, NLP, and LLMs. Experiment with new approaches to solve complex problems and improve methodologies.
● Deployment and Monitoring: Collaborate with DevOps teams to deploy AI/ML models. Implement monitoring mechanisms to track model performance.
● Documentation: Maintain clear documentation of AI/ML processes, models, and improvements to ensure knowledge sharing and collaboration.

Basic Qualifications:
● Experience: 1-2 years of total industry experience, with a minimum of 6 months in ML & Data Science.
● Educational Background: Bachelor's or Master's degree in Computer Science, Electrical Engineering, Mathematics, Statistics, or a related field. Specialization or coursework in AI, ML, Statistics & Probability, DL, Computer Vision, Signal Processing, or NLP/NLU is a plus.
● Programming and Tools: Proficiency in programming languages commonly used in AI and ML, such as Python or R, and querying languages like SQL. Experience in cloud computing infrastructures like AWS SageMaker or Azure ML for implementing ML solutions is highly preferred. Experience with relevant libraries and frameworks, such as scikit-learn, Keras, TensorFlow, PyTorch, or NLTK, is a plus.
● Skills: Problem-solving and analytical skills. Good oral and written communication skills.

This role offers a great opportunity to work with cutting-edge AI/ML technologies and contribute to innovative projects in a collaborative environment.

About Infor
Infor is a global leader in business cloud software products for companies in industry-specific markets. Infor builds complete industry suites in the cloud and efficiently deploys technology that puts the user experience first, leverages data science, and integrates easily into existing systems. Over 60,000 organizations worldwide rely on Infor to help overcome market disruptions and achieve business-wide digital transformation. For more information visit www.infor.com

Our Values
At Infor, we strive for an environment that is founded on a business philosophy called Principle Based Management™ (PBM™) and eight Guiding Principles: integrity, stewardship & compliance, transformation, principled entrepreneurship, knowledge, humility, respect, and self-actualization. Increasing diversity is important to reflect our markets, customers, partners, and the communities we serve now and in the future. We have a relentless commitment to a culture based on PBM.
Informed by the principles that allow a free and open society to flourish, PBM™ prepares individuals to innovate, improve, and transform while fostering a healthy, growing organization that creates long-term value for its clients and supporters and fulfillment for its employees. Infor is an Equal Opportunity Employer. We are committed to creating a diverse and inclusive work environment. Infor does not discriminate against candidates or employees because of their sex, race, gender identity, disability, age, sexual orientation, religion, national origin, veteran status, or any other protected status under the law. If you require accommodation or assistance at any time during the application or selection processes, please submit a request by following the directions located in the FAQ section at the bottom of the infor.com/about/careers webpage.
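The sentiment-analysis duty in the listing above can be prototyped in a few lines with scikit-learn; the corpus and labels below are invented toy data for illustration, not Infor's:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labelled corpus; a real project would use thousands of examples.
texts = ["great product, love it", "terrible support, very slow",
         "love the new interface", "slow and terrible experience",
         "great update", "terrible bug"]
labels = [1, 0, 1, 0, 1, 0]  # 1 = positive, 0 = negative

# TF-IDF features feeding a linear classifier: the classic NLP baseline.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["love this great release"])[0])
```

Fine-tuning a pre-trained LLM, as the listing also mentions, would replace the TF-IDF features with learned representations but keep the same train/evaluate loop.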
Posted 18 hours ago
0 years
5 - 15 Lacs
Ahmedabad
On-site
● Proficient in Python, Node.js (or Java), and React (preferred).
● Experience with AWS services: S3, Lambda, DynamoDB, Bedrock, Textract, RDS, Fargate.
● Experience in LLM-based application development (LangChain, Bedrock, or OpenAI APIs).
● Strong in NLP and embeddings (via SageMaker or third-party APIs like Cohere, Hugging Face).
● Knowledge of vector databases (Pinecone, ChromaDB, OpenSearch, etc.).
● Familiar with containerization (Docker, ECS/Fargate).
● Excellent understanding of REST API design and security.
● Experience handling PDF/image-based document classification.
● Good SQL and NoSQL skills (MS SQL, MongoDB).

Preferred Qualifications:
● AWS Certified, especially in AI/ML or Developer Associate.

Job Types: Full-time, Fresher, Internship
Pay: ₹554,144.65 - ₹1,500,000.00 per year
Schedule: Day shift, Morning shift
Supplemental Pay: Performance bonus
Ability to commute/relocate: Ahmedabad, Gujarat: Reliably commute or planning to relocate before starting work (Preferred)
Work Location: In person
Posted 18 hours ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
With a startup spirit and 115,000+ curious and courageous minds, we have the expertise to go deep with the world's biggest brands, and we have fun doing it! We dream in digital, dare in reality, and reinvent the ways companies work to make an impact far bigger than just our bottom line. We're harnessing the power of technology and humanity to create meaningful transformation that moves us forward in our pursuit of a world that works better for people. Now, we're calling upon the thinkers and doers, those with a natural curiosity and a hunger to keep learning, keep growing. People who thrive on fearlessly experimenting, seizing opportunities, and pushing boundaries to turn our vision into reality. And as you help us create a better world, we will help you build your own intellectual firepower. Welcome to the relentless pursuit of better.

Inviting applications for the role of AI Senior Engineer

In this role you'll be leveraging advanced AI capabilities on Azure or AWS, including Azure Machine Learning, Azure OpenAI, Prompt Flow, Azure Cognitive Search, Azure AI Document Intelligence, AWS SageMaker, and AWS Bedrock, to deliver scalable and efficient solutions. You will also ensure seamless integration into enterprise workflows and operationalize models with robust monitoring and optimization.

Responsibilities
● AI Orchestration: Design and manage AI orchestration flows using tools such as Prompt Flow or LangChain. Continuously evaluate and refine models to ensure optimal accuracy, latency, and robustness in production.
● Document AI and Data Extraction: Build AI-driven workflows for extracting structured and unstructured data from receipts, reports, and other documents using Azure AI Document Intelligence and Azure Cognitive Services.
● RAG Systems: Design and implement retrieval-augmented generation (RAG) systems using vector embeddings and LLMs for intelligent and efficient document retrieval. Optimize RAG workflows for large datasets and low-latency operations.
● Monitoring and Optimization: Implement advanced monitoring systems using Azure Monitor, Application Insights, and Log Analytics to track model performance and system health. Continuously evaluate and refine models and workflows to meet enterprise-grade SLAs for performance and reliability.
● Collaboration and Documentation: Collaborate with data engineers, software developers, and DevOps teams to deliver robust and scalable AI-driven solutions. Document best practices, workflows, and troubleshooting guides for knowledge sharing and scalability.

Qualifications we seek in you
● Proven experience with machine learning, Azure OpenAI, Prompt Flow, Azure Cognitive Search, Azure AI Document Intelligence, AWS Bedrock, and SageMaker.
● Proficiency in building and optimizing RAG systems for document retrieval and comparison.
● Strong understanding of AI/ML concepts, including natural language processing (NLP), embeddings, model fine-tuning, and evaluation.
● Experience in applying machine learning algorithms and techniques to solve complex problems in real-world applications.
● Familiarity with state-of-the-art LLM architectures and their practical implementation in production environments.
● Expertise in designing and managing Prompt Flow pipelines for task-specific customization of LLM outputs.
● Hands-on experience in training LLMs and evaluating their performance using appropriate metrics for accuracy, latency, and robustness.
● Proven ability to iteratively refine models to meet specific business needs and optimize them for production environments.
● Knowledge of ethical AI practices and responsible AI frameworks.
● Experience with CI/CD pipelines using Azure DevOps or equivalent tools.
● Familiarity with containerized environments managed through Docker and Kubernetes.
● Knowledge of Azure Key Vault, Managed Identities, and Azure Active Directory (AAD) for secure authentication.
● Experience with PyTorch or TensorFlow.
● Proven track record of developing and deploying Azure-based AI solutions for large-scale, enterprise-grade environments.
● Strong analytical and problem-solving skills, with a results-driven approach to building scalable and secure systems.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
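The "AI orchestration flow" responsibility is, at its simplest, template-filling plus a model call. A dependency-free sketch in which `fake_llm` stands in for a real Azure OpenAI or Bedrock call; the template and helper names are hypothetical, and only the orchestration plumbing is shown:

```python
# One step of a Prompt Flow / LangChain-style pipeline, simplified.
TEMPLATE = "Extract the invoice total from: {document}\nAnswer with a number."

def fake_llm(prompt: str) -> str:
    # Stub standing in for a provider SDK call; pretends to "read" the prompt.
    return "420.50" if "420.50" in prompt else "unknown"

def run_flow(document: str) -> str:
    prompt = TEMPLATE.format(document=document)  # fill the prompt template
    return fake_llm(prompt)                      # call the model

print(run_flow("Invoice #9 ... TOTAL DUE: 420.50 EUR"))
```

A production flow would chain several such steps (retrieval, extraction, validation) and log each intermediate output for monitoring.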
Posted 19 hours ago
5.0 years
0 Lacs
India
Remote
Hi Everyone

Role: Senior Data Scientist - AWS (SageMaker, MLOps)
Shift: 12 PM to 9 PM IST (8 hours)
Experience: 5+ years
Position Type: Remote & Contractual

Primary Skills: MLOps, MLflow, AWS, AWS SageMaker, AWS Data Zone, programming languages
Secondary Skills: Python, R, Scala, integration

Job Description:
● Mandatory: SageMaker, MLflow
● 5+ years of work experience in Software Engineering and MLOps
● Adhere to best practices for developing scalable, reliable, and secure applications
● Development experience on AWS; AWS SageMaker required. AWS Data Zone experience is preferred
● Experience with one or more general-purpose programming languages, including but not limited to Python, R, Scala, and Spark
● Experience with production-grade development, integration, and support
● Candidates with a good analytical mindset who can help us with research in the MLOps area
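The MLflow requirement boils down to logging parameters and metrics per training run. The toy `Run` class below mimics the shape of MLflow's `log_param`/`log_metric` calls with plain dicts so the sketch runs without a tracking server; it illustrates the pattern only and is not the MLflow implementation:

```python
class Run:
    """In-memory stand-in for an MLflow tracking run (illustrative only)."""
    def __init__(self, name):
        self.name, self.params, self.metrics = name, {}, {}

    def log_param(self, key, value):
        self.params[key] = value          # hyperparameters: logged once

    def log_metric(self, key, value):
        self.metrics.setdefault(key, []).append(value)  # metrics: a series

run = Run("sagemaker-training-job")       # hypothetical run name
run.log_param("learning_rate", 0.01)
for loss in [0.9, 0.5, 0.3]:              # pretend training loop
    run.log_metric("loss", loss)

print(run.params, run.metrics["loss"][-1])
```

With real MLflow, the same calls go to a tracking server so runs can be compared and models promoted through a registry.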
Posted 20 hours ago
0.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
● Proficient in Python, Node.js (or Java), and React (preferred).
● Experience with AWS services: S3, Lambda, DynamoDB, Bedrock, Textract, RDS, Fargate.
● Experience in LLM-based application development (LangChain, Bedrock, or OpenAI APIs).
● Strong in NLP and embeddings (via SageMaker or third-party APIs like Cohere, Hugging Face).
● Knowledge of vector databases (Pinecone, ChromaDB, OpenSearch, etc.).
● Familiar with containerization (Docker, ECS/Fargate).
● Excellent understanding of REST API design and security.
● Experience handling PDF/image-based document classification.
● Good SQL and NoSQL skills (MS SQL, MongoDB).

Preferred Qualifications:
● AWS Certified, especially in AI/ML or Developer Associate.

Job Types: Full-time, Fresher, Internship
Pay: ₹554,144.65 - ₹1,500,000.00 per year
Schedule: Day shift, Morning shift
Supplemental Pay: Performance bonus
Ability to commute/relocate: Ahmedabad, Gujarat: Reliably commute or planning to relocate before starting work (Preferred)
Work Location: In person
Posted 20 hours ago
0 years
0 Lacs
India
Remote
Design, provision, and document a production-grade AWS micro-service platform for an Apache-powered ERP implementation, hitting our 90-day "go-live" target while embedding DevSecOps guard-rails the team can run without you.

Key Responsibilities
● Cloud Architecture & IaC: Author Terraform modules for VPC, EKS (Graviton), RDS (MariaDB Multi-AZ), MSK, ElastiCache, S3 lifecycle, API Gateway, WAF, Route 53. Implement node pools (App, Spot Analytics, Cache, GPU) with Karpenter autoscaling.
● CI/CD & GitOps: Set up GitHub Actions pipelines (lint, unit tests, container scan, Terraform plan). Deploy Argo CD for Helm-based application roll-outs (ERP, Bot, Superset, etc.).
● DevSecOps Controls: Enforce OPA Gatekeeper policies, IAM IRSA, Secrets Manager, AWS WAF rules, ECR image scanning. Build CloudWatch/X-Ray dashboards; wire alerting to Slack/email.
● Automation & DR: Define backup plans (RDS PITR, EBS, S3 Std-IA → Glacier). Document the cross-Region fail-over run-book (Route 53 health checks).
● Standard Operating Procedures: Draft SOPs for patching, scaling, on-call, incident triage, and budget monitoring.
● Knowledge Transfer (KT): Run 3× 2-hour remote workshops (infra deep-dive, CI/CD hand-over, DR drill). Produce a "Day-2" wiki: diagrams (Mermaid), run-books, FAQ.

Required Skill Set
● 8+ yrs designing AWS micro-service / Kubernetes architectures (ideally EKS on Graviton).
● Expert in Terraform, Helm, GitHub Actions, Argo CD.
● Hands-on with RDS MariaDB, Kafka (MSK), Redis, SageMaker endpoints.
● Proven DevSecOps background: OPA, IAM least-privilege, vulnerability scanning.
● Comfortable translating infra diagrams into plain-language SOPs for non-cloud staff.
● Nice-to-have: prior ERP deployment experience; WhatsApp Business API integration; EPC or construction IT domain knowledge.

How Success Is Measured
● Go-live readiness: production cluster passes load, fail-over, and security tests by Day 75.
● Zero critical CVEs exposed in the final Trivy scan.
● 99% IaC coverage; manual console changes not permitted.
● Team self-sufficiency: internal staff can recreate the stack from scratch using docs + KT alone.
Posted 21 hours ago
0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
You will lead the development of predictive machine learning models for Revenue Cycle Management analytics, along the lines of:
1. Payer Propensity Modeling: predicting payer behavior and reimbursement likelihood
2. Claim Denials Prediction: identifying high-risk claims before submission
3. Payment Amount Prediction: forecasting expected reimbursement amounts
4. Cash Flow Forecasting: predicting revenue timing and patterns
5. Patient-Related Models: enhancing patient financial experience and outcomes
6. Claim Processing Time Prediction: optimizing workflow and resource allocation

Additionally, we will work on emerging areas and integration opportunities, for example denial prediction + appeal success probability, or prior authorization prediction + approval likelihood models. You will reimagine how providers, patients, and payors interact within the healthcare ecosystem through intelligent automation and predictive insights, ensuring that providers can focus on delivering the highest quality patient care.

VHT Technical Environment
● Cloud Platform: AWS (SageMaker, S3, Redshift, EC2)
● Development Tools: Jupyter Notebooks, Git, Docker
● Programming: Python, SQL, R (optional)
● ML/AI Stack: Scikit-learn, TensorFlow/PyTorch, MLflow, Airflow
● Data Processing: Spark, Pandas, NumPy
● Visualization: Matplotlib, Seaborn, Plotly, Tableau
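A Claim Denials Prediction model of the kind listed above can be prototyped with the stated ML stack (scikit-learn). The feature names and the synthetic denial rule below are invented purely for illustration; a real model would be trained on historical claims data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic claims with made-up features:
# [claim_amount_scaled, days_to_submission_scaled, prior_denial_rate]
X = rng.uniform(0, 1, size=(400, 3))
# Invented ground truth: denials driven by prior denials and late submission.
y = (0.6 * X[:, 2] + 0.3 * X[:, 1] + rng.normal(0, 0.1, 400) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)

# Score new claims BEFORE submission, as the listing describes: a claim with
# a high prior denial rate and long delay should score riskier than a clean one.
risky = model.predict_proba([[0.5, 0.9, 0.9]])[0, 1]
safe = model.predict_proba([[0.5, 0.1, 0.1]])[0, 1]
print(f"risky={risky:.2f} safe={safe:.2f}")
```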
Posted 21 hours ago
3.0 years
0 Lacs
Mohali, Punjab
On-site
Company: Chicmic Studios
Job Role: Python Machine Learning & AI Developer
Experience Required: 3+ Years

We are looking for a highly skilled and experienced Python Developer to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential.

Key Responsibilities
● Develop and maintain web applications using Django and Flask frameworks.
● Design and implement RESTful APIs using Django Rest Framework (DRF).
● Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation.
● Build and integrate APIs for AI/ML models into existing systems.
● Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
● Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases.
● Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization.
● Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.
● Ensure the scalability, performance, and reliability of applications and deployed models.
● Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions.
● Write clean, maintainable, and efficient code following best practices.
● Conduct code reviews and provide constructive feedback to peers.
● Stay up-to-date with the latest industry trends and technologies, particularly in AI/ML.

Required Skills and Qualifications
● Bachelor's degree in Computer Science, Engineering, or a related field.
● 3+ years of professional experience as a Python Developer.
● Proficient in Python with a strong understanding of its ecosystem.
● Extensive experience with Django and Flask frameworks.
● Hands-on experience with AWS services for application deployment and management.
● Strong knowledge of Django Rest Framework (DRF) for building APIs.
● Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
● Experience with transformer architectures for NLP and advanced AI solutions.
● Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
● Familiarity with MLOps practices for managing the machine learning lifecycle.
● Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
● Excellent problem-solving skills and the ability to work independently and as part of a team.
● Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders.

Contact: 9875952836
Office Location: F273, Phase 8B Industrial Area, Mohali, Punjab
Job Type: Full-time
Schedule: Day shift, Monday to Friday
Work Location: In person
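The hyperparameter-tuning responsibility above, in its simplest form, is a grid search over candidate settings with cross-validation. A short scikit-learn sketch on synthetic data; the model and parameter grid are illustrative only:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for a real training set.
X, y = make_classification(n_samples=300, n_features=8, random_state=42)

# Try several regularization strengths; 5-fold CV scores each candidate.
grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```

Pruning and quantization, also mentioned, act after training and are framework-specific (e.g., PyTorch's pruning and quantization utilities).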
Posted 1 day ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Organization Snapshot: Birdeye is the leading all-in-one Experience Marketing platform, trusted by more than 100,000 businesses worldwide to power customer acquisition, engagement, and retention through AI-driven automation and reputation intelligence. From local businesses to global enterprises, Birdeye enables brands to deliver exceptional customer experiences across every digital touchpoint. As we enter our next phase of global scale and product-led growth, AI is no longer an add-on; it is at the very heart of our innovation strategy. Our future is being built on Large Language Models (LLMs), Generative AI, Conversational AI, and intelligent automation that can personalize and enhance every customer interaction in real time. Job Overview: Birdeye is seeking a Senior Data Scientist – NLP & Generative AI to help reimagine how businesses interact with customers at scale through production-grade, LLM-powered AI systems. If you're passionate about building autonomous, intelligent, and conversational systems, this role offers the perfect platform to shape the next generation of agentic AI technologies. As part of our core AI/ML team, you'll design, deploy, and optimize end-to-end intelligent systems spanning LLM fine-tuning, Conversational AI, Natural Language Understanding (NLU), Retrieval-Augmented Generation (RAG), and Autonomous Agent frameworks. This is a high-impact IC role ideal for technologists who thrive at the intersection of deep NLP research and scalable engineering. Key Responsibilities: LLM, GenAI & Agentic AI Systems Architect and deploy LLM-based frameworks using GPT, LLaMA, Claude, Mistral, and open-source models. Implement fine-tuning, LoRA, PEFT, instruction tuning, and prompt tuning strategies for production-grade performance. Build autonomous AI agents with tool use, short/long-term memory, planning, and multi-agent orchestration (using LangChain Agents, Semantic Kernel, Haystack, or custom frameworks).
Design RAG pipelines with vector databases (Pinecone, FAISS, Weaviate) for domain-specific contextualization. Conversational AI & NLP Engineering Build Transformer-based Conversational AI systems for dynamic, goal-oriented dialog, leveraging orchestration tools like LangChain, Rasa, and LLMFlow. Implement NLP solutions for semantic search, NER, summarization, intent detection, text classification, and knowledge extraction. Integrate modern NLP toolkits: SpaCy, BERT/RoBERTa, GloVe, Word2Vec, NLTK, and HuggingFace Transformers. Handle multilingual NLP, contextual embeddings, and dialogue state tracking for real-time systems. Scalable AI/ML Engineering Build and serve models using Python, FastAPI, gRPC, and REST APIs. Containerize applications with Docker, deploy using Kubernetes, and orchestrate with CI/CD workflows. Ensure production-grade reliability, latency optimization, observability, and failover mechanisms. Cloud & MLOps Infrastructure Deploy on AWS SageMaker, Azure ML Studio, or Google Vertex AI, integrating with serverless and auto-scaling services. Own end-to-end MLOps pipelines: model training, versioning, monitoring, and retraining using MLflow, Kubeflow, or TFX. Cross-Functional Collaboration Partner with Product, Engineering, and Design teams to define AI-first experiences. Translate ambiguous business problems into structured ML/AI projects with measurable ROI. Contribute to roadmap planning, POCs, technical whitepapers, and architectural reviews. Technical Skillset Required Programming: Expert in Python, with strong OOP and data structure fundamentals. Frameworks: Proficient in PyTorch, TensorFlow, Hugging Face Transformers, LangChain, OpenAI/Anthropic APIs. NLP/LLM: Strong grasp of Transformer architecture, Attention mechanisms, self-supervised learning, and LLM evaluation techniques. MLOps: Skilled in CI/CD tools, FastAPI, Docker, Kubernetes, and deployment automation on AWS/Azure/GCP.
Databases: Hands-on with SQL/NoSQL databases, Vector DBs, and retrieval systems. Tooling: Familiarity with Haystack, Rasa, Semantic Kernel, LangChain Agents, and memory-based orchestration for agents. Applied Research: Experience integrating recent GenAI research (AutoGPT-style agents, Toolformer, etc.) into production systems. Bonus Points Contributions to open-source NLP or LLM projects. Publications in AI/NLP/ML conferences or journals. Experience in Online Reputation Management (ORM), martech, or CX platforms. Familiarity with reinforcement learning, multi-modal AI, or few-shot learning at scale.
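The RAG responsibilities this posting describes reduce, at their core, to similarity search over embeddings. The sketch below is a dependency-free toy of that retrieval step; in production a vector database such as Pinecone, FAISS, or Weaviate performs it at scale, and the embeddings come from a model rather than the hand-made vectors assumed here:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Toy "vector store": (document, embedding) pairs. Real systems index
# millions of model-produced embeddings.
store = [
    ("refund policy", [0.9, 0.1, 0.0]),
    ("shipping times", [0.1, 0.9, 0.1]),
    ("api reference", [0.0, 0.1, 0.9]),
]

def retrieve(query_vec, k=1):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [doc for doc, _ in ranked[:k]]

# A query embedded "near" the refund document retrieves it first; the
# retrieved text is then prepended to the LLM prompt as grounding context.
assert retrieve([0.8, 0.2, 0.0]) == ["refund policy"]
```

Everything downstream in a RAG pipeline (chunking, re-ranking, prompt assembly) is elaboration around this lookup.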
Posted 1 day ago
3.0 years
0 Lacs
Kondapur, Telangana, India
On-site
What You'll Do Design & build backend components of our MLOps platform in Python on AWS. Collaborate with geographically distributed cross-functional teams. Participate in on-call rotation with the rest of the team to handle production incidents. What You Know At least 3 years of professional backend development experience with Python. Experience with web development frameworks such as Flask or FastAPI. Experience working with WSGI & ASGI web servers such as Gunicorn, Uvicorn, etc. Experience with concurrent programming designs such as AsyncIO. Experience with containers (Docker) and container platforms like AWS ECS or AWS EKS. Experience with unit and functional testing frameworks. Experience with public cloud platforms like AWS. Experience with CI/CD practices, tools, and frameworks. Nice to have skills Experience with Apache Kafka and developing Kafka client applications in Python. Experience with MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow. Experience with big data processing frameworks, preferably Apache Spark. Experience with DevOps & IaC tools such as Terraform, Jenkins, etc. Experience with various Python packaging options such as Wheel, PEX, or Conda. Experience with metaprogramming techniques in Python. Education Bachelor's degree in Computer Science, Information Systems, Engineering, Computer Applications, or related field. Benefits In addition to competitive salaries and benefits packages, Nisum India offers its employees some unique and fun extras: Continuous Learning - Year-round training sessions are offered as part of skill enhancement certifications sponsored by the company on an as-needed basis. We support our team to excel in their field. Parental Medical Insurance - Nisum believes our team is the heart of our business and we want to make sure to take care of the heart of theirs. We offer opt-in parental medical insurance in addition to our medical benefits.
Activities - From the Nisum Premier League's cricket tournaments to hosted hackathons, Nisum employees can participate in a variety of team-building activities, such as skits and dance performances, in addition to festival celebrations. Free Meals - Free snacks and dinner are provided on a daily basis, in addition to subsidized lunch.
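Of the requirements above, concurrent programming with AsyncIO is the most compact to demonstrate. This is a minimal sketch using `asyncio.gather`, with `asyncio.sleep` standing in for real non-blocking I/O such as an HTTP call; the names are illustrative:

```python
import asyncio

async def fetch(name, delay):
    """Stand-in for a non-blocking I/O call (e.g. an HTTP request)."""
    await asyncio.sleep(delay)
    return name

async def main():
    # gather() runs the coroutines concurrently, so total wall time is
    # roughly that of the slowest call, not the sum of all of them.
    return await asyncio.gather(fetch("users", 0.02), fetch("orders", 0.01))

results = asyncio.run(main())
assert results == ["users", "orders"]  # results follow argument order
```

Note that `gather` preserves argument order in its results even when the underlying calls complete in a different order, which is what makes it convenient for fan-out request patterns.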
Posted 1 day ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Candidates ready to join immediately can share their details via email for quick processing. 📌 CCTC | ECTC | Notice Period | Location Preference nitin.patil@ust.com Act fast for immediate attention! ⏳📩 Roles and Responsibilities: Architecture & Infrastructure Design Architect scalable, resilient, and secure AI/ML infrastructure on AWS using services like EC2, SageMaker, Bedrock, VPC, RDS, DynamoDB, and CloudWatch. Develop Infrastructure as Code (IaC) using Terraform, and automate deployments with CI/CD pipelines. Optimize cost and performance of cloud resources used for AI workloads. AI Project Leadership Translate business objectives into actionable AI strategies and solutions. Oversee the entire AI lifecycle, from data ingestion, model training, and evaluation to deployment and monitoring. Drive roadmap planning, delivery timelines, and project success metrics. Model Development & Deployment Lead selection and development of AI/ML models, particularly for NLP, GenAI, and AIOps use cases. Implement frameworks for bias detection, explainability, and responsible AI. Enhance model performance through tuning and efficient resource utilization. Security & Compliance Ensure data privacy, security best practices, and compliance with IAM policies, encryption standards, and regulatory frameworks. Perform regular audits and vulnerability assessments to ensure system integrity. Team Leadership & Collaboration Lead and mentor a team of cloud engineers, ML practitioners, software developers, and data analysts. Promote cross-functional collaboration with business and technical stakeholders. Conduct technical reviews and ensure delivery of production-grade solutions. Monitoring & Maintenance Establish robust model monitoring, alerting, and feedback loops to detect drift and maintain model reliability. Ensure ongoing optimization of infrastructure and ML pipelines. Must-Have Skills: 10+ years of experience in IT with 4+ years in AI/ML leadership roles.
Strong hands-on experience with AWS services: EC2, SageMaker, Bedrock, RDS, VPC, DynamoDB, CloudWatch. Expertise in Python for ML development and automation. Solid understanding of Terraform, Docker, Git, and CI/CD pipelines. Proven track record of delivering AI/ML projects into production environments. Deep understanding of MLOps, model versioning, monitoring, and retraining pipelines. Experience implementing Responsible AI practices, including fairness, explainability, and bias mitigation. Knowledge of cloud security best practices and IAM role configuration. Excellent leadership, communication, and stakeholder management skills. Good-to-Have Skills: AWS certifications such as AWS Certified Machine Learning – Specialty or AWS Certified Solutions Architect. Familiarity with data privacy laws and frameworks (GDPR, HIPAA). Experience with AI governance and ethical AI frameworks. Expertise in cost optimization and performance tuning for AI on the cloud. Exposure to LangChain, LLMs, Kubeflow, or GCP-based AI services. Skills: Enterprise Architecture, Enterprise Architect, AWS, Python
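The drift-detection duty this role describes can be illustrated with one very simple signal. The sketch below flags drift when a live feature's mean moves more than an assumed two baseline standard deviations from its training-time mean; production monitoring uses richer statistics (PSI, Kolmogorov-Smirnov tests) and per-feature thresholds, so treat this purely as a toy:

```python
import statistics

def drifted(baseline, live, threshold=2.0):
    """Flag drift when the live mean moves more than `threshold`
    baseline standard deviations from the training-time mean.
    Assumes a non-constant baseline sample."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.mean(live) - mu) / sigma > threshold

# Training-time distribution of some feature, then two live windows.
baseline = [10, 11, 9, 10, 12, 10, 11, 9]
assert drifted(baseline, [10, 11, 10, 9]) is False   # still in range
assert drifted(baseline, [19, 20, 21, 20]) is True   # clearly shifted
```

In a feedback loop, a `True` result would typically raise an alert and, depending on policy, trigger a retraining pipeline.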
Posted 1 day ago
10.0 years
0 Lacs
Thiruvananthapuram, Kerala, India
On-site
Candidates ready to join immediately can share their details via email for quick processing. 📌 CCTC | ECTC | Notice Period | Location Preference nitin.patil@ust.com Act fast for immediate attention! ⏳📩 Roles and Responsibilities: Architecture & Infrastructure Design Architect scalable, resilient, and secure AI/ML infrastructure on AWS using services like EC2, SageMaker, Bedrock, VPC, RDS, DynamoDB, and CloudWatch. Develop Infrastructure as Code (IaC) using Terraform, and automate deployments with CI/CD pipelines. Optimize cost and performance of cloud resources used for AI workloads. AI Project Leadership Translate business objectives into actionable AI strategies and solutions. Oversee the entire AI lifecycle, from data ingestion, model training, and evaluation to deployment and monitoring. Drive roadmap planning, delivery timelines, and project success metrics. Model Development & Deployment Lead selection and development of AI/ML models, particularly for NLP, GenAI, and AIOps use cases. Implement frameworks for bias detection, explainability, and responsible AI. Enhance model performance through tuning and efficient resource utilization. Security & Compliance Ensure data privacy, security best practices, and compliance with IAM policies, encryption standards, and regulatory frameworks. Perform regular audits and vulnerability assessments to ensure system integrity. Team Leadership & Collaboration Lead and mentor a team of cloud engineers, ML practitioners, software developers, and data analysts. Promote cross-functional collaboration with business and technical stakeholders. Conduct technical reviews and ensure delivery of production-grade solutions. Monitoring & Maintenance Establish robust model monitoring, alerting, and feedback loops to detect drift and maintain model reliability. Ensure ongoing optimization of infrastructure and ML pipelines. Must-Have Skills: 10+ years of experience in IT with 4+ years in AI/ML leadership roles.
Strong hands-on experience with AWS services: EC2, SageMaker, Bedrock, RDS, VPC, DynamoDB, CloudWatch. Expertise in Python for ML development and automation. Solid understanding of Terraform, Docker, Git, and CI/CD pipelines. Proven track record of delivering AI/ML projects into production environments. Deep understanding of MLOps, model versioning, monitoring, and retraining pipelines. Experience implementing Responsible AI practices, including fairness, explainability, and bias mitigation. Knowledge of cloud security best practices and IAM role configuration. Excellent leadership, communication, and stakeholder management skills. Good-to-Have Skills: AWS certifications such as AWS Certified Machine Learning – Specialty or AWS Certified Solutions Architect. Familiarity with data privacy laws and frameworks (GDPR, HIPAA). Experience with AI governance and ethical AI frameworks. Expertise in cost optimization and performance tuning for AI on the cloud. Exposure to LangChain, LLMs, Kubeflow, or GCP-based AI services. Skills: Enterprise Architecture, Enterprise Architect, AWS, Python
Posted 1 day ago
0.0 - 2.0 years
0 Lacs
Hyderabad, Telangana
On-site
General information Country India State Telangana City Hyderabad Job ID 44779 Department Development Experience Level EXECUTIVE Employment Status FULL_TIME Workplace Type On-site Description & Requirements As an Associate Machine Learning Engineer / Data Scientist, you will contribute to the advancement of research projects in artificial intelligence and machine learning. Your responsibilities will encompass areas such as large language models, image processing, and sentiment analysis. You will work collaboratively with development partners to incorporate AI research into products such as Digital Assistant and Document Capture. Essential Duties: Model Development: Assist in designing and implementing AI/ML models. Contribute to building innovative models and integrating them into existing systems. Fine-tuning Models: Support the fine-tuning of pre-trained models for specific tasks and domains. Ensure models are optimized for accuracy and efficiency. Data Clean-up: Conduct data analysis and pre-processing to ensure the quality and relevance of training datasets. Implement data cleaning techniques. Natural Language Processing (NLP): Assist in the development of NLP tasks like sentiment analysis, text classification, and language understanding. Large Language Models (LLMs): Work with state-of-the-art LLMs and explore their applications in various domains. Support continuous improvement and adaptation of LLMs. Research and Innovation: Stay updated with advancements in AI/ML, NLP, and LLMs. Experiment with new approaches to solve complex problems and improve methodologies. Deployment and Monitoring: Collaborate with DevOps teams to deploy AI/ML models. Implement monitoring mechanisms to track model performance. Documentation: Maintain clear documentation of AI/ML processes, models, and improvements to ensure knowledge sharing and collaboration. 
Basic Qualifications: Educational Background: Bachelor's or Master's degree in Computer Science, Electrical Engineering, Mathematics, Statistics, or a related field. Specialization or coursework in AI, ML, Statistics & Probability, DL, Computer Vision, Signal Processing, or NLP/NLU is a plus. Experience: 1-2 years of total industry experience, with a minimum of 6 months of experience in ML & Data Science. Programming and Tools: Proficiency in programming languages commonly used in AI and ML, such as Python or R, and querying languages like SQL. Experience with cloud computing infrastructures like AWS SageMaker or Azure ML for implementing ML solutions is highly preferred. Experience with relevant libraries and frameworks, such as scikit-learn, Keras, TensorFlow, PyTorch, or NLTK, is a plus. Skills: Problem-solving and analytical skills; good oral and written communication skills. This role offers a great opportunity to work with cutting-edge AI/ML technologies and contribute to innovative projects in a collaborative environment. About Infor Infor is a global leader in business cloud software products for companies in industry-specific markets. Infor builds complete industry suites in the cloud and efficiently deploys technology that puts the user experience first, leverages data science, and integrates easily into existing systems. Over 60,000 organizations worldwide rely on Infor to help overcome market disruptions and achieve business-wide digital transformation. For more information visit www.infor.com Our Values At Infor, we strive for an environment that is founded on a business philosophy called Principle Based Management™ (PBM™) and eight Guiding Principles: integrity, stewardship & compliance, transformation, principled entrepreneurship, knowledge, humility, respect, and self-actualization. Increasing diversity is important to reflect our markets, customers, partners, and the communities we serve now and in the future. We have a relentless commitment to a culture based on PBM.
Informed by the principles that allow a free and open society to flourish, PBM™ prepares individuals to innovate, improve, and transform while fostering a healthy, growing organization that creates long-term value for its clients and supporters and fulfillment for its employees. Infor is an Equal Opportunity Employer. We are committed to creating a diverse and inclusive work environment. Infor does not discriminate against candidates or employees because of their sex, race, gender identity, disability, age, sexual orientation, religion, national origin, veteran status, or any other protected status under the law. If you require accommodation or assistance at any time during the application or selection processes, please submit a request by following the directions located in the FAQ section at the bottom of the infor.com/about/careers webpage.
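The sentiment-analysis duty in this role is normally handled by a learned model, but the task itself can be sketched with a toy lexicon-based scorer. The word lists below are hand-picked and purely hypothetical; a fine-tuned transformer replaces this logic in real work:

```python
# Hypothetical hand-picked lexicon; a trained classifier replaces this.
POSITIVE = {"great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "hate", "terrible", "slow"}

def sentiment(text):
    """Score a text by counting positive vs. negative words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

assert sentiment("I love this excellent product") == "positive"
assert sentiment("terrible and slow support") == "negative"
assert sentiment("it arrived on Tuesday") == "neutral"
```

The gap between this sketch and production systems (negation, sarcasm, domain vocabulary) is exactly what the fine-tuning responsibilities above address.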
Posted 2 days ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Want to be on a team full of results-driven individuals who are constantly seeking to innovate? Want to make an impact? At SailPoint, our Engineering team does just that. Our engineering is where high-quality professional engineering meets individual impact. Our team creates products that are built on a mature, cloud-native, event-driven microservices architecture hosted in AWS. SailPoint is seeking a Backend Software Engineer to help build a new cloud-based SaaS identity analytics product. We are looking for well-rounded backend or full stack engineers who are passionate about building and delivering reliable, scalable microservices and infrastructure for SaaS products. As one of the first members on the team, you will be integral in building this product and will be part of an agile team that is in startup mode. This is a unique opportunity to build something from scratch while having the backing of an organization that has the muscle to take it to market quickly, with a very satisfied customer base. Responsibilities Deliver efficient, maintainable data pipelines Deliver robust, bug-free code for Java-based microservices Build and maintain Data Analytics and Machine Learning features Produce designs and rough estimates, and implement features based on product requirements. Collaborate with peers on designs, code reviews, and testing. Produce unit and end-to-end tests to improve code quality and maximize code coverage for new and existing features. Responsible for on-call production support Requirements 4+ years of professional software development experience Strong Python, SQL, and Java experience Great communication skills BS in Computer Science or a related field Comprehensive experience with object-oriented analysis and design skills Experience with workflow engines Experience with Continuous Delivery and source control Experience with observability platforms for performance metrics collection and monitoring.
Preferred Strong experience in Airflow, Snowflake, and DBT Experience with ML pipelines (SageMaker) Experience with Continuous Delivery Experience working on a Big Data/Machine Learning product Compensation and benefits Experience a small-company atmosphere with big-company benefits. Recharge your batteries with a flexible vacation policy and paid holidays. Grow with us with both technical and career growth opportunities. Enjoy a healthy work-life balance with flexible hours, family-friendly company events, and charitable work. SailPoint is an equal opportunity employer and we welcome all qualified candidates to apply to join our team. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other category protected by applicable law. Alternative methods of applying for employment are available to individuals unable to submit an application through this site because of a disability. Contact hr@sailpoint.com or mail to 11120 Four Points Dr, Suite 100, Austin, TX 78726, to discuss reasonable accommodations.
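The "efficient, maintainable data pipelines" and unit-testing responsibilities above go hand in hand: a pipeline built from small pure functions can be tested stage by stage. A minimal sketch of that design, with hypothetical stage names, not any specific SailPoint pipeline:

```python
def parse(rows):
    """Split raw CSV-like lines into (user, event) tuples."""
    return [tuple(r.split(",")) for r in rows]

def dedupe(events):
    """Drop repeated events while preserving first-seen order."""
    seen, out = set(), []
    for e in events:
        if e not in seen:
            seen.add(e)
            out.append(e)
    return out

def pipeline(rows):
    # Each stage is a pure function, so it can be unit tested in isolation
    # and recomposed as requirements change.
    return dedupe(parse(rows))

raw = ["alice,login", "bob,login", "alice,login"]
assert pipeline(raw) == [("alice", "login"), ("bob", "login")]
```

The same decomposition maps directly onto orchestrators like Airflow, where each pure stage becomes a task.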
Posted 2 days ago
10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Full Stack AI Architect Exp: 10yrs+ Location: Hyderabad/Chennai Summary: We are seeking a highly skilled Full Stack AI Architect to join our team and work under the guidance of the Principal Architect to develop and deploy LLM Agents & Multi-Agent Frameworks. The ideal candidate will have 10+ years of experience in software engineering, with a strong focus on AI/ML. This role requires excellent problem-solving capabilities and a strong execution mindset to drive the development and deployment of AI-powered solutions. Responsibilities Collaborate with the Principal Architect to design and implement AI agents and multi-agent frameworks. Develop and maintain robust, scalable, and maintainable microservices architectures. Ensure seamless integration of AI agents with core systems and databases. Develop APIs and SDKs for internal and external consumption. Work closely with data scientists to fine-tune and optimize LLMs for specific tasks and domains. Implement MLOps practices, including CI/CD pipelines, model versioning, and experiment tracking. Design and implement comprehensive monitoring and observability solutions to track model performance, identify anomalies, and ensure system stability. Utilize containerization technologies such as Docker and Kubernetes for efficient deployment and scaling of applications. Leverage cloud platforms such as AWS, Azure, or GCP for infrastructure and services. Design and implement data pipelines for efficient data ingestion, transformation, and storage. Ensure data quality and security throughout the data lifecycle. Mentor junior engineers and foster a culture of innovation, collaboration, and continuous learning. Qualifications 10+ years of experience in software engineering with a strong focus on AI/ML. Proficiency in frontend frameworks like React, Angular, or Vue.js. Strong hands-on experience with backend technologies like Node.js, Python (with frameworks like Flask, Django, or FastAPI), or Java.
Experience with cloud platforms such as AWS, Azure, or GCP. Proven ability to design and implement complex, scalable, and maintainable architectures. Excellent problem-solving and analytical skills. Strong communication and collaboration skills. Passion for continuous learning and staying up to date with the latest advancements in AI/ML. End-to-end experience with at least one full AI stack on Azure, AWS, or GCP, including components such as Azure Machine Learning, AWS SageMaker, or Google AI Platform. Hands-on experience with agent frameworks like Autogen, AWS Agent Framework, LangGraph, etc. Experience with databases such as MongoDB, PostgreSQL, or similar technologies for efficient data management and integration. Illustrative Projects You May Have Worked On Successfully led the development and deployment of an AI-powered recommendation system using AWS SageMaker, integrating it with a Node.js backend and a React frontend. Designed and implemented a real-time fraud detection system on Azure, utilizing Azure Machine Learning for model training and Kubernetes for container orchestration. Developed a chatbot using Google AI Platform, integrating it with a Django backend and deploying it on GCP, ensuring seamless interaction with MongoDB for data storage.
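At the heart of the agent frameworks this role names (Autogen, LangGraph, and similar) sits a simple pattern: route a request to a tool, run it, return the result. The sketch below is a toy of that dispatch loop only; the hard-coded `route` function stands in for the LLM's tool-selection step, and none of this is a real framework's API:

```python
# Toy tool registry. The restricted eval is for illustration only.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "echo": lambda text: text,
}

def route(query):
    """Stand-in for the LLM's tool-selection step: a hard-coded router
    that picks a tool name and extracts its argument."""
    if query.startswith("math:"):
        return "calculator", query.split(":", 1)[1]
    return "echo", query

def agent(query):
    # One turn of the agent loop: select a tool, invoke it, return output.
    # Real frameworks iterate this with memory and LLM-driven planning.
    tool, arg = route(query)
    return TOOLS[tool](arg)

assert agent("math:2+3*4") == "14"
assert agent("hello") == "hello"
```

Multi-agent orchestration layers planning, memory, and message passing on top of this same select-invoke-observe cycle.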
Posted 2 days ago
3.0 years
0 Lacs
Sahibzada Ajit Singh Nagar, Punjab, India
On-site
Job Description: AI/ML Specialist We are looking for a highly skilled and experienced AI/ML Specialist to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential. Key Responsibilities Develop and maintain web applications using Django and Flask frameworks. Design and implement RESTful APIs using Django Rest Framework (DRF). Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation. Build and integrate APIs for AI/ML models into existing systems. Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn. Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases. Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization. Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker. Ensure the scalability, performance, and reliability of applications and deployed models. Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions. Write clean, maintainable, and efficient code following best practices. Conduct code reviews and provide constructive feedback to peers. Stay up-to-date with the latest industry trends and technologies, particularly in AI/ML. Required Skills And Qualifications Bachelor's degree in Computer Science, Engineering, or a related field. 3+ years of professional experience as an AI/ML Specialist. Proficient in Python with a strong understanding of its ecosystem. Extensive experience with Django and Flask frameworks.
Hands-on experience with AWS services for application deployment and management. Strong knowledge of Django Rest Framework (DRF) for building APIs. Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn. Experience with transformer architectures for NLP and advanced AI solutions. Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB). Familiarity with MLOps practices for managing the machine learning lifecycle. Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus. Excellent problem-solving skills and the ability to work independently and as part of a team. Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders. Skills: Artificial Intelligence (AI), pandas, Natural Language Processing (NLP), NumPy, Machine Learning (ML), TensorFlow, PyTorch, and Python
Posted 2 days ago
3.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Pre-Sales Engineer - Cloud (AWS) Location: Noida, India (Hybrid) Department: Sales/Engineering Reports To: Head of Sales Company Description Forasoftware, a trusted Microsoft and AWS partner, delivers comprehensive technology solutions that empower businesses across Ireland, the UK, India, and Türkiye. Our expertise spans Microsoft Azure, Microsoft 365, business intelligence, modern work, advanced security, and AWS, helping organizations modernize IT, enhance collaboration, and drive innovation. Forasoftware provides secure, scalable, and compliance-ready solutions tailored to your needs, ensuring you maximize your technology investments for growth and operational efficiency. Position Overview The Pre-Sales Engineer will be the technical bridge between our Sales Teams and their pre-sales customers. We are seeking a highly skilled and motivated Pre-Sales Engineer with expertise in Amazon Web Services (AWS) to join our dynamic team. The ideal candidate will have a strong technical background, excellent communication skills, and the ability to understand and address customer needs. Knowledge of Microsoft Azure products and relevant certifications will be considered a significant advantage. Experience Rich experience in delivering the highest-quality pre-sales support and solutions by bringing unique value to the table for customers Strong understanding and knowledge of AWS services: Compute: Amazon EC2, AWS Lambda, Amazon Elastic Kubernetes Service (EKS); Storage: Amazon S3, Amazon EFS, Amazon Elastic Block Store (EBS); Databases: Amazon RDS, Amazon DynamoDB, Amazon Aurora; Networking: Amazon VPC, Elastic Load Balancing (ELB), AWS Transit Gateway; AI/ML: Amazon SageMaker, AWS AI Services (e.g., Amazon Rekognition, Amazon Lex); Analytics: Amazon Redshift, Amazon Kinesis, AWS Glue; Monitoring: Amazon CloudWatch, AWS Trusted Advisor, AWS Systems Manager Technical Expertise: Provide in-depth technical knowledge and support for AWS services, including but not limited to EC2, S3, RDS, and Lambda.
Customer Engagement: Collaborate with the sales team to understand customer requirements and develop tailored solutions that address their needs. Solution Design: Design and present AWS-based solutions to customers, ensuring they meet both technical and business requirements. Demonstrations and POCs: Conduct product demonstrations and proof-of-concepts (POCs) to showcase the capabilities and benefits of AWS solutions. Documentation: Create and maintain technical documentation, including solution architectures, proposals, and presentations. Training and Enablement: Provide training and enablement sessions for customers and internal teams on AWS products and solutions. Competitive Analysis: Stay updated on industry trends, competitor products, and emerging technologies to provide insights and recommendations. Qualifications: Education: Bachelor's degree in Computer Science, Information Technology, or a related field. Experience: Minimum of 3-5 years of experience in a pre-sales or technical consulting role, with a focus on AWS. Certifications: AWS Certified Solutions Architect, AWS Certified DevOps Engineer, or other relevant AWS certifications. Bonus Points: Knowledge of Microsoft Azure products and certifications such as Azure Solutions Architect Expert or Azure DevOps Engineer Expert. Technical Skills: Proficiency in cloud architecture, networking, security, and automation. Experience with scripting languages such as Python or PowerShell is a plus. Soft Skills: Excellent communication, presentation, and interpersonal skills. Ability to work collaboratively in a team environment and manage multiple priorities.
Posted 2 days ago
0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Company Description Rayblaze Global Private Limited is a technology-driven solution provider that empowers businesses to streamline operations, enhance customer experiences, and achieve digital transformation. With a focus on tailored IT solutions since 2016, we offer web and mobile app development, custom CMS, e-commerce platforms, ERP systems, IoT solutions, SEO, and digital marketing. Our approach is guided by understanding our clients' business goals, challenges, and budget to deliver impactful solutions across industries. Role Description This is a full-time on-site role for a Machine Learning Engineer located in Trivandrum. The Machine Learning Engineer will be responsible for developing algorithms, implementing neural networks, applying statistics, and utilizing pattern recognition techniques. The role will involve working on projects that optimize and innovate various business processes using machine learning technologies. Qualifications Skills in Pattern Recognition and Neural Networks Proficiency in Computer Science and Algorithms Knowledge of Statistics and its application in machine learning Strong analytical and problem-solving skills Experience in developing machine learning models and algorithms Ability to work collaboratively in a team environment Master's degree in Computer Science, Statistics, or related field Job description * Collect, clean, and prepare data. * Build and optimize machine learning models. * Use tools like TensorFlow and Scikit-learn. * Conduct data analysis and statistical assessments. * Deploy models and ensure performance. * Proficient in Python, SQL, and machine learning concepts. * Experience with cloud platforms and large datasets. * Familiarity with AWS SageMaker (preferred, not mandatory).
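The first duty listed, collecting, cleaning, and preparing data, usually amounts to handling missing values and scaling features before training. A minimal dependency-free sketch of one such preparation step (the drop-and-scale policy here is an assumption for illustration; real pipelines often impute rather than drop):

```python
def clean_and_scale(values):
    """Drop missing entries (None) and min-max scale the rest to [0, 1],
    a common preparation step before model training."""
    present = [v for v in values if v is not None]
    lo, hi = min(present), max(present)
    span = hi - lo or 1  # guard against a constant column
    return [(v - lo) / span for v in present]

raw = [10, None, 20, 15, None, 30]
assert clean_and_scale(raw) == [0.0, 0.5, 0.25, 1.0]
```

Libraries like Scikit-learn provide the production equivalents of this (imputers and `MinMaxScaler`), fitted on training data and reapplied at inference time.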
Posted 2 days ago
7.0 years
3 - 9 Lacs
Bengaluru
On-site
Bangalore, Karnataka, India | Job ID 766481

Join our Team

About this Opportunity
The complexity of running and optimizing the next generation of wireless networks, such as 5G with distributed edge compute, will require Machine Learning (ML) and Artificial Intelligence (AI) technologies. Ericsson is setting up an AI Accelerator Hub in India to fast-track our strategy execution, using Machine Intelligence (MI) to drive thought leadership, automate, and transform Ericsson's offerings and operations. We collaborate with academia and industry to develop state-of-the-art solutions that simplify and automate processes, creating new value through data insights.

What you will do
As a Senior Data Scientist, you will apply your knowledge of data science and ML tools, backed by strong programming skills, to solve real-world problems.

Responsibilities:
1. Lead AI/ML features/capabilities in product/business areas
2. Define business metrics of success for AI/ML projects and translate them into model metrics
3. Lead end-to-end development and deployment of Generative AI solutions for enterprise use cases
4. Design and implement architectures for vector search, embedding models, and RAG systems
5. Fine-tune and evaluate large language models (LLMs) for domain-specific tasks
6. Collaborate with stakeholders to translate vague problems into concrete Generative AI use cases
7. Develop and deploy generative AI solutions using AWS services such as SageMaker, Bedrock, and other AWS AI tools; provide technical expertise and guidance on implementing GenAI models and best practices within the AWS ecosystem
8. Develop secure, scalable, and production-grade AI pipelines
9. Ensure ethical and responsible AI practices
10. Mentor junior team members in GenAI frameworks and best practices
11. Stay current with research and industry trends in Generative AI and apply cutting-edge techniques
12. Contribute to internal AI governance, tooling frameworks, and reusable components
13. Work with large datasets, including petabytes of 4G/5G network and IoT data
14. Propose, select, and test predictive models and other ML systems
15. Define visualization and dashboarding requirements with business stakeholders
16. Build proof-of-concepts for business opportunities using AI/ML
17. Lead functional and technical analysis to define AI/ML-driven business opportunities
18. Work with multiple data sources and apply the right feature engineering to AI models
19. Lead studies and creative usage of new/existing data sources

What you will bring
Required experience: minimum 7 years
1. Bachelor's/Master's/Ph.D. in Computer Science, Data Science, AI, ML, Electrical Engineering, or related disciplines from reputed institutes
2. 3+ years of applied ML/AI production-level experience
3. Strong programming skills (R/Python)
4. Proven ability to lead AI/ML projects end-to-end
5. Strong grounding in mathematics, probability, and statistics
6. Hands-on experience with data analysis, visualization techniques, and ML frameworks (Python, R, H2O, Keras, TensorFlow, Spark ML)
7. Experience with semi-structured/unstructured data for AI/ML models
8. Strong understanding of building AI models using deep neural networks
9. Experience with Big Data technologies (Hadoop, Cassandra)
10. Ability to source and combine data from multiple sources for ML models

Preferred Qualifications:
1. Good communication skills in English
2. MI/ML MOOC certifications, a plus
3. Domain knowledge in Telecommunication/IoT, a plus
4. Experience with data visualization and dashboard creation, a plus
5. Knowledge of cognitive models, a plus
6. Experience in partnering and collaborative co-creation in a global matrix organization

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible, and to build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
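The vector-search and RAG responsibility in the Ericsson posting above reduces, at its core, to nearest-neighbour retrieval over embedding vectors. A minimal, library-free sketch of just the retrieval step (the "embeddings" here are hypothetical toy vectors, not the output of a real embedding model):

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=2):
    # Rank documents by similarity to the query embedding and
    # return the top-k ids, i.e. the passages fed into a RAG prompt.
    ranked = sorted(corpus.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy corpus: in production these vectors come from an embedding
# model and live in a vector database, not an in-memory dict.
corpus = {
    "doc_5g":  [0.9, 0.1, 0.0],
    "doc_iot": [0.1, 0.9, 0.1],
    "doc_hr":  [0.0, 0.1, 0.9],
}
top = retrieve([0.8, 0.2, 0.0], corpus, k=2)
```

A production system swaps the dict for an approximate-nearest-neighbour index and appends the retrieved passages to the LLM prompt, but the ranking logic is the same.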
Posted 2 days ago
0 years
0 Lacs
Pune, Maharashtra, India
Remote
Software Engineer - Privacy and AI Governance

Role purpose
As a Senior Software Engineer (Privacy and AI Governance) you will help design, develop, and operationalize the technical foundations of Accelya's Privacy by Design and AI governance programs. You will report directly to Accelya's Data Protection Officer/AI Governance lead, supporting the DPO in enforcing data protection, AI accountability, and privacy-by-design principles. This role will also have a dotted line to Accelya's Chief Technology Officer to ensure alignment with technical standards, priorities, and DevOps workflows. This individual will develop effective relationships within the technology division, working closely with technology leadership and embedding controls effectively without slowing innovation.

As Senior Software Engineer (Privacy and AI Governance) you will spearhead efforts to ensure that privacy is embedded into our software products' DNA and that Accelya's AI initiatives meet the requirements of applicable regulations (e.g., GDPR, EU AI Act), AI best practice (such as the NIST AI RMF), Accelya trust principles, and customer expectations.

Duties & Responsibilities
● Use automated data discovery tools to identify personal or sensitive data flows.
● Design and develop internal tools, APIs, and automation scripts to support:
  - Data privacy workflows (e.g. DSARs, data lineage, consent management)
  - AI/ML governance frameworks (e.g. model cards, audit logging, explainability checks)
● Review technical controls for data minimization, purpose limitation, access control, and retention policies.
● Build integrations with privacy and cloud compliance platforms (e.g. OneTrust, AWS Macie, and SageMaker governance tools).
● Collaborate with the AI/ML teams to establish responsible AI development patterns, including bias detection, transparency, and model lifecycle governance.
● Contribute to privacy impact assessments (PIAs) and AI risk assessments by providing technical insights.
● Create dashboards and monitoring systems to flag potential policy or governance violations in pipelines.
● Support the DPO with technical implementation of GDPR, CCPA, and other data protection regulations.
● Collaborate with legal, privacy, and engineering teams to prioritize risks and translate findings into clear, actionable remediation plans.

Knowledge, Experience & Skills

Must-Haves:
● Proven software engineering experience, ideally in backend or systems engineering roles
● Strong programming skills (e.g. Python, Java, or TypeScript)
● Familiarity with data privacy and protection concepts (e.g., pseudonymization, access logging, encryption)
● Understanding of the AI/ML lifecycle
● Experience working with cloud environments (especially AWS)
● Ability to translate legal/policy requirements into technical designs

Nice-to-Haves:
● Experience with privacy or GRC tools (e.g., OneTrust or BigID)
● Knowledge of machine learning fairness, explainability, and AI risk frameworks
● Exposure to data governance frameworks (e.g., NIST AI RMF, ISO/IEC 42001)
● Prior work with privacy-enhancing technologies (PETs), e.g., differential privacy or federated learning

What do we offer?
● Open culture and a challenging opportunity to satisfy intellectual needs
● Flexible working hours
● Smart working: hybrid remote/office working environment
● Work-life balance
● Excellent, dynamic, and multicultural environment

About Accelya
Accelya is a leading global software provider to the airline industry, powering 200+ airlines with an open, modular software platform that enables innovative airlines to drive growth, delight their customers, and take control of their retailing. Owned by Vista Equity Partners' long-term perennial fund and with 2K+ employees based around 10 global offices, Accelya is trusted by industry leaders to deliver now and deliver for the future. The company's passenger, cargo, and industry platforms support airline retailing from offer to settlement, both above and below the wing. Accelya is proud to deliver leading-edge technologies to our customers, including through our partnership with AWS and through the pioneering NDC expertise of our Global Product teams. We are proud to enable innovation-led growth for the airline industry and put control back in the hands of airlines.

For more information, please visit www.accelya.com
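Two of the duties in the privacy-engineering posting above, automated discovery of personal data and pseudonymization, can be sketched with the standard library alone. The regex patterns and salt below are deliberately simplified placeholders for illustration, not a production PII scanner (real deployments use tools like the AWS Macie or OneTrust integrations the posting names):

```python
import hashlib
import re

# Hypothetical, deliberately naive PII patterns for illustration only.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d{2}[\s-]?\d{4}[\s-]?\d{4,6}"),
}

def discover_pii(record: dict) -> dict:
    # Flag which fields of a record match a known PII pattern.
    hits = {}
    for field, value in record.items():
        for label, pattern in PII_PATTERNS.items():
            if isinstance(value, str) and pattern.search(value):
                hits.setdefault(field, []).append(label)
    return hits

def pseudonymize(value: str, salt: str = "rotate-me") -> str:
    # Salted SHA-256 pseudonym: stable (so records can still be
    # joined) but not reversible without the salt. This is GDPR-style
    # pseudonymization, not anonymization.
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

record = {"name": "A. User", "contact": "a.user@example.com"}
found = discover_pii(record)
record["contact"] = pseudonymize(record["contact"])
```

The discover-then-transform shape is the core of the DSAR and data-lineage workflows the posting describes: first locate personal data flows, then apply a documented, reversible-only-with-key transformation.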
Posted 2 days ago
4.5 - 6.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Roles & Responsibilities
● Hands-on experience with real-time ML models/projects
● Coding in Python; machine learning, basic SQL, Git, MS Excel
● Experience using IDEs such as Jupyter Notebook, Spyder, and PyCharm
● Hands-on with AWS services such as S3, EC2, SageMaker, and Step Functions
● Engage with clients/consultants to understand requirements
● Take ownership of delivering ML models with high-precision outcomes
● Accountable for high-quality and timely completion of specified work deliverables
● Write code that is well structured, documented, and compute-efficient

Experience
4.5-6 years

Skills
Primary Skill: AI/ML Development
Sub Skill(s): AI/ML Development
Additional Skill(s): AI/ML Development, TensorFlow, NLP, PyTorch

About The Company
Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. We engineer business outcomes for Fortune 500 companies and digital natives in the technology, healthcare, insurance, travel, telecom, and retail & CPG industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. We accelerate experience-led transformation in the delivery of digital platforms. Infogain is also a Microsoft (NASDAQ: MSFT) Gold Partner and Azure Expert Managed Services Provider (MSP). Infogain, an Apax Funds portfolio company, has offices in California, Washington, Texas, the UK, the UAE, and Singapore, with delivery centers in Seattle, Houston, Austin, Kraków, Noida, Gurgaon, Mumbai, Pune, and Bengaluru.
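The Step Functions skill listed above refers to AWS's workflow orchestrator, where a pipeline is declared as a JSON state machine in the Amazon States Language. A minimal sketch of such a definition, chaining a preprocessing step into a SageMaker training job; the ARNs are hypothetical placeholders, and deploying this would require real resources in an AWS account:

```python
import json

# Hypothetical ARNs for illustration; real ones come from your account.
PREPROCESS_ARN = "arn:aws:lambda:region:acct:function:preprocess"
TRAIN_ARN = "arn:aws:states:::sagemaker:createTrainingJob.sync"

# Amazon States Language definition: Preprocess -> Train, then stop.
state_machine = {
    "Comment": "Minimal ML pipeline sketch",
    "StartAt": "Preprocess",
    "States": {
        "Preprocess": {
            "Type": "Task",
            "Resource": PREPROCESS_ARN,
            "Next": "Train",
        },
        "Train": {
            "Type": "Task",
            "Resource": TRAIN_ARN,
            "End": True,
        },
    },
}
definition = json.dumps(state_machine)
```

The serialized `definition` is what gets passed when creating the state machine; each Task state hands its JSON output to the next state, which is how data flows between the S3/SageMaker steps the posting mentions.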
Posted 2 days ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description

RESPONSIBILITIES
● Design and implement CI/CD pipelines for AI and ML model training, evaluation, and RAG system deployment (including LLMs, vector DBs, embedding and reranking models, governance and observability systems, and guardrails).
● Provision and manage AI infrastructure across cloud hyperscalers (AWS/GCP) using infrastructure-as-code tools (strong preference for Terraform).
● Maintain containerized environments (Docker, Kubernetes) optimized for GPU workloads and distributed compute.
● Support vector database, feature store, and embedding store deployments (e.g., pgVector, Pinecone, Redis, Featureform, MongoDB Atlas).
● Monitor and optimize performance, availability, and cost of AI workloads using observability tools (e.g., Prometheus, Grafana, Datadog, or managed cloud offerings).
● Collaborate with data scientists, AI/ML engineers, and other members of the platform team to ensure smooth transitions from experimentation to production.
● Implement security best practices, including secrets management, model access control, data encryption, and audit logging for AI pipelines.
● Help support the deployment and orchestration of agentic AI systems (LangChain, LangGraph, CrewAI, Copilot Studio, AgentSpace, etc.).

Must-Haves:
● 4+ years of DevOps, MLOps, or infrastructure engineering experience, preferably with 2+ years in AI/ML environments.
● Hands-on experience with cloud-native services (AWS Bedrock/SageMaker, GCP Vertex AI, or Azure ML) and GPU infrastructure management.
● Strong skills in CI/CD tools (GitHub Actions, ArgoCD, Jenkins) and configuration management (Ansible, Helm, etc.).
● Proficiency in scripting languages such as Python and Bash (Go or similar is a nice plus).
● Experience with monitoring, logging, and alerting systems for AI/ML workloads.
● Deep understanding of Kubernetes and container lifecycle management.

Bonus Attributes:
● Exposure to MLOps tooling such as MLflow, Kubeflow, SageMaker Pipelines, or Vertex Pipelines.
● Familiarity with prompt engineering, model fine-tuning, and inference serving.
● Experience with secure AI deployment and compliance frameworks.
● Knowledge of model versioning, drift detection, and scalable rollback strategies.

Abilities:
● Ability to work with a high level of initiative, accuracy, and attention to detail.
● Ability to prioritize multiple assignments effectively.
● Ability to meet established deadlines.
● Ability to interact successfully, efficiently, and professionally with staff and customers.
● Excellent organization skills.
● Critical thinking ability, ranging from moderately to highly complex problems.
● Flexibility in meeting the business needs of the customer and the company.
● Ability to work creatively and independently with latitude and minimal supervision.
● Ability to apply experience and judgment in accomplishing assigned goals.
● Experience in navigating organizational structure.
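The drift detection mentioned in the bonus attributes above has a simple core: compare a live window of a feature or prediction against the training-time baseline and flag when the distribution shifts. A stdlib-only sketch using a mean-shift test (the threshold, window sizes, and data are illustrative choices, not a prescribed method; production systems typically use richer tests such as PSI or KS):

```python
from statistics import mean, stdev

def drifted(baseline, window, z_threshold=3.0):
    # Flag drift when the live window's mean sits more than
    # z_threshold baseline standard deviations from the baseline mean.
    mu = mean(baseline)
    sigma = stdev(baseline)
    z = abs(mean(window) - mu) / sigma
    return z > z_threshold

# Training-time baseline for some monitored feature.
baseline = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0]

ok_window = [10.05, 9.95, 10.1]   # consistent with training
bad_window = [13.0, 12.8, 13.2]   # clear upward shift
```

In the pipelines this posting describes, a `drifted(...) == True` signal is what would trigger the alerting and scalable-rollback machinery: route traffic back to the previous model version while the new one is investigated.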
Posted 2 days ago