3.0 years
2 - 7 Lacs
Chennai
On-site
An Amazing Career Opportunity for an AI/ML Engineer
Location: Chennai, India (Hybrid) | Job ID: 39582

Position Summary
A rewarding career at HID Global beckons you! We are looking for an AI/ML Engineer who is responsible for designing, developing, and deploying advanced AI/ML solutions to solve complex business challenges. This role requires expertise in machine learning, deep learning, MLOps, and AI model optimization, with a focus on building scalable, high-performance AI systems. As an AI/ML Engineer, you will work closely with data engineers, software developers, and business stakeholders to integrate AI-driven insights into real-world applications. You will be responsible for model development, system architecture, cloud deployment, and ensuring responsible AI adoption. HID Global is the trusted source for innovative products, solutions and services that help millions of customers around the globe create, manage and use secure identities.

Who are we?
HID powers the trusted identities of the world's people, places, and things, allowing people to transact safely, work productively and travel freely. We are a high-tech software company headquartered in Austin, TX, with over 4,000 employees worldwide. Check us out: www.hidglobal.com and https://youtu.be/23km5H4K9Eo LinkedIn: www.linkedin.com/company/hidglobal/mycompany/

About HID Global, Chennai
HID Global powers the trusted identities of the world's people, places and things. We make it possible for people to transact safely, work productively and travel freely. Our trusted identity solutions give people secure and convenient access to physical and digital places and connect things that can be accurately identified, verified and tracked digitally. Millions of people around the world use HID products and services to navigate their everyday lives, and over 2 billion things are connected through HID technology. We work with governments, educational institutions, hospitals, financial institutions, industrial businesses and some of the most innovative companies on the planet. Headquartered in Austin, Texas, HID Global has over 3,000 employees worldwide and operates international offices that support more than 100 countries. HID Global® is an ASSA ABLOY Group brand. For more information, visit www.hidglobal.com. HID Global is the trusted source for secure identity solutions for millions of customers and users around the world. In India, we have two engineering centres (Bangalore and Chennai) with over 200 engineering staff. The Global Engineering team is based in Chennai, and one of the Business Unit Engineering teams is based in Bangalore.

Physical Access Control Solutions (PACS)
HID's Physical Access Control Solutions Business Area: the HID PACS Business Unit focuses on the growth of new and existing clients, leveraging the latest card and reader technologies to solve the security challenges of our clients. Other areas of focus include authentication, card subsystems, card encoding, biometrics, location services and all other aspects of a physical access control infrastructure.

Qualifications
To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
Roles & Responsibilities:
- Design, develop, and deploy robust and scalable AI/ML models in production environments.
- Collaborate with business stakeholders to identify AI/ML opportunities and define measurable success metrics.
- Design and build Retrieval-Augmented Generation (RAG) pipelines integrating vector stores, semantic search, and document parsing for domain-specific knowledge retrieval (a minimal illustrative sketch follows this listing).
- Integrate Multimodal Conversational AI platforms (MCP) spanning voice, vision, and text to deliver rich user interactions.
- Drive innovation through PoCs, benchmarking, and experiments with emerging models and architectures.
- Optimize models for performance, latency, and scalability.
- Build data pipelines and workflows to support model training and evaluation.
- Conduct research and experimentation on state-of-the-art techniques (deep learning, NLP, time series, computer vision).
- Partner with MLOps and DevOps teams to implement best practices in model monitoring, versioning, and re-training.
- Lead code reviews and architecture discussions, and mentor junior and peer engineers.
- Architect and implement end-to-end AI/ML pipelines, ensuring scalability and efficiency.
- Deploy models in cloud-based (AWS, Azure, GCP) or on-premises environments using tools like Docker, Kubernetes, TensorFlow Serving, or ONNX.
- Ensure data integrity, quality, and preprocessing best practices for AI/ML model development.
- Ensure compliance with AI ethics guidelines, data privacy laws (GDPR, CCPA), and corporate AI governance.
- Work closely with data engineers, software developers, and domain experts to integrate AI into existing systems.
- Conduct AI/ML training sessions for internal teams to improve AI literacy within the organization.
- Bring a strong analytical and problem-solving mindset.

Technical Requirements:
- Strong expertise in AI/ML engineering and software development.
- Strong experience with RAG architecture and vector databases.
- Proficiency in Python and hands-on experience with ML frameworks (TensorFlow, PyTorch, scikit-learn, XGBoost, etc.).
- Familiarity with MCPs like Google Dialogflow, Rasa, Amazon Lex, or custom-built agents using LLM orchestration.
- Cloud-based AI/ML experience (AWS SageMaker, Azure ML, GCP Vertex AI, etc.).
- Solid understanding of the AI/ML life cycle: data preprocessing, feature engineering, model selection, training, validation, and deployment.
- Experience with production-grade ML systems (model serving, APIs, pipelines).
- Familiarity with data engineering tools (Spark, Kafka, Airflow, etc.).
- Strong knowledge of statistical modeling, NLP, computer vision, recommendation systems, anomaly detection, and time series forecasting.
- Hands-on software engineering with knowledge of version control, testing, and CI/CD.
- Hands-on experience deploying ML models in production using Docker, Kubernetes, TensorFlow Serving, ONNX, and MLflow.
- Experience in MLOps and CI/CD for ML pipelines, including monitoring, retraining, and model drift detection.
- Proficiency in scaling AI solutions in cloud environments (AWS, Azure, GCP).
- Experience in data preprocessing, feature engineering, and dimensionality reduction.
- Exposure to data privacy, compliance, and secure ML practices.

Education and/or Experience:
- Bachelor's or master's degree in Computer Science, Information Technology, or AI/ML/Data Science.
- 3+ years of hands-on experience in AI/ML development, deployment, and optimization.
- Experience in leading AI/ML teams and mentoring junior engineers.

Why apply?
Empowerment: You'll work as part of a global team in a flexible work environment, learning and enhancing your expertise.
We welcome an opportunity to meet you and learn about your unique talents, skills, and experiences. You don't need to check all the boxes. If you have most of the skills and experience, we want you to apply.
Innovation: You embrace challenges and want to drive change. We are open to ideas, including flexible work arrangements, job sharing or part-time job seekers.
Integrity: You are results-oriented, reliable, and straightforward and value being treated accordingly. We want all our employees to be themselves, to feel appreciated and accepted.
This opportunity may be open to flexible working arrangements. HID is an Equal Opportunity/Affirmative Action Employer – Minority/Female/Disability/Veteran/Gender Identity/Sexual Orientation.
We make it easier for people to get where they want to go! On an average day, think of how many times you tap, twist, tag, push or swipe to get access, find information, connect with others or track something. HID technology is behind billions of interactions, in more than 100 countries. We help you create a verified, trusted identity that can get you where you need to go – without having to think about it. When you join our HID team, you'll also be part of the ASSA ABLOY Group, the global leader in access solutions. You'll have 63,000 colleagues in more than 70 different countries. We empower our people to build their career around their aspirations and our ambitions – supporting them with regular feedback, training, and development opportunities. Our colleagues think broadly about where they can make the most impact, and we encourage them to grow their role locally, regionally, or even internationally. As we welcome new people on board, it's important to us to have diverse, inclusive teams, and we value different perspectives and experiences. #LI-HIDGlobal
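The HID listing above asks for Retrieval-Augmented Generation pipelines built on vector stores and semantic search. As a rough, illustrative sketch only (not HID's actual stack), the snippet below embeds a few placeholder documents with a sentence-transformers model and retrieves the closest matches from a FAISS index; the model name, documents, and query are all invented for the example.

```python
# Minimal RAG-style retrieval sketch: embed documents, index them, and
# fetch the most relevant passages for a query. Illustrative only.
import numpy as np
import faiss  # pip install faiss-cpu
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

documents = [
    "Badge readers support OSDP and Wiegand protocols.",
    "Mobile credentials are provisioned through the cloud portal.",
    "Visitor passes expire automatically after 24 hours.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
doc_vectors = model.encode(documents, normalize_embeddings=True)

index = faiss.IndexFlatIP(doc_vectors.shape[1])  # inner product == cosine on normalized vectors
index.add(np.asarray(doc_vectors, dtype="float32"))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = model.encode([query], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q, dtype="float32"), k)
    return [documents[i] for i in ids[0]]

print(retrieve("How long is a visitor badge valid?"))
```

In a real pipeline the retrieved passages would then be passed to an LLM as context, with document parsing and chunking handled upstream.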
Posted 3 weeks ago
6.0 years
60 - 65 Lacs
India
Remote
Experience : 6.00 + years Salary : INR 6000000-6500000 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Permanent position(Payroll and Compliance to be managed by: Crop.Photo) (*Note: This is a requirement for one of Uplers' client - Crop.Photo) What do you need for this opportunity? Must have skills required: MLOps, Python, Scalability, VectorDBs, FAISS, Pinecone/ Weaviate/ FAISS/ ChromaDB, Elasticsearch, Open search Crop.Photo is Looking for: Technical Lead for Evolphin AI-Driven MAM At Evolphin, we build powerful media asset management solutions used by some of the world’s largest broadcasters, creative agencies, and global brands. Our flagship platform, Zoom, helps teams manage high-volume media workflows—from ingest to archive—with precision, performance, and AI-powered search. We’re now entering a major modernization phase, and we’re looking for an exceptional Technical Lead to own and drive the next-generation database layer powering Evolphin Zoom. This is a rare opportunity to take a critical backend system that serves high-throughput media operations and evolve it to meet the scale, speed, and intelligence today’s content teams demand. What you’ll own Leading the re-architecture of Zoom’s database foundation with a focus on scalability, query performance, and vector-based search support Replacing or refactoring our current in-house object store and metadata database to a modern, high-performance elastic solution Collaborating closely with our core platform engineers and AI/search teams to ensure seamless integration and zero disruption to existing media workflows Designing an extensible system that supports object-style relationships across millions of assets, including LLM-generated digital asset summaries, time-coded video metadata, AI generated tags, and semantic vectors Driving end-to-end implementation: schema design, migration tooling, performance benchmarking, and production rollout—all with aggressive timelines Skills & Experience We Expect We’re looking for candidates with 7–10 years of hands-on engineering experience, including 3+ years in a technical leadership role. 
Your experience should span the following core areas: System Design & Architecture (3–4 yrs) Strong hands-on experience with the Java/JVM stack (GC tuning), Python in production environments Led system-level design for scalable, modular AWS microservices architectures Designed high-throughput, low-latency media pipelines capable of scaling to billions of media records Familiar with multitenant SaaS patterns, service decomposition, and elastic scale-out/in models Deep understanding of infrastructure observability, failure handling, and graceful degradation Database & Metadata Layer Design (3–5 yrs) Experience redesigning or implementing object-style metadata stores used in MAM/DAM systems Strong grasp of schema-less models for asset relationships, time-coded metadata, and versioned updates Practical experience with DynamoDB, Aurora, PostgreSQL, or similar high-scale databases Comfortable evaluating trade-offs between memory, query latency, and write throughput Semantic Search & Vectors (1–3 yrs) Implemented vector search using systems like Weaviate, Pinecone, Qdrant, or Faiss Able to design hybrid (structured + semantic) search pipelines for similarity and natural language use cases Experience tuning vector indexers for performance, memory footprint, and recall Familiar with the basics of embedding generation pipelines and how they are used for semantic search and similarity-based retrieval Worked with MLOps teams to deploy ML inference services (e.g., FastAPI/Docker + GPU-based EC2 or SageMaker endpoints) Understands the limitations of recognition models (e.g., OCR, face/object detection, logo recognition), even if not directly building them Media Asset Workflow (2–4 yrs) Deep familiarity with broadcast and OTT formats: MXF, IMF, DNxHD, ProRes, H.264, HEVC Understanding of proxy workflows in video post-production Experience with digital asset lifecycle: ingest, AI metadata enrichment, media transformation, S3 cloud archiving Hands-on experience working with time-coded metadata (e.g., subtitles, AI tags, shot changes) management in media archives Cloud-Native Architecture (AWS) (3–5 yrs) Strong hands-on experience with ECS, Fargate, Lambda, S3, DynamoDB, Aurora, SQS, EventBridge Experience building serverless or service-based compute models for elastic scaling Familiarity with managing multi-region deployments, failover, and IAM configuration Built cloud-native CI/CD deployment pipelines with event-driven microservices and queue-based workflows Frontend Collaboration & React App Integration (2–3 yrs) Worked closely with React-based frontend teams, especially on desktop-style web applications Familiar with component-based design systems, REST/GraphQL API integration, and optimizing media-heavy UI workflows Able to guide frontend teams on data modeling, caching, and efficient rendering of large asset libraries Experience with Electron for desktop apps How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. 
Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
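The Evolphin listing above calls for hybrid (structured + semantic) search over asset metadata. The toy sketch below is one way to picture that shape: a cheap metadata filter followed by cosine-similarity ranking. The asset records are made up, and random vectors stand in for real embeddings and a real vector index such as Weaviate or FAISS.

```python
# Toy hybrid search: filter assets on structured metadata, then rank the
# survivors by cosine similarity between query and asset embeddings.
# Embeddings here are random stand-ins for a real model; weights are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

assets = [
    {"id": "clip-001", "codec": "ProRes", "tags": ["interview", "studio"]},
    {"id": "clip-002", "codec": "H.264", "tags": ["drone", "coastline"]},
    {"id": "clip-003", "codec": "ProRes", "tags": ["press conference"]},
]
embeddings = {a["id"]: rng.normal(size=384) for a in assets}  # placeholder vectors

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def hybrid_search(query_vec: np.ndarray, codec: str, top_k: int = 2):
    # Structured filter first (cheap), semantic ranking second.
    candidates = [a for a in assets if a["codec"] == codec]
    scored = [(cosine(query_vec, embeddings[a["id"]]), a) for a in candidates]
    return sorted(scored, key=lambda s: s[0], reverse=True)[:top_k]

query = rng.normal(size=384)  # would come from an embedding model in practice
for score, asset in hybrid_search(query, codec="ProRes"):
    print(f"{asset['id']}  score={score:.3f}")
```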
Posted 3 weeks ago
3.0 years
16 - 20 Lacs
India
Remote
Experience: 3.00+ years | Salary: INR 1600000-2000000 / year (based on experience) | Expected Notice Period: 15 Days | Shift: (GMT+05:30) Asia/Kolkata (IST) | Opportunity Type: Remote | Placement Type: Full-Time Permanent position (payroll and compliance to be managed by SenseCloud)
(*Note: This is a requirement for one of Uplers' clients - A Seed-Funded B2B SaaS Company – Procurement Analytics)

What do you need for this opportunity?
Must-have skills required: open-source, Palantir, privacy techniques, RAG, Snowflake, LangChain, LLM, MLOps, AWS, Docker, Python

A Seed-Funded B2B SaaS Company – Procurement Analytics is looking for:
Join the team revolutionizing procurement analytics at SenseCloud. Imagine working at a company where you get the best of all worlds: the fast-paced execution of a startup and the guidance of leaders who've built things that actually work at scale. We're not just rethinking how procurement analytics is done — we're redefining it. At SenseCloud, we envision a future where procurement data management and analytics is as intuitive as your favorite app. No more complex spreadsheets, no more waiting in line to get IT and analytics teams' attention, no more clunky dashboards — just real-time insights, smooth automation, and a frictionless experience that helps companies make fast decisions. If you're ready to help us build the future of procurement analytics, come join the ride. You'll work alongside the brightest minds in the industry, learn cutting-edge technologies, and be empowered to take on challenges that will stretch your skills and your thinking.

About The Role
We're looking for an AI Engineer who can design, implement, and productionize LLM-powered agents that solve real-world enterprise problems — think automated research assistants, data-driven copilots, and workflow optimizers. You'll own projects end-to-end: scoping, prototyping, evaluating, and deploying scalable agent pipelines that integrate seamlessly with our customers' ecosystems.

What you'll do:
- Architect and build multi-agent systems using frameworks such as LangChain, LangGraph, AutoGen, Google ADK, Palantir Foundry, or custom orchestration layers.
- Fine-tune and prompt-engineer LLMs (OpenAI, Anthropic, open-source) for retrieval-augmented generation (RAG), reasoning, and tool use.
- Integrate agents with enterprise data sources (APIs, SQL/NoSQL DBs, vector stores like Pinecone, Elasticsearch) and downstream applications (Snowflake, ServiceNow, custom APIs).
- Own the MLOps lifecycle: containerize (Docker), automate CI/CD, monitor drift and hallucinations, and set up guardrails, observability, and rollback strategies.
- Collaborate cross-functionally with product, UX, and customer teams to translate requirements into robust agent capabilities and user-facing features.
- Benchmark and iterate on latency, cost, and accuracy; design experiments, run A/B tests, and present findings to stakeholders.
- Stay current with the rapidly evolving GenAI landscape and champion best practices in ethical AI, data privacy, and security.

Must-Have Technical Skills
- 3–5 years of software engineering or ML experience in production environments.
- Strong Python skills (async I/O, typing, testing); familiarity with TypeScript/Node or Go is a bonus.
- Hands-on with at least one LLM/agent framework or platform (LangChain, LangGraph, Google ADK, LlamaIndex, Emma, etc.).
- Solid grasp of vector databases (Pinecone, Weaviate, FAISS) and embedding models.
Experience building and securing REST/GraphQL APIs and microservices. Cloud skills on AWS, Azure, or GCP (serverless, IAM, networking, cost optimization). Proficient with Git, Docker, CI/CD (GitHub Actions, GitLab CI, or similar). Knowledge of ML Ops tooling (Kubeflow, MLflow, SageMaker, Vertex AI) or equivalent custom pipelines. Core Soft Skills Product mindset: translate ambiguous requirements into clear deliverables and user value. Communication: explain complex AI concepts to both engineers and executives; write crisp documentation. Collaboration & ownership: thrive in cross-disciplinary teams, proactively unblock yourself and others. Bias for action: experiment quickly, measure, iterate—without sacrificing quality or security. Growth attitude: stay curious, seek feedback, mentor juniors, and adapt to the fast-moving GenAI space. Nice-to-Haves Experience with RAG pipelines over enterprise knowledge bases (SharePoint, Confluence, Snowflake). Hands-on with MCP servers/clients, MCP Toolbox for Databases, or similar gateway patterns. Familiarity with LLM evaluation frameworks (LangSmith, TruLens, Ragas). Familiarity with Palantir/Foundry. Knowledge of privacy-enhancing techniques (data anonymization, differential privacy). Prior work on conversational UX, prompt marketplaces, or agent simulators. Contributions to open-source AI projects or published research. Why Join Us? Direct impact on products used by Fortune 500 teams. Work with cutting-edge models and shape best practices for enterprise AI agents. Collaborative culture that values experimentation, continuous learning, and work–life balance. Competitive salary, equity, remote-first flexibility, and professional development budget. How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
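The agent work described above pairs retrieval with LLM generation. A minimal sketch of the generation half follows, assuming the OpenAI Python client (v1) and an OPENAI_API_KEY in the environment; the model id, retrieved snippets, and question are placeholders rather than anything from the posting.

```python
# Sketch of the generation half of a RAG agent: stuff retrieved snippets into
# a prompt and ask an LLM to answer only from that context. Illustrative only;
# the model name and snippets are placeholders, and OPENAI_API_KEY must be set.
from openai import OpenAI  # pip install openai

client = OpenAI()

retrieved_snippets = [
    "PO-2231 was approved on 2024-03-02 for USD 14,500.",
    "Supplier Acme Ltd. has a 30-day payment term.",
]

def answer(question: str) -> str:
    context = "\n".join(f"- {s}" for s in retrieved_snippets)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model id
        messages=[
            {"role": "system",
             "content": "Answer using only the provided context. "
                        "If the answer is not in the context, say you don't know."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

print(answer("What are Acme's payment terms?"))
```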
Posted 3 weeks ago
3.0 years
0 Lacs
India
Remote
AWS Data Engineer
Location: Remote (India) | Experience: 3+ Years | Employment Type: Full-Time

About the Role:
We are seeking a talented AWS Data Engineer with at least 3 years of hands-on experience in building and managing data pipelines using AWS services. This role involves working with large-scale data, integrating multiple data sources (including sensor/IoT data), and enabling efficient, secure, and analytics-ready solutions. Experience in the energy industry or working with time-series/sensor data is a strong plus.

Key Responsibilities:
- Build and maintain scalable ETL/ELT data pipelines using AWS Glue, Redshift, Lambda, EMR, S3, and Athena
- Process and integrate structured and unstructured data, including sensor/IoT and real-time streams
- Optimize pipeline performance and ensure reliability and fault tolerance
- Collaborate with cross-functional teams including data scientists and analysts
- Perform data transformations using Python, Pandas, and SQL
- Maintain data integrity, quality, and security across the platform
- Use Terraform and CI/CD tools (e.g., Azure DevOps) for infrastructure and deployment automation
- Support and monitor pipeline workflows, troubleshoot issues, and implement fixes
- Contribute to the adoption of emerging tools like AWS Bedrock, Textract, Rekognition, and GenAI solutions

Required Skills and Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- 3+ years of experience in data engineering using AWS
- Strong skills in: AWS Glue, Redshift, S3, Lambda, EMR, Athena; Python, Pandas, SQL; RDS, Postgres, SAP HANA
- Solid understanding of data modeling, warehousing, and pipeline orchestration
- Experience with version control (Git) and infrastructure as code (Terraform)

Preferred Skills:
- Experience working with energy sector data or IoT/sensor-based data
- Exposure to machine learning tools and frameworks (e.g., SageMaker, TensorFlow, Scikit-learn)
- Familiarity with big data technologies like Apache Spark and Kafka
- Experience with data visualization tools (Tableau, Power BI, AWS QuickSight)
- Awareness of data governance and catalog tools such as AWS Data Quality, Collibra, and AWS Databrew
- AWS Certifications (Data Analytics, Solutions Architect)
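One concrete shape the sensor-data pipeline work above can take is a small Lambda step that cleans raw CSV drops and writes curated Parquet back to S3. The sketch below assumes pandas and pyarrow are packaged with the function; the bucket names, prefixes, and column names are hypothetical.

```python
# Sketch of an AWS Lambda handler for a sensor-data pipeline step: read a raw
# CSV dropped in S3, do a light clean/aggregate in pandas, and write Parquet
# back to a curated prefix. Bucket, prefixes, and columns are hypothetical,
# and the function assumes pandas/pyarrow are packaged with the Lambda.
import io
import boto3
import pandas as pd

s3 = boto3.client("s3")
CURATED_BUCKET = "example-curated-bucket"  # placeholder

def handler(event, context):
    # Triggered by an S3 put event on the raw bucket.
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]

    raw = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    df = pd.read_csv(io.BytesIO(raw), parse_dates=["timestamp"])

    # Basic cleaning + 5-minute averages per sensor.
    df = df.dropna(subset=["sensor_id", "value"])
    agg = (df.set_index("timestamp")
             .groupby("sensor_id")["value"]
             .resample("5min").mean()
             .reset_index())

    out = io.BytesIO()
    agg.to_parquet(out, index=False)
    out_key = key.replace("raw/", "curated/").replace(".csv", ".parquet")
    s3.put_object(Bucket=CURATED_BUCKET, Key=out_key, Body=out.getvalue())
    return {"rows_written": len(agg), "key": out_key}
```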
Posted 3 weeks ago
0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Education AI / ML Engineer - Required Skills & Qualifications: Bachelor’s or Master’s in Computer Science, Engineering, Data Science, AI/ML, Mathematics, or related field Technical Skills Proficient in Python and ML libraries: scikit-learn, XGBoost, pandas, NumPy, matplotlib, seaborn, etc. Strong understanding of machine learning algorithms and deep learning architectures (CNNs, RNNs, Transformers, etc.) Hands-on with TensorFlow , PyTorch , or Keras Experience in data preprocessing , feature selection , EDA , and model interpretability Comfortable with API development and deploying models using Flask, FastAPI, or similar Experience with MLOps tools like MLflow , Kubeflow , DVC , Airflow , etc. Familiarity with cloud platforms like AWS (SageMaker, S3, Lambda), GCP (Vertex AI), or Azure ML Strong understanding of version control (Git), CI/CD, and containerization (Docker) Bonus Skills (Good To Have) NLP frameworks (e.g., spaCy, NLTK, Hugging Face Transformers) Computer Vision experience using OpenCV or YOLO/Detectron Knowledge of Reinforcement Learning or Generative AI (GANs, LLMs) Experience with vector databases (e.g., Pinecone, Weaviate) and LangChain for AI agent building Familiarity with data labeling platforms and annotation workflows Soft Skills Analytical mindset with problem-solving skills Strong communication and collaboration abilities Ability to work independently in a fast-paced, agile environment Passion for AI/ML and eagerness to stay updated with the latest developments Skills: fastapi,pandas,ml libraries,pinecone,docker,langchain,matplotlib,yolo,numpy,yolo/detectron,nltk,machine learning,spacy,vertex ai,feature selection,lambda,gans,tensorflow,airflow,weaviate,data labeling,nlp frameworks,seaborn,python,git,ai,cnns,xgboost,model interpretability,deep learning architectures,dvc,generative ai,opencv,s3,detectron,aws,data preprocessing,api development,ci/cd,gcp,scikit-learn,transformers,vector databases,mlops tools,mlflow,sagemaker,machine learning algorithms,hugging face transformers,kubeflow,eda,annotation workflows,llms,containerization,reinforcement learning,ml,pytorch,flask,rnns,azure ml,keras
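For the model-development-plus-API-deployment combination listed above, a minimal end-to-end sketch might look like the following: train a small scikit-learn model on synthetic data, persist it with joblib, and serve predictions through FastAPI. The feature names and model choice are placeholders, not a prescribed stack.

```python
# Sketch: train a small scikit-learn model and expose it behind FastAPI,
# matching the "train, persist, serve" flow described above. The dataset is
# synthetic and the feature names are placeholders.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.ensemble import RandomForestClassifier

# --- training (would normally live in a separate pipeline) ---
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = RandomForestClassifier(n_estimators=50, random_state=42).fit(X, y)
joblib.dump(model, "model.joblib")

# --- serving ---
app = FastAPI()
loaded = joblib.load("model.joblib")

class Features(BaseModel):
    f1: float
    f2: float
    f3: float

@app.post("/predict")
def predict(features: Features) -> dict:
    row = np.array([[features.f1, features.f2, features.f3]])
    return {"prediction": int(loaded.predict(row)[0])}

# Run with: uvicorn this_module:app --reload
```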
Posted 3 weeks ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a visionary AI Architect to lead the design and integration of cutting-edge AI systems, including Generative AI, Large Language Models (LLMs), multi-agent orchestration, and retrieval-augmented generation (RAG) frameworks. This role demands a strong technical foundation in machine learning, deep learning, and AI infrastructure, along with hands-on experience in building scalable, production-grade AI systems on the cloud. The ideal candidate combines architectural leadership with hands-on proficiency in modern AI frameworks, and can translate complex business goals into innovative, AI-driven technical solutions.

Primary Stack & Tools:
- Languages: Python, SQL, Bash
- ML/AI Frameworks: PyTorch, TensorFlow, Scikit-learn, Hugging Face Transformers
- GenAI & LLM Tooling: OpenAI APIs, LangChain, LlamaIndex, Cohere, Claude, Azure OpenAI
- Agentic & Multi-Agent Frameworks: LangGraph, CrewAI, Agno, AutoGen
- Search & Retrieval: FAISS, Pinecone, Weaviate, Elasticsearch
- Cloud Platforms: AWS, GCP, Azure (preferred: Vertex AI, SageMaker, Bedrock)
- MLOps & DevOps: MLflow, Kubeflow, Docker, Kubernetes, CI/CD pipelines, Terraform, FastAPI
- Data Tools: Snowflake, BigQuery, Spark, Airflow

Key Responsibilities:
- Architect scalable and secure AI systems leveraging LLMs, GenAI, and multi-agent frameworks to support diverse enterprise use cases (e.g., automation, personalization, intelligent search).
- Design and oversee implementation of retrieval-augmented generation (RAG) pipelines integrating vector databases, LLMs, and proprietary knowledge bases.
- Build robust agentic workflows using tools like LangGraph, CrewAI, or Agno, enabling autonomous task execution, planning, memory, and tool use (a toy orchestration sketch follows this listing).
- Collaborate with product, engineering, and data teams to translate business requirements into architectural blueprints and technical roadmaps.
- Define and enforce AI/ML infrastructure best practices, including security, scalability, observability, and model governance.
- Manage the technical roadmap and sprint cadence, and lead 3–5 AI engineers, coaching them on best practices.
- Lead AI solution design reviews and ensure alignment with compliance, ethics, and responsible AI standards.
- Evaluate emerging GenAI and agentic tools; run proofs-of-concept and guide build-vs-buy decisions.

Qualifications:
- 10+ years of experience in AI/ML engineering or data science, with 3+ years in AI architecture or system design.
- Proven experience designing and deploying LLM-based solutions at scale, including fine-tuning, prompt engineering, and RAG-based systems.
- Strong understanding of agentic AI design principles, multi-agent orchestration, and tool-augmented LLMs.
- Proficiency with cloud-native ML/AI services and infrastructure design across AWS, GCP, or Azure.
- Deep expertise in model lifecycle management, MLOps, and deployment workflows (batch, real-time, streaming).
- Familiarity with data governance, AI ethics, and security considerations in production-grade systems.
- Excellent communication and leadership skills, with the ability to influence technical and business stakeholders.
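The agentic workflows mentioned above (LangGraph, CrewAI, Agno) all revolve around the same orchestration shape: a planner chooses tools, an executor runs them, and shared state carries results forward. The framework-agnostic toy below only illustrates that shape; real frameworks add LLM-driven planning, retries, and graph-structured state.

```python
# Framework-agnostic toy "agentic" loop: a planner picks a registered tool for
# each step and an executor runs it, with a shared memory dict carrying state.
# Illustrative only; tool names and the fixed plan are placeholders.
from typing import Callable

TOOLS: dict[str, Callable[[str], str]] = {}

def tool(name: str):
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        TOOLS[name] = fn
        return fn
    return register

@tool("search")
def search(query: str) -> str:
    return f"(pretend search results for '{query}')"

@tool("summarize")
def summarize(text: str) -> str:
    return text[:60] + "..."

def run_agent(goal: str) -> dict:
    memory = {"goal": goal}
    plan = [("search", goal), ("summarize", None)]  # a real agent would plan with an LLM
    for step, (tool_name, arg) in enumerate(plan, start=1):
        arg = arg if arg is not None else memory[f"step_{step - 1}"]
        memory[f"step_{step}"] = TOOLS[tool_name](arg)
    return memory

print(run_agent("latest RAG evaluation techniques"))
```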
Posted 3 weeks ago
3.0 years
0 Lacs
India
Remote
Job Title: AWS Data Engineer 📍 Location: Remote (India) 🕒 Experience: 3+ Years 💼 Employment Type: Full-Time About the Role: We’re looking for a skilled AWS Data Engineer with 3+ years of hands-on experience in building and managing robust, scalable data pipelines using AWS services. The ideal candidate will have a strong foundation in processing both structured and unstructured data, particularly from IoT/sensor sources. Experience in the energy sector and with time-series data is highly desirable. Key Responsibilities: Design, develop, and maintain scalable ETL/ELT pipelines using AWS Glue, Redshift, Lambda, EMR, S3, and Athena Integrate and process structured, unstructured, and real-time sensor/IoT data Ensure pipeline performance, reliability, and fault tolerance Collaborate with data scientists, analysts, and engineering teams to build analytics-ready solutions Transform data using Python, Pandas , and SQL Enforce data integrity, quality, and security standards Use Terraform and CI/CD tools (e.g., Azure DevOps) for infrastructure and deployment automation Monitor workflows, troubleshoot pipeline issues, and implement solutions Explore and contribute to the use of modern AWS tools like Bedrock, Textract, Rekognition , and GenAI applications Required Skills & Qualifications: Bachelor’s/Master’s in Computer Science, IT, or related field Minimum 3 years of experience in AWS data engineering Proficient in: AWS Glue, Redshift, S3, Lambda, EMR, Athena Python, Pandas, SQL RDS, Postgres, SAP HANA Strong knowledge of data modeling, warehousing, and pipeline orchestration Experience with Git and Infrastructure as Code using Terraform Preferred Skills: Experience with energy sector data or sensor-based/IoT data Exposure to ML tools like SageMaker, TensorFlow, Scikit-learn Familiarity with Apache Spark, Kafka Experience with data visualization tools: Tableau, Power BI, AWS QuickSight Awareness of data governance tools like AWS Data Quality, Collibra, Databrew AWS certifications (e.g., Data Analytics Specialty, Solutions Architect Associate)
Posted 3 weeks ago
0.0 - 3.0 years
0 Lacs
Hyderabad, Telangana
On-site
General information Country India State Telangana City Hyderabad Job ID 44314 Department Development Description & Requirements Summary: As an AI/ML Developer, you’ll play a pivotal role in creating and delivering cutting-edge enterprise applications and automations using Infor’s AI, RPA, and OS platform technology. Your mission will be to identify innovative use cases, develop proof of concepts (PoCs), and deliver enterprise automation solutions that elevate workforce productivity and improve business performance for our customers. Key Responsibilities: Use Case Identification: Dive deep into customer requirements and business challenges. Identify innovative use cases that can be addressed through AI/ML solutions. Data Insights: Perform exploratory data analysis on large and complex datasets. Assess data quality, extract insights, and share findings. Data Preparation: Gather relevant datasets for training and testing. Clean, preprocess, and augment data to ensure suitability for AI tasks. Model Development: Train and fine-tune AI/ML models. Evaluate performance using appropriate metrics and benchmarks, optimizing for efficiency. Integration and Deployment: Collaborate with software engineers and developers to seamlessly integrate AI/ML models into enterprise systems and applications. Handle production deployment challenges. Continuous Improvement: Evaluate and enhance the performance and capabilities of deployed AI products. Monitor user feedback and iterate on models and algorithms to address limitations and enhance user experience. Proof of Concepts (PoCs): Develop PoCs to validate the feasibility and effectiveness of proposed solutions. Showcase the art of the possible to our clients. Collaboration with Development Teams: Work closely with development teams on new use cases. Best Practices and Requirements: Collaborate with team members to determine best practices and requirements. Innovation: Contribute to our efforts in enterprise automation and cloud innovation. Key Requirements: Experience: A minimum 3 years of hands-on experience in implementing AI/ML models in enterprise systems. AI/ML Concepts: In-depth understanding of supervised and unsupervised learning, reinforcement learning, deep learning, and probabilistic models. Programming Languages: Proficiency in Python or R, along with querying languages like SQL. Data Handling: Ability to work with large datasets, perform data preprocessing, and wrangle data effectively. Cloud Infrastructure: Experience with AWS Sagemaker or Azure ML for implementing ML solutions is highly preferred. Frameworks and Libraries: Familiarity with scikit-learn, Keras, TensorFlow, PyTorch, or NLTK is a plus. Analytical Skills: Strong critical thinking abilities to identify problems, formulate hypotheses, and design experiments. Business Process Understanding: Good understanding of business processes and how they can be automated. Domain Expertise: Familiarity with Demand Forecasting, Anomaly Detection, Pricing, Recommendation, or Analytics solutions. Global Project Experience: Proven track record of working with global customers on multiple projects. Customer Interaction: Experience facing customers and understanding their needs. Communication Skills: Excellent verbal and written communication skills. Analytical Mindset: Strong analytical and problem-solving skills. Collaboration: Ability to work independently and collaboratively. Educational Background: Bachelor’s or Master’s degree in Computer Science, Mathematics, Statistics, or a related field. 
Specialization: Coursework or specialization in AI, ML, Statistics & Probability, Deep Learning, Computer Vision, or NLP/NLU is advantageous. About Infor Infor is a global leader in business cloud software products for companies in industry specific markets. Infor builds complete industry suites in the cloud and efficiently deploys technology that puts the user experience first, leverages data science, and integrates easily into existing systems. Over 60,000 organizations worldwide rely on Infor to help overcome market disruptions and achieve business-wide digital transformation. For more information visit www.infor.com Our Values At Infor, we strive for an environment that is founded on a business philosophy called Principle Based Management™ (PBM™) and eight Guiding Principles: integrity, stewardship & compliance, transformation, principled entrepreneurship, knowledge, humility, respect, self-actualization. Increasing diversity is important to reflect our markets, customers, partners, and communities we serve in now and in the future. We have a relentless commitment to a culture based on PBM. Informed by the principles that allow a free and open society to flourish, PBM™ prepares individuals to innovate, improve, and transform while fostering a healthy, growing organization that creates long-term value for its clients and supporters and fulfillment for its employees. Infor is an Equal Opportunity Employer. We are committed to creating a diverse and inclusive work environment. Infor does not discriminate against candidates or employees because of their sex, race, gender identity, disability, age, sexual orientation, religion, national origin, veteran status, or any other protected status under the law. If you require accommodation or assistance at any time during the application or selection processes, please submit a request by following the directions located in the FAQ section at the bottom of the infor.com/about/careers webpage.
Posted 3 weeks ago
0.0 years
0 Lacs
Bengaluru, Karnataka
On-site
GE Healthcare | Healthcare Information Technology | Category: Digital Technology / IT | Mid-Career | Job Id: R4026614 | Relocation Assistance: No | Location: Bengaluru, Karnataka, India, 560066

Job Description Summary
We are seeking a highly skilled and innovative AI Engineer with expertise in both traditional Artificial Intelligence and emerging Generative AI technologies. In this role, you will be responsible for designing, developing, and deploying intelligent systems that leverage machine learning, deep learning, and generative models to solve complex problems. You will work across the AI lifecycle—from data engineering and model development to deployment and monitoring—while also exploring GenAI applications, Agentic AI, and developing agentic platforms. The ideal candidate combines strong technical acumen with a passion for experimentation, rapid prototyping, and delivering scalable AI solutions in real-world environments. GE HealthCare is a leading global medical technology and digital solutions innovator. Our purpose is to create a world where healthcare has no limits. Unlock your ambition, turn ideas into world-changing realities, and join an organization where every voice makes a difference, and every difference builds a healthier world.

Job Description
Roles and Responsibilities
In this role, you will:
- Develop and fine-tune Generative AI models (e.g., LLMs, diffusion models).
- Design and implement machine learning models for classification, regression, clustering, and recommendation tasks.
- Build and maintain scalable AI pipelines for data ingestion, training, evaluation, and deployment.
- Collaborate with cross-functional teams to understand business needs and translate them into AI solutions.
- Ensure model performance, fairness, and explainability through rigorous testing and validation.
- Deploy models to production using MLOps tools and monitor their performance over time.
- Stay current with the latest research and trends in AI/ML and GenAI and evaluate their applicability to business problems.
- Document models, experiments, and workflows for reproducibility and knowledge sharing.

Technical Skill Set
Cloud & Infrastructure (AWS)
- Amazon SageMaker – model training, tuning, deployment, and MLOps.
- Amazon Bedrock – serverless GenAI model access and orchestration.
- SageMaker JumpStart – pre-trained models and GenAI templates.
- Prompt engineering and fine-tuning of LLMs using SageMaker or Bedrock.
Programming & Scripting
- Python – primary language for AI/ML development, data processing, and automation.

Education Qualification
Bachelor's degree in engineering with a minimum of four years of experience in relevant technologies.

Desired Characteristics
Technical Expertise:
GenAI Platforms & Models
- Familiarity with LLMs such as Claude (Anthropic), LLaMA (Meta), Gemini (Google), Mistral, Falcon
- Experience with APIs: Amazon Bedrock
- Understanding of model types: encoder-decoder, decoder-only, diffusion models
- Design, develop, and deploy agent-based AI systems that exhibit autonomous decision-making
- Integrate Generative AI (LLMs, diffusion models) into real-world applications
Prompt Engineering & Fine-Tuning
- Prompt design for zero-shot, few-shot, and chain-of-thought reasoning
- Fine-tuning and parameter-efficient tuning (LoRA, PEFT)
- Retrieval-Augmented Generation (RAG) design and implementation
System Integration & Architecture
- Event-driven and serverless architectures (e.g., AWS Lambda, EventBridge)
Development Frameworks
- LangChain, LlamaIndex
Vector databases: FAISS, Pinecone, Weaviate, Amazon OpenSearch Langgraph, Langchain Cloud & DevOps AWS (Bedrock, SageMaker, Lambda, S3), Azure (OpenAI, Functions), GCP (Vertex AI) CI/CD pipelines for GenAI workflows Security & Compliance Data privacy and governance (GDPR, HIPAA) Model safety: content filtering, moderation, hallucination control Monitoring & Optimization Model performance tracking (latency, cost, accuracy) Logging and observability (CloudWatch, Prometheus, Grafana) Cost optimization strategies for GenAI inference Collaboration & Business Alignment Working with product, legal, and compliance teams Translating business requirements into GenAI use cases Creating PoCs and scaling to production Business Acumen: Demonstrates the initiative to explore alternate technology and approaches to solving problems Skilled in breaking down problems, documenting problem statements and estimating efforts Has the ability to analyze impact of technology choices Skilled in negotiation to align stakeholders and communicate a single synthesized perspective to the scrum team. Balances value propositions for competing stakeholders. Demonstrates knowledge of the competitive environment Demonstrates knowledge of technologies in the market to help make buy vs build recommendations, scope MVPs, and to drive market timing decisions Leadership: Influences through others; builds direct and "behind the scenes" support for ideas. Pre-emptively sees downstream consequences and effectively tailors influencing strategy to support a positive outcome. Able to verbalize what is behind decisions and downstream implications. Continuously reflecting on success and failures to improve performance and decision-making. Understands when change is needed. Participates in technical strategy planning. Personal Attributes: Able to effectively direct and mentor others in critical thinking skills. Proactively engages with cross-functional teams to resolve issues and design solutions using critical thinking and analysis skills and best practices. Finds important patterns in seemingly unrelated information. Influences and energizes other toward the common vision and goal. Maintains excitement for a process and drives to new directions of meeting the goal even when odds and setbacks render one path impassable. Innovates and integrates new processes and/or technology to significantly add value to GE Healthcare. Identifies how the cost of change weighs against the benefits and advises accordingly. Proactively learns new solutions and processes to address seemingly unanswerable problems. Inclusion and Diversity GE Healthcare is an Equal Opportunity Employer where inclusion matters. Employment decisions are made without regard to race, color, religion, national or ethnic origin, sex, sexual orientation, gender identity or expression, age, disability, protected veteran status or other characteristics protected by law. We expect all employees to live and breathe our behaviors: to act with humility and build trust; lead with transparency; deliver with focus, and drive ownership – always with unyielding integrity. Our total rewards are designed to unlock your ambition by giving you the boost and flexibility you need to turn your ideas into world-changing realities. Our salary and benefits are everything you’d expect from an organization with global strength and scale, and you’ll be surrounded by career opportunities in a culture that fosters care, collaboration and support. 
#LI-MA6 Additional Information Relocation Assistance Provided: No
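The GE HealthCare listing above highlights prompt design for zero-shot, few-shot, and chain-of-thought reasoning. Below is a small, framework-free sketch of how such a prompt might be assembled; the worked examples are invented, and the resulting string would be sent to whichever LLM endpoint is in use.

```python
# Prompt-construction sketch for zero-shot vs. few-shot prompting with a
# chain-of-thought style instruction. Pure string assembly: the examples are
# invented placeholders.
FEW_SHOT_EXAMPLES = [
    {"question": "Scan shows 3 series of 120 slices each. Total images?",
     "reasoning": "3 series x 120 slices = 360 images.",
     "answer": "360"},
    {"question": "A study has 2 series of 80 slices. Total images?",
     "reasoning": "2 series x 80 slices = 160 images.",
     "answer": "160"},
]

def build_prompt(question: str, few_shot: bool = True) -> str:
    parts = ["Answer the question. Think step by step, then give a final answer."]
    if few_shot:
        for ex in FEW_SHOT_EXAMPLES:
            parts.append(
                f"Q: {ex['question']}\nReasoning: {ex['reasoning']}\nA: {ex['answer']}"
            )
    parts.append(f"Q: {question}\nReasoning:")
    return "\n\n".join(parts)

print(build_prompt("4 series of 50 slices each. Total images?"))
```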
Posted 3 weeks ago
3.0 - 5.0 years
0 Lacs
Faridabad, Haryana, India
On-site
Position: Senior AI/ML Engineer - NLP/Python
Experience: 3 to 5 Years
Location: Mohan Corporate Office (Work from Office Only)
Job Type: Full-Time
Salary: To be discussed during the interview

Key Responsibilities:
- Design, develop, and deploy AI/ML models for real-world applications.
- Work with NLP, deep learning, and traditional ML algorithms to solve complex business problems.
- Develop end-to-end ML pipelines, including data preprocessing, feature engineering, model training, and deployment.
- Optimize model performance using hyperparameter tuning and model evaluation techniques.
- Implement AI-driven solutions using TensorFlow, PyTorch, Scikit-learn, OpenAI APIs, Hugging Face, and similar frameworks.
- Work with structured and unstructured data, performing data wrangling, transformation, and feature extraction.
- Deploy models in cloud environments (AWS, Azure, or GCP) using SageMaker, Vertex AI, or Azure ML.
- Collaborate with cross-functional teams to integrate AI models into production systems.
- Ensure scalability, performance, and efficiency of AI/ML solutions.
- Stay updated with emerging AI trends and technologies to drive innovation.

Required Skills:
- Strong experience in machine learning, deep learning, NLP, and AI model development.
- Implement Retrieval-Augmented Generation (RAG) using vector databases.
- Proficiency in Python, TensorFlow, PyTorch, Scikit-learn, and OpenAI GPT models.
- Expertise in NLP techniques (Word2Vec, BERT, transformers, LLMs, text classification); a small illustrative classification sketch follows this listing.
- Hands-on experience with computer vision (CNNs, OpenCV, YOLO, custom object detection models).
- Solid understanding of ML model deployment and MLOps (Docker, Kubernetes, CI/CD for ML models).
- Experience in working with cloud platforms (AWS, Azure, GCP) for AI/ML model deployment.
- Strong knowledge of SQL, NoSQL databases, and big data processing tools (PySpark, Databricks, Hadoop, Kafka, etc.).
- Familiarity with API development using Django, Flask, or FastAPI for AI solutions.
- Strong problem-solving, analytical, and communication skills.

Preferred Skills:
- Experience with AI-powered chatbots and OpenAI API integration.
- Exposure to LLMs (GPT, LLaMA, Falcon, etc.) for real-world applications.
- Hands-on experience in generative AI models.
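As a tiny illustration of the NLP/text-classification skills listed above, the sketch below runs a Hugging Face pipeline with its default sentiment model; the inputs are placeholders, and a domain-specific checkpoint would replace the default for real work.

```python
# Tiny text-classification sketch with a Hugging Face pipeline. The default
# model is downloaded on first run; swap in a fine-tuned checkpoint for
# production use. Illustrative only.
from transformers import pipeline  # pip install transformers

classifier = pipeline("sentiment-analysis")  # uses a default English model

tickets = [
    "The deployment failed again and nobody responded to my ticket.",
    "Great job, the new model cut our processing time in half!",
]
for ticket, result in zip(tickets, classifier(tickets)):
    print(f"{result['label']:>8}  {result['score']:.2f}  {ticket}")
```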
Posted 3 weeks ago
6.0 - 8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Role: Juniper AI SME
Experience: 6 - 8 Years
Start Date: Immediate / 15 days
Type: Full-time

Job Summary
We are seeking a highly skilled and innovative AI Solutions Specialist to lead the design, development, and deployment of AI-driven solutions tailored to complex business challenges. The ideal candidate will be responsible for building scalable, ethical, and cutting-edge AI/ML systems, collaborating with cross-functional teams to implement intelligent solutions, and leveraging industry best practices in data science, machine learning, and cloud-native architectures.

Key Responsibilities
- Lead the development of Data Engineering, Machine Learning (ML), and AI capabilities across the full solution lifecycle.
- Collaborate with project, data science, and development teams to define and implement AI/ML technical roadmaps.
- Engage with stakeholders to identify AI/ML opportunities aligned with business needs.
- Architect and implement scalable AI solutions integrated with existing systems and infrastructure.
- Design, train, test, and optimize machine learning models and AI algorithms.
- Evaluate third-party AI tools, APIs, and platforms for potential integration.
- Develop technical documentation, solution architecture, and proof-of-concept prototypes.
- Partner with data engineering teams to ensure data availability, quality, and integrity.
- Monitor AI models in production, ensuring continuous learning, tuning, and performance improvement.
- Uphold responsible AI practices, including bias mitigation, explainability, and data privacy.
- Communicate AI strategy and progress to both technical and business stakeholders.
- Continuously research and apply the latest AI trends, techniques, and tools.
- Work within MLOps frameworks and deployment pipelines including Docker, MLflow, and CI/CD.

Technical Skills & Tools
- Expertise in machine learning libraries and frameworks: TensorFlow, PyTorch, Scikit-learn.
- Proficient in programming: Python.
- Familiarity with AI/ML platforms: AWS SageMaker, Azure ML, Google Vertex AI, Databricks, Snowflake.
- Infrastructure and deployment tools: Docker, Kubernetes, Terraform, Ansible, Prometheus, Grafana, ELK.
- Data technologies: Hadoop, Spark, Kafka, SQL, NoSQL, Postgres, Cassandra, Elasticsearch.
- CRM and enterprise systems: Salesforce.
- Experience in data preprocessing, feature engineering, and model evaluation.
- Exposure to Generative AI, LLMs (e.g., GPT, diffusion models), and deep learning techniques.
- Experience with natural language processing (NLP), computer vision, or reinforcement learning.
- Knowledge of Junos OS architecture, including its operational and feature-specific nuances.

Preferred Qualifications
- Publications, patents, or open-source contributions in AI/ML.
- Proven leadership in building AI systems at scale.
- Excellent analytical, communication, and stakeholder engagement skills.

Experience
- Minimum 6-8 years in data and analytics with a strong focus on AI/ML, data platforms, and data engineering.
- Experience in leading architecture and infrastructure for end-to-end AI/ML lifecycle management.
- Deep understanding of technology trends, architectures, and integration strategies in AI.
- Hands-on expertise in predictive modeling, NLP, deep learning, and information retrieval.
Posted 3 weeks ago
0.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Our industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to breakthrough solutions, we tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Senior Principal Consultant - AWS AI/ML Engineer

Responsibilities
- Design, develop, and deploy scalable AI/ML solutions using AWS services such as Amazon Bedrock, SageMaker, Amazon Q, Amazon Lex, Amazon Connect, and Lambda.
- Implement and optimize large language model (LLM) applications using Amazon Bedrock, including prompt engineering, fine-tuning, and orchestration for specific business use cases.
- Build and maintain end-to-end machine learning pipelines using SageMaker for model training, tuning, deployment, and monitoring.
- Integrate conversational AI and virtual assistants using Amazon Lex and Amazon Connect, with seamless user experiences and real-time inference.
- Leverage AWS Lambda for event-driven execution of model inference, data preprocessing, and microservices.
- Design and maintain scalable and secure data pipelines and AI workflows, ensuring efficient data flow to and from Redshift and other AWS data stores.
- Implement data ingestion, transformation, and model inference for structured and unstructured data using Python and AWS SDKs.
- Collaborate with data engineers and scientists to support development and deployment of ML models on AWS.
- Monitor AI/ML applications in production, ensuring optimal performance, low latency, and cost efficiency across all AI/ML services.
- Ensure implementation of AWS security best practices, including IAM policies, data encryption, and compliance with industry standards.
- Drive the integration of Amazon Q for enterprise AI-based assistance and automation across internal processes and systems.
- Participate in architecture reviews and recommend best-fit AWS AI/ML services for evolving business needs.
- Stay up to date with the latest advancements in AWS AI services, LLMs, and industry trends to inform technology strategy and innovation.
- Prepare documentation for ML pipelines, model performance reports, and system architecture.

Qualifications we seek in you
Minimum Qualifications
- Proven hands-on experience with Amazon Bedrock, SageMaker, Lex, Connect, Lambda, and Redshift.
- Strong knowledge and application experience with Large Language Models (LLMs) and prompt engineering techniques.
- Experience building production-grade AI applications using AWS AI or other generative AI services.
- Solid programming experience in Python for ML development, data processing, and automation.
- Proficiency in designing and deploying conversational AI/chatbot solutions using Lex and Connect.
- Experience with Redshift for data warehousing and analytics integration with ML solutions.
- Good understanding of AWS architecture, scalability, availability, and security best practices.
- Familiarity with AWS development, deployment, and monitoring tools (CloudWatch, CodePipeline, etc.).
- Strong understanding of MLOps practices including model versioning, CI/CD pipelines, and model monitoring.
- Strong communication and interpersonal skills to collaborate with cross-functional teams and stakeholders.
- Ability to troubleshoot performance bottlenecks and optimize cloud resources for cost-effectiveness.

Preferred Qualifications
- AWS Certification in Machine Learning, Solutions Architect, or AI Services.
- Experience with other AI tools (e.g., Anthropic Claude, OpenAI APIs, or Hugging Face).
- Knowledge of streaming architectures and services like Kafka or Kinesis.
- Familiarity with Databricks and its integration with AWS services.

Why join Genpact?
- Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation
- Make an impact - Drive change for global enterprises and solve business challenges that matter
- Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities
- Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day
- Thrive in a values-driven culture - Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
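Several of the responsibilities above centre on invoking foundation models through Amazon Bedrock from Python. A minimal boto3 sketch follows; the region, model id, and the Anthropic-style request body are assumptions to be checked against the models actually enabled in a given account.

```python
# Minimal sketch of calling a foundation model through Amazon Bedrock with
# boto3. The model id and request schema below follow the Anthropic "messages"
# format and should be treated as assumptions; check the Bedrock docs for the
# model actually enabled in your account and region.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # placeholder region

def ask(prompt: str) -> str:
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 300,
        "messages": [{"role": "user", "content": prompt}],
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model id
        body=json.dumps(body),
        contentType="application/json",
        accept="application/json",
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]

print(ask("Summarize the last customer call in two sentences."))
```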
Posted 3 weeks ago
0.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Our industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to breakthrough solutions, we tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Lead Consultant - AWS AI/ML Engineer

Responsibilities
- Design, develop, and deploy scalable AI/ML solutions using AWS services such as Amazon Bedrock, SageMaker, Amazon Q, Amazon Lex, Amazon Connect, and Lambda.
- Implement and optimize large language model (LLM) applications using Amazon Bedrock, including prompt engineering, fine-tuning, and orchestration for specific business use cases.
- Build and maintain end-to-end machine learning pipelines using SageMaker for model training, tuning, deployment, and monitoring.
- Integrate conversational AI and virtual assistants using Amazon Lex and Amazon Connect, with seamless user experiences and real-time inference.
- Leverage AWS Lambda for event-driven execution of model inference, data preprocessing, and microservices.
- Design and maintain scalable and secure data pipelines and AI workflows, ensuring efficient data flow to and from Redshift and other AWS data stores.
- Implement data ingestion, transformation, and model inference for structured and unstructured data using Python and AWS SDKs.
- Collaborate with data engineers and scientists to support development and deployment of ML models on AWS.
- Monitor AI/ML applications in production, ensuring optimal performance, low latency, and cost efficiency across all AI/ML services.
- Ensure implementation of AWS security best practices, including IAM policies, data encryption, and compliance with industry standards.
- Drive the integration of Amazon Q for enterprise AI-based assistance and automation across internal processes and systems.
- Participate in architecture reviews and recommend best-fit AWS AI/ML services for evolving business needs.
- Stay up to date with the latest advancements in AWS AI services, LLMs, and industry trends to inform technology strategy and innovation.
- Prepare documentation for ML pipelines, model performance reports, and system architecture.

Qualifications we seek in you
Minimum Qualifications
- Proven hands-on experience with Amazon Bedrock, SageMaker, Lex, Connect, Lambda, and Redshift.
- Strong knowledge and application experience with Large Language Models (LLMs) and prompt engineering techniques.
- Experience building production-grade AI applications using AWS AI or other generative AI services.
- Solid programming experience in Python for ML development, data processing, and automation.
Proficiency in designing and deploying conversational AI/chatbot solutions using Lex and Connect. Experience with Redshift for data warehousing and analytics integration with ML solutions. Good understanding of AWS architecture, scalability, availability, and security best practices. Familiarity with AWS development, deployment, and monitoring tools (CloudWatch, CodePipeline, etc.). Strong understanding of MLOps practices including model versioning, CI/CD pipelines, and model monitoring. Strong communication and interpersonal skills to collaborate with cross-functional teams and stakeholders. Ability to troubleshoot performance bottlenecks and optimize cloud resources for cost-effectiveness. Preferred Qualifications: AWS Certification in Machine Learning, Solutions Architect, or AI Services. Experience with other AI tools (e.g., Anthropic Claude, OpenAI APIs, or Hugging Face). Knowledge of streaming architectures and services like Kafka or Kinesis. Familiarity with Databricks and its integration with AWS services. Why join Genpact? Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation Make an impact - Drive change for global enterprises and solve business challenges that matter Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture - Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
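Illustrative sketch for the Amazon Bedrock responsibilities named in the listing above. This is a minimal, hedged example of invoking a Bedrock-hosted foundation model with boto3; the region, model ID, and prompt payload are assumptions for illustration (the request/response schema differs by model provider), not details taken from the posting.

```python
# Minimal sketch: invoking a foundation model on Amazon Bedrock with boto3.
# Assumptions: AWS credentials are configured, the caller has bedrock:InvokeModel
# permission, and the chosen model is enabled in the account/region used.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # region is an assumption


def ask_model(prompt: str) -> str:
    # Anthropic Claude models on Bedrock expect the "messages" payload shown here;
    # other providers (Titan, Llama, etc.) use different request schemas.
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": prompt}],
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # hypothetical model choice
        contentType="application/json",
        accept="application/json",
        body=json.dumps(body),
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]


if __name__ == "__main__":
    print(ask_model("Summarize the benefits of managed ML pipelines in two sentences."))
```

In practice, the same call would usually sit behind prompt templates and orchestration code rather than being invoked directly, as the role description suggests.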
Posted 3 weeks ago
2.0 - 5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Inviting applications for the role of Senior Principal Consultant - ML Engineers! In this role, you will lead the automation and orchestration of our machine learning infrastructure and CI/CD pipelines on public cloud (preferably AWS). This role is essential for enabling scalable, secure, and reproducible deployments of both classical AI/ML models and Generative AI solutions in production environments. Responsibilities Develop and maintain CI/CD pipelines for AI/GenAI models on AWS using GitHub Actions and CodePipeline (not limited to these tools). Automate infrastructure provisioning using IaC (Terraform, Bicep, etc.) on any cloud platform - Azure or AWS. Package and deploy AI/GenAI models on AWS (SageMaker, Lambda, API Gateway). Write Python scripts for automation, deployment, and monitoring. Engage in the design, development and maintenance of data pipelines for various AI use cases. Contribute actively to key deliverables as part of an agile development team. Set up model monitoring, logging, and alerting (e.g., drift, latency, failures). Ensure model governance, versioning, and traceability across environments. Collaborate with others to source, analyse, test and deploy data processes. Experience in GenAI projects. Qualifications we seek in you! Minimum Qualifications Experience with MLOps practices. Degree/qualification in Computer Science or a related field, or equivalent work experience. Experience developing, testing, and deploying data pipelines. Strong Python programming skills. Hands-on experience in deploying 2-3 AI/GenAI models in AWS. Familiarity with LLM APIs (e.g., OpenAI, Bedrock) and vector databases. Clear and effective communication skills to interact with team members, stakeholders and end users. Preferred Qualifications/Skills Experience with Docker-based deployments. Exposure to model monitoring tools (Evidently, CloudWatch). Familiarity with RAG stacks or fine-tuning LLMs. Understanding of GitOps practices. Knowledge of governance and compliance policies, standards, and procedures.
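As context for the Lambda/SageMaker/API Gateway deployment pattern this listing describes, here is a minimal, hedged sketch of a Lambda handler that forwards an API Gateway request to a SageMaker endpoint. The endpoint name, environment variable, and JSON payload shape are illustrative assumptions, not real resources from the posting.

```python
# Minimal sketch: a Lambda handler that proxies inference requests to a SageMaker endpoint.
# Assumptions: the endpoint already exists, accepts and returns JSON, and the function's
# IAM role allows sagemaker:InvokeEndpoint. All names below are placeholders.
import json
import os
import boto3

sagemaker_runtime = boto3.client("sagemaker-runtime")


def handler(event, context):
    # With API Gateway proxy integration, the request body arrives as a JSON string.
    payload = json.loads(event.get("body") or "{}")

    response = sagemaker_runtime.invoke_endpoint(
        EndpointName=os.environ.get("ENDPOINT_NAME", "my-model-endpoint"),  # placeholder
        ContentType="application/json",
        Body=json.dumps(payload),
    )
    prediction = json.loads(response["Body"].read())

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"prediction": prediction}),
    }
```

A CI/CD pipeline of the kind described above would typically package this handler, provision the endpoint via IaC, and promote it across environments.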
Posted 3 weeks ago
0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions - we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Consultant, Senior Data Scientist! In this role, you will have a strong background in Gen AI implementations, data engineering, developing ETL processes, and utilizing machine learning tools to extract insights and drive business decisions. The Data Scientist will be responsible for analysing large datasets, developing predictive models, and communicating findings to various stakeholders. Responsibilities Develop and maintain machine learning models to identify patterns and trends in large datasets. Utilize Gen AI and various LLMs to design & develop production-ready use cases. Collaborate with cross-functional teams to identify business problems and develop data-driven solutions. Communicate complex data findings and insights to non-technical stakeholders in a clear and concise manner. Continuously monitor and improve the performance of existing models and processes. Stay up to date with industry trends and advancements in data science and machine learning. Design and implement data models and ETL processes to extract, transform, and load data from various sources. Good hands-on experience in AWS Bedrock models, SageMaker, Lambda, etc. Data Exploration & Preparation - Conduct exploratory data analysis and clean large datasets for modeling. Business Strategy & Decision Making - Translate data insights into actionable business strategies. Mentor Junior Data Scientists - Provide guidance and expertise to junior team members. Collaborate with Cross-Functional Teams - Work with engineers, product managers, and stakeholders to align data solutions with business goals. Qualifications we seek in you! Minimum Qualifications Bachelor's / Master's degree in Computer Science, Statistics, Mathematics, or a related field. Relevant years of experience in a data science or analytics role. Strong proficiency in SQL and experience with data warehousing and ETL processes. Experience with programming languages such as Python & R is a must (either one). Familiarity with machine learning tools and libraries such as Pandas, scikit-learn and AI libraries. Excellent knowledge of Gen AI, RAG, LLM models & a strong understanding of prompt engineering. Proficiency in Azure OpenAI & AWS SageMaker implementation. Good understanding of statistical techniques and advanced machine learning. Experience with data warehousing and ETL processes.
Familiarity with cloud-based data platforms such as AWS, Azure, or Google Cloud. Experience with Azure ML Studio is desirable. Knowledge of different machine learning algorithms and their applications. Familiarity with data preprocessing and feature engineering techniques. Preferred Qualifications/Skills Experience with model evaluation and performance metrics. Understanding of deep learning and neural networks is a plus. Certification in AWS Machine Learning or as an AWS Infra Engineer is a plus. Why join Genpact? Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation Make an impact - Drive change for global enterprises and solve business challenges that matter Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture - Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 3 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position Overview Job Title: AI Lead , VP Location: Pune, India Role Description Engineer is responsible for managing or performing work across multiple areas of the bank's overall IT Platform/Infrastructure including analysis, development, and administration. It may also involve taking functional oversight of engineering delivery for specific departments. Work includes: Planning and developing entire engineering solutions to accomplish business goals Building reliability and resiliency into solutions with appropriate testing and reviewing throughout the delivery lifecycle Ensuring maintainability and reusability of engineering solutions Ensuring solutions are well architected and can be integrated successfully into the end-to-end business process flow Reviewing engineering plans and quality to drive re-use and improve engineering capability Participating in industry forums to drive adoption of innovative technologies, tools and solutions in the Bank. Deutsche Bank’s Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support. What We’ll Offer You As part of our flexible scheme, here are just some of the benefits that you’ll enjoy Best in class leave policy Gender neutral parental leaves 100% reimbursement under childcare assistance benefit (gender neutral) Sponsorship for Industry relevant certifications and education Employee Assistance Program for you and your family members Comprehensive Hospitalization Insurance for you and your dependents Accident and Term life Insurance Complementary Health screening for 35 yrs. and above Your Key Responsibilities Proven ability to design, build, and deploy end-to-end AI/ML systems in production. Expertise in data science, including statistical analysis, experimentation, and data storytelling. Experienced in working with large-scale, real-world datasets for model training and analysis. Comfortable navigating urgency, ambiguity, and fast-changing priorities. Skilled at solving complex ML problems independently, from idea to implementation. Strong leadership experience building and guiding high-performing AI teams. Hands-on with deep learning, NLP, LLMs, and classical ML techniques. Fluent in model experimentation, tuning, optimisation, and evaluation at scale. Solid software engineering background and comfort working with data and full-stack teams. Experience with cloud platforms (GCP, AWS, Azure) and production-grade ML pipelines. Bias for action — willing to jump into code, data, or ops to ship impactful AI products. Your Skills And Experience PhD in Computer Science, Data Science, Machine Learning, AI, or a related field. 
Strong programming skills in Python (preferred), with experience in Java, Scala, or Go a plus. Deep expertise with ML frameworks like TensorFlow, PyTorch, Scikit-learn. Experience with large-scale datasets, distributed data processing (e.g. Spark, Beam, Airflow). Solid foundation in data science: statistical modeling, A/B testing, time series, and experimentation. Proficient in NLP, deep learning, and working with transformer-based LLMs. Experience with MLOps practices — CI/CD, model serving, monitoring, and lifecycle management. Hands-on with cloud platforms (GCP, AWS, Azure) and tools like Vertex AI, SageMaker, or Databricks. Strong grasp of API design, system integration, and delivering AI features in full-stack products. Comfortable working with SQL/NoSQL, and data warehouses like BigQuery or Snowflake. Familiar with ethical AI, model explainability, and secure, privacy-aware AI development How We’ll Support You Training and development to help you excel in your career Coaching and support from experts in your team A culture of continuous learning to aid progression A range of flexible benefits that you can tailor to suit your needs About Us And Our Teams Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.
Posted 3 weeks ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
An Amazing Career Opportunity for AI/ML Engineer Location: Chennai, India (Hybrid) Job ID: 39477 Profile Summary: We are committed to leveraging Artificial Intelligence to develop innovative solutions that enhance the experiences of both our internal and external customers. Our objective is to harness our internal knowledge base and integrate it seamlessly with business applications, enabling the delivery of intelligent, user-centric digital experiences. We are seeking a highly skilled AI/ML Engineer with a passion for continuous learning and staying ahead of emerging technologies. The ideal candidate will be instrumental in designing, developing, and scaling AI-powered solutions that address dynamic business challenges and evolving customer expectations. About HID Global HID Global powers the trusted identities of the world’s people, places and things. We make it possible for people to transact safely, work productively and travel freely. Our trusted identity solutions give people secure and convenient access to physical and digital places and connect things that can be accurately identified, verified and tracked digitally. Millions of people around the world use HID products and services to navigate their everyday lives, and over 2 billion things are connected through HID. We work with governments, educational institutions, hospitals, financial institutions, industrial businesses, and some of the most innovative companies on the planet. Headquartered in Austin, Texas, HID Global has over 4500 employees worldwide and operates international offices that support more than 100 countries. HID Global® is an ASSA ABLOY Group brand. HID Global is the trusted source for secure identity solutions for millions of customers and users around the world. In India, we have two Engineering Centres (Bangalore and Chennai). The Global Engineering Team is based in Chennai and one of the Business Unit Engineering teams is based in Bangalore. Check us out: www.hidglobal.com and https://youtu.be/23km5H4K9Eo LinkedIn: www.linkedin.com/company/hidglobal/mycompany/ Physical Access Control Solutions (PACS): HID Physical Access Control Solutions (PACS) is at the forefront of securing spaces with advanced, reliable access control solutions. From cutting-edge readers, credentials and controllers to mobile and biometric technologies, HID PACS empowers organizations worldwide to protect their people, property and assets with scalable, high-quality solutions. This is more than just a job – it’s your chance to join an industry leader to drive innovation in access control and make a real impact on global security solutions. Are you ready to make a difference? Join us and help shape the future of security. Are You Ready to Join the Team? Our company is committed to finding the best and the brightest talent to help us reach the top. If you are a dynamic, highly skilled, experienced Cloud engineer and technology enthusiast, and you enjoy working at a rapid pace within a rapidly growing business environment, then you will want to consider this position. If you excel at communication, collaboration, and unrelenting innovation, we want to talk to you. And if you bring dedication, positive energy and integrity to the table, you just might be the right fit for our team. Qualifications To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required.
Reasonable accommodation may be made to enable individuals with disabilities to perform the essential functions. Roles & Responsibilities (Other Duties May Be Assigned) Design, develop, and deploy robust & scalable AI/ML models in Production environments. Collaborate with business stakeholders to identify AI/ML opportunities and define measurable success metrics. Lead the complete software development lifecycle, from initial concept and requirements gathering to deployment and post-release support. Manage end-to-end development of software and applications, ensuring alignment with business and user needs. Design, develop, analyze, and deploy high-quality software solutions in a timely manner. Implement automated testing strategies and provide structured feedback to stakeholders throughout the development process. Integrate Multimodal Conversational AI platforms (MCP) including voice, vision, and text to deliver rich user interactions. Maintain and enhance existing software systems by diagnosing issues, implementing improvements, and verifying changes through testing. Ensure ongoing software performance, stability, and scalability through regular updates and upgrades post-deployment. Work closely with data engineers, software developers, and domain experts to integrate AI into existing systems. Conduct AI/ML training sessions for internal teams to improve AI literacy within the organization. Strong analytical and problem-solving mindset. Technical Requirements: Front-End Development: Proficiency in JavaScript/TypeScript, HTML5, CSS3, and experience with modern front-end frameworks and libraries such as Svelte or React. Proficiency in Python and hands-on experience in using ML frameworks (tensorflow, pytorch, scikit-learn, xgboost etc) Cloud-based AI/ML experience (AWS Sagemaker, Azure ML, GCP Vertex AI, etc.). Server-Side Development: Strong working knowledge of server-side programming languages, including Python and Node.js. Databases and Caching: Familiarity with relational and non-relational database systems such as SQL Server, Oracle, MySQL, and MongoDB, along with experience in caching technologies like Redis and Memcached. Solid understanding of AI/ML life cycle – Data preprocessing, feature engineering, model selection, training, validation and deployment. Artificial Intelligence / LLM: Exposure to AI and Large Language Model development using tools such as LangChain, prompt engineering techniques, and Retrieval-Augmented Generation (RAG). DevOps and CI/CD: Experience with DevOps practices, including tools and processes for continuous integration and continuous deployment (CI/CD). Security: Solid understanding of security best practices and principles related to web application development. Experience and/or Education Qualification: Graduation or master’s in computer science or information technology or AI/ML 3+ years of hands-on experience in AI/ML development/deployment and optimization Experience in LLM (RAG, prompt engineering etc) would be preferred Why apply? Empowerment: You’ll work as part of a global team in a flexible work environment, learning and enhancing your expertise. We welcome an opportunity to meet you and learn about your unique talents, skills, and experiences. You don’t need to check all the boxes. If you have most of the skills and experience, we want you to apply. Innovation: You embrace challenges and want to drive change. We are open to ideas, including flexible work arrangements, job sharing or part-time job seekers. 
Integrity: You are results-orientated, reliable, and straightforward and value being treated accordingly. We want all our employees to be themselves to feel appreciated and accepted. This opportunity may be open to flexible working arrangements. HID is an Equal Opportunity/Affirmative Action Employer – Minority/Female/Disability/Veteran/Gender Identity/Sexual Orientation. We make it easier for people to get where they want to go! On an average day, think of how many times you tap, twist, tag, push or swipe to get access, find information, connect with others or track something. HID technology is behind billions of interactions, in more than 100 countries. We help you create a verified, trusted identity that can get you where you need to go – without having to think about it. When you join our HID team, you’ll also be part of the ASSA ABLOY Group, the global leader in access solutions. You’ll have 63,000 colleagues in more than 70 different countries. We empower our people to build their career around their aspirations and our ambitions – supporting them with regular feedback, training, and development opportunities. Our colleagues think broadly about where they can make the most impact, and we encourage them to grow their role locally, regionally, or even internationally. As we welcome new people on board, it’s important to us to have diverse, inclusive teams, and we value different perspectives and experiences.
Posted 3 weeks ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
An Amazing Career Opportunity for AI/ML Engineer Location: Chennai, India (Hybrid) Job ID: 39582 Position Summary A rewarding career at HID Global beckons you! We are looking for an AI/ML Engineer , who is responsible for designing, developing, and deploying advanced AI/ML solutions to solve complex business challenges. This role requires expertise in machine learning, deep learning, MLOps, and AI model optimization , with a focus on building scalable, high-performance AI systems. As an AI/ML Engineer , you will work closely with data engineers, software developers, and business stakeholders to integrate AI-driven insights into real-world applications. You will be responsible for model development, system architecture, cloud deployment, and ensuring responsible AI adoption . We are a leading company in the trusted source for innovative HID Global Human Resources products, solutions and services that help millions of customers around the globe create, manage and use secure identities. Who are we? HID powers the trusted identities of the world’s people, places, and things, allowing people to transact safely, work productively and travel freely. We are a high-tech software company headquartered in Austin, TX, with over 4,000 worldwide employees. Check us out: www.hidglobal.com and https://youtu.be/23km5H4K9Eo LinkedIn: www.linkedin.com/company/hidglobal/mycompany/ About HID Global, Chennai HID Global powers the trusted identities of the world’s people, places and things. We make it possible for people to transact safely, work productively and travel freely. Our trusted identity solutions give people secure and convenient access to physical and digital places and connect things that can be accurately identified, verified and tracked digitally. Millions of people around the world use HID products and services to navigate their everyday lives, and over 2 billion things are connected through HID technology. We work with governments, educational institutions, hospitals, financial institutions, industrial businesses and some of the most innovative companies on the planet. Headquartered in Austin, Texas, HID Global has over 3,000 employees worldwide and operates international offices that support more than 100 countries. HID Global® is an ASSA ABLOY Group brand. For more information, visit www.hidglobal.com . HID Global has is the trusted source for secure identity solutions for millions of customers and users around the world. In India, we have two Engineering Centre (Bangalore and Chennai) over 200+ Engineering Staff. Global Engineering Team is based in Chennai and one of the Business Unit Engineering team is based in Bangalore. Physical Access Control Solutions (PACS) HID's Physical Access Control Solutions Business Area: HID PAC’s Business Unit focuses on the growth of new clients and existing clients where we leverage the latest card and reader technologies to solve the security challenges of our clients. Other areas of focus will include authentication, card sub systems, card encoding, Biometrics, location services and all other aspects of a physical access control infrastructure. Qualifications:- To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. 
Roles & Responsibilities: Design, develop, and deploy robust & scalable AI/ML models in Production environments. Collaborate with business stakeholders to identify AI/ML opportunities and define measurable success metrics. Design and build Retrieval-Augmented Generation (RAG) pipelines integrating vector stores, semantic search, and document parsing for domain-specific knowledge retrieval. Integrate Multimodal Conversational AI platforms (MCP) including voice, vision, and text to deliver rich user interactions. Drive innovation through PoCs, benchmarking, and experiments with emerging models and architectures. Optimize models for performance, latency and scalability. Build data pipelines and workflows to support model training and evaluation. Conduct research & experimentation on the state-of-the-art techniques (DL, NLP, Time series, CV) Partner with MLOps and DevOps teams to implement best practices in model monitoring, version and re-training. Lead code reviews, architecture discussions and mentor junior & peer engineers. Architect and implement end-to-end AI/ML pipelines, ensuring scalability and efficiency. Deploy models in cloud-based (AWS, Azure, GCP) or on-premises environments using tools like Docker, Kubernetes, TensorFlow Serving, or ONNX Ensure data integrity, quality, and preprocessing best practices for AI/ML model development. Ensure compliance with AI ethics guidelines, data privacy laws (GDPR, CCPA), and corporate AI governance. Work closely with data engineers, software developers, and domain experts to integrate AI into existing systems. Conduct AI/ML training sessions for internal teams to improve AI literacy within the organization. Strong analytical and problem solving mindset. Technical Requirements: Strong expertise in AI/ML engineering and software development. Strong experience with RAG architecture, vector databases Proficiency in Python and hands-on experience in using ML frameworks (tensorflow, pytorch, scikit-learn, xgboost etc) Familiarity with MCPs like Google Dialogflow, Rasa, Amazon Lex, or custom-built agents using LLM orchestration. Cloud-based AI/ML experience (AWS Sagemaker, Azure ML, GCP Vertex AI, etc.). Solid understanding of AI/ML life cycle – Data preprocessing, feature engineering, model selection, training, validation and deployment. Experience in production grade ML systems (Model serving, APIs, Pipelines) Familiarity with Data engineering tools (SPARK, Kafka, Airflow etc) Strong knowledge of statistical modeling, NLP, CV, Recommendation systems, Anomaly detection and time series forecasting. Hands-on in Software engineering with knowledge of version control, testing & CI/CD Hands-on experience in deploying ML models in production using Docker, Kubernetes, TensorFlow Serving, ONNX, and MLflow. Experience in MLOps & CI/CD for ML pipelines, including monitoring, retraining, and model drift detection. Proficiency in scaling AI solutions in cloud environments (AWS, Azure & GCP). Experience in data preprocessing, feature engineering, and dimensionality reduction. Exposure to Data privacy, Compliance and Secure ML practices Education and/or Experience: Graduation or master’s in computer science or information technology or AI/ML/Data science 3+ years of hands-on experience in AI/ML development/deployment and optimization Experience in leading AI/ML teams and mentoring junior engineers. Why apply? Empowerment: You’ll work as part of a global team in a flexible work environment, learning and enhancing your expertise. 
We welcome an opportunity to meet you and learn about your unique talents, skills, and experiences. You don’t need to check all the boxes. If you have most of the skills and experience, we want you to apply. Innovation: You embrace challenges and want to drive change. We are open to ideas, including flexible work arrangements, job sharing or part-time job seekers. Integrity: You are results-orientated, reliable, and straightforward and value being treated accordingly. We want all our employees to be themselves, to feel appreciated and accepted. This opportunity may be open to flexible working arrangements. HID is an Equal Opportunity/Affirmative Action Employer – Minority/Female/Disability/Veteran/Gender Identity/Sexual Orientation. We make it easier for people to get where they want to go! On an average day, think of how many times you tap, twist, tag, push or swipe to get access, find information, connect with others or track something. HID technology is behind billions of interactions, in more than 100 countries. We help you create a verified, trusted identity that can get you where you need to go – without having to think about it. When you join our HID team, you’ll also be part of the ASSA ABLOY Group, the global leader in access solutions. You’ll have 63,000 colleagues in more than 70 different countries. We empower our people to build their career around their aspirations and our ambitions – supporting them with regular feedback, training, and development opportunities. Our colleagues think broadly about where they can make the most impact, and we encourage them to grow their role locally, regionally, or even internationally. As we welcome new people on board, it’s important to us to have diverse, inclusive teams, and we value different perspectives and experiences.
Posted 3 weeks ago
3.0 - 5.0 years
5 - 7 Lacs
Mohali
Hybrid
We are seeking a forward-thinking AI Architect to design, lead, and scale enterprise-grade AI systems and solutions across domains. This role demands deep expertise in machine learning, generative AI, data engineering, cloud-native architecture, and orchestration frameworks. You will collaborate with cross-functional teams to translate business requirements into intelligent, production-ready AI solutions. Key Responsibilities: Architecture & Strategy Design end-to-end AI architectures that include data pipelines, model development, MLOps, and inference serving. Create scalable, reusable, and modular AI components for different use cases (vision, NLP, time series, etc.). Drive architecture decisions across AI solutions, including multi-modal models, LLMs, and agentic workflows. Ensure interoperability of AI systems across cloud (AWS/GCP/Azure), edge, and hybrid environments. Technical Leadership Guide teams in selecting appropriate models (traditional ML, deep learning, transformers, etc.) and technologies. Lead architectural reviews and ensure compliance with security, performance, and governance policies. Mentor engineering and data science teams in best practices for AI/ML, GenAI, and MLOps. Model Lifecycle & Engineering Oversee implementation of model lifecycle using CI/CD for ML (MLOps) and/or LLMOps workflows. Define architecture for Retrieval Augmented Generation (RAG), vector databases, embeddings, prompt engineering, etc. Design pipelines for fine-tuning, evaluation, monitoring, and retraining of models. Data & Infrastructure Collaborate with data engineers to ensure data quality, feature pipelines, and scalable data stores. Architect systems for synthetic data generation, augmentation, and real-time streaming inputs. Define solutions leveraging data lakes, data warehouses, and graph databases. Client Engagement / Product Integration Interface with business/product stakeholders to align AI strategy with KPIs. Collaborate with DevOps teams to integrate models into products via APIs/microservices. Required Skills & Experience: Core Skills Strong foundation in AI/ML/DL (Scikit-learn, TensorFlow, PyTorch, Transformers, Langchain, etc.) Advanced knowledge of Generative AI (LLMs, diffusion models, multimodal models, etc.) Proficiency in cloud-native architectures (AWS/GCP/Azure) and containerization (Docker, Kubernetes) Experience with orchestration frameworks (Airflow, Ray, LangGraph, or similar) Familiarity with vector databases (Weaviate, Pinecone, FAISS), LLMOps platforms, and RAG design Architecture & Programming Solid experience in architectural patterns (microservices, event-driven, serverless) Proficient in Python and optionally Java/Go Knowledge of APIs (REST, GraphQL), streaming (Kafka), and observability tooling (Prometheus, ELK, Grafana) Tools & Platforms ML lifecycle tools: MLflow, Kubeflow, Vertex AI, Sagemaker, Hugging Face, etc. Prompt orchestration tools: LangChain, CrewAI, Semantic Kernel, DSPy (nice to have) Knowledge of security, privacy, and compliance (GDPR, SOC2, HIPAA, etc.)
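To ground the RAG terminology used in the AI Architect listing above (vector databases, embeddings, retrieval), here is a minimal, hedged sketch of the retrieval step only. TF-IDF from scikit-learn stands in for learned embeddings and an in-memory index stands in for a vector database such as Weaviate, Pinecone, or FAISS; the sample documents are invented. A production RAG stack would add chunking, a real embedding model, and an LLM generation step with prompt templating.

```python
# Minimal sketch of the retrieval step in a RAG pipeline.
# TF-IDF vectors stand in for learned embeddings; the fitted matrix stands in for a vector DB.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "HID readers support mobile and biometric credentials for physical access.",
    "SageMaker pipelines automate model training, tuning, and deployment.",
    "Vector databases index embeddings for low-latency semantic search.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)  # the "index" in this toy example


def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k documents most similar to the query."""
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    return [documents[i] for i in ranked]


if __name__ == "__main__":
    # The retrieved passages would normally be stitched into an LLM prompt (the "G" in RAG).
    for passage in retrieve("How do I search embeddings quickly?"):
        print(passage)
```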
Posted 3 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, create marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com. Job Description Job description Experian is looking for a skilled Machine Learning Engineer to join our team of machine learning engineers in the Personalization and AI Services division. The ideal candidate will have expertise in Python, AI, machine learning, and AWS. You will design, develop, and deploy machine learning models to drive personalized experiences and AI-driven services. This role involves providing GEN AI Product capabilities, including Large Language Models (LLM) as a service, Knowledge Management Systems, Market Insights APIs, and Predictive Analytics APIs. Key Responsibilities Develop machine learning models and algorithms for personalized experiences and AI-driven services. Optimize machine learning pipelines using Python and relevant libraries (e.g., TensorFlow, PyTorch, scikit-learn). Deploy and manage models on AWS infrastructure. Provide GEN AI Product capabilities across teams, including LLM as a service, Knowledge Management Systems, Market Insights APIs, and Predictive Analytics APIs. Utilize AWS AI and ML services such as Amazon SageMaker, AWS Deep Learning AMIs, AWS Deep Learning Containers, AWS AI services (e.g., Amazon Rekognition, Amazon Comprehend, Amazon Lex), and Amazon Bedrock. Collaborate with data scientists, software engineers, and stakeholders to integrate solutions into production. Conduct data analysis and preprocessing to ensure high-quality input. Monitor and improve model performance based on feedback and new data. Stay updated with advancements in AI and machine learning technologies. Qualifications Qualifications AWS Machine learning Gen AI AWS Sagemaker AWS Bedrock Python Additional Information Our uniqueness is that we celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what matters; DEI, work/life balance, development, authenticity, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's people first approach is award-winning; World's Best Workplaces™ 2024 (Fortune Top 25), Great Place To Work™ in 24 countries, and Glassdoor Best Places to Work 2024 to name a few. Check out Experian Life on social or our Careers Site to understand why. Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is an important part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, colour, sexuality, physical ability or age. 
If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity. Experian Careers - Creating a better tomorrow together. Find out what it's like to work for Experian on our Careers Site.
Posted 3 weeks ago
9.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: AI/ML Engineer Location: Gurgaon (Hybrid) Experience: 4–9 Years Job Type: Full-time Job Summary: We are seeking a highly skilled and motivated AI/ML Engineer with strong AWS Cloud experience to join our data science and engineering team. The ideal candidate will design, build, and deploy scalable machine learning models and solutions while leveraging AWS services to manage infrastructure and model workflows. Key Responsibilities: Design, develop, and deploy machine learning models for predictive analytics, classification, NLP, or computer vision tasks. Use AWS services like SageMaker, S3, Lambda, Glue, EC2, EKS, and Step Functions for ML workflows and deployments. Preprocess and analyze large datasets for training and inference. Build scalable data pipelines and automate model training and evaluation. Collaborate with data engineers, scientists, and DevOps teams to productionize models. Optimize models for performance, interpretability, and scalability. Implement MLOps best practices (versioning, monitoring, model retraining pipelines). Conduct A/B testing and model performance tuning. Key Skills and Qualifications: Technical Skills: Strong proficiency in Python (including pandas, NumPy, scikit-learn, etc.) Solid understanding of ML algorithms (regression, classification, clustering, deep learning) Experience with TensorFlow, PyTorch, or Keras Hands-on experience with AWS cloud services: SageMaker, S3, Glue, Lambda, Step Functions, CloudWatch, IAM, etc. Experience with MLOps tools: MLflow, Docker, Git, CI/CD pipelines Knowledge of data pipeline frameworks (Airflow, AWS Glue, etc.) Familiarity with SQL and data wrangling in distributed environments (e.g., Spark) Nice to Have: Experience with NLP, LLMs, or Computer Vision Exposure to Big Data technologies (Kafka, Hadoop, etc.) Familiarity with API development using Flask or FastAPI Knowledge of Kubernetes for model container orchestration Educational Qualifications: Bachelor’s or Master’s in Computer Science, Data Science, Statistics, or related field.
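The listing above names MLflow among its MLOps tools. The following is a minimal, hedged sketch of experiment tracking with MLflow around a scikit-learn model; the dataset, experiment name, and hyperparameters are illustrative assumptions rather than details from the posting.

```python
# Minimal sketch: tracking a scikit-learn training run with MLflow.
# Assumes MLflow is installed and a local ./mlruns store is acceptable (no tracking server).
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

mlflow.set_experiment("demo-classifier")  # placeholder experiment name

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 5}  # illustrative hyperparameters
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_params(params)                  # record hyperparameters
    mlflow.log_metric("accuracy", accuracy)    # record the evaluation metric
    mlflow.sklearn.log_model(model, "model")   # version the trained artifact
```

Runs logged this way can then feed the versioning, monitoring, and retraining pipelines the posting describes.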
Posted 3 weeks ago
0 years
6 - 9 Lacs
Hyderābād
On-site
Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions – we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Consultant, Senior Data Scientist! In this role, you will have a strong background in Gen AI implementations, data engineering, developing ETL processes, and utilizing machine learning tools to extract insights and drive business decisions. The Data Scientist will be responsible for analysing large datasets, developing predictive models, and communicating findings to various stakeholders. Responsibilities Develop and maintain machine learning models to identify patterns and trends in large datasets. Utilize Gen AI and various LLMs to design & develop production-ready use cases. Collaborate with cross-functional teams to identify business problems and develop data-driven solutions. Communicate complex data findings and insights to non-technical stakeholders in a clear and concise manner. Continuously monitor and improve the performance of existing models and processes. Stay up to date with industry trends and advancements in data science and machine learning. Design and implement data models and ETL processes to extract, transform, and load data from various sources. Good hands-on experience in AWS Bedrock models, SageMaker, Lambda, etc. Data Exploration & Preparation – Conduct exploratory data analysis and clean large datasets for modeling. Business Strategy & Decision Making – Translate data insights into actionable business strategies. Mentor Junior Data Scientists – Provide guidance and expertise to junior team members. Collaborate with Cross-Functional Teams – Work with engineers, product managers, and stakeholders to align data solutions with business goals. Qualifications we seek in you! Minimum Qualifications Bachelor's / Master's degree in Computer Science, Statistics, Mathematics, or a related field. Relevant years of experience in a data science or analytics role. Strong proficiency in SQL and experience with data warehousing and ETL processes. Experience with programming languages such as Python & R is a must (either one). Familiarity with machine learning tools and libraries such as Pandas, scikit-learn and AI libraries. Excellent knowledge of Gen AI, RAG, LLM models & a strong understanding of prompt engineering. Proficiency in Azure OpenAI & AWS SageMaker implementation. Good understanding of statistical techniques and advanced machine learning. Experience with data warehousing and ETL processes.
Proficiency in SQL and database management. Familiarity with cloud-based data platforms such as AWS, Azure, or Google Cloud. Experience with Azure ML Studio is desirable. Knowledge of different machine learning algorithms and their applications. Familiarity with data preprocessing and feature engineering techniques. Preferred Qualifications/ Skills Experience with model evaluation and performance metrics. Understanding of deep learning and neural networks is a plus. Certified in AWS Machine learning , AWS Infra engineer is a plus Why join Genpact? Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation Make an impact – Drive change for global enterprises and solve business challenges that matter Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color , religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Consultant Primary Location India-Hyderabad Schedule Full-time Education Level Bachelor's / Graduation / Equivalent Job Posting Jul 7, 2025, 7:46:19 AM Unposting Date Ongoing Master Skills List Digital Job Category Full Time
Posted 3 weeks ago
8.0 years
0 Lacs
Hyderābād
On-site
We are looking for a Staff Engineer to lead the design, development, and optimization of AI-powered platforms with a strong focus on Python , API development , and AWS AI services . You will be instrumental in shaping system architecture, mentoring engineers, and driving end-to-end solutions that leverage NLP , cloud services , and modern frontend frameworks . As a Staff Engineer, you'll be a key technical leader partnering closely with product, design, and engineering teams to build scalable and intelligent systems. Key Responsibilities: Architect and build scalable, high-performance backend systems using Python Design robust RESTful APIs and guide the engineering team on best practices for performance and security Leverage AWS AI/ML services (e.g., Comprehend, Lex, SageMaker) to build intelligent features and capabilities Provide technical leadership on NLP solutions using libraries such as spaCy , transformers , or NLTK Ensure comprehensive unit testing across APIs and databases; advocate for clean, testable code Guide the development of full-stack features involving JavaScript , React , and Next.js Own and evolve system architecture, ensuring modularity, scalability, and resilience Promote strong engineering practices with Git , Bitbucket , and CI/CD tooling Collaborate cross-functionally to drive technical decisions aligned with product goals Mentor engineers across levels and foster a culture of technical excellence Technical Requirements: 8+ years of hands-on software development experience, primarily in Python Proven expertise in API development , system design, and performance tuning Strong background in AWS , particularly AI/ML and NLP services Experience building intelligent features using NLP frameworks Proficiency in front-end technologies : JavaScript, React, Next.js (preferred) Solid understanding of RDBMS (PostgreSQL, MySQL or similar) Expert in version control systems and collaborative workflows Track record of technical leadership, mentoring, and architectural ownership Preferred Qualifications: Experience with microservices , event-driven architectures , or serverless systems Familiarity with Docker , Kubernetes , and infrastructure-as-code tools Prior experience in leading cross-functional engineering initiatives
Posted 3 weeks ago
8.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Job Title: Data Science Manager Job Summary: We are looking for a seasoned Data Science Manager with a minimum of 8 years of experience in Data Science and Machine Learning to lead our AI team. The ideal candidate will have a strong background in NLP, Transformers, Generative AI, Responsible AI and MLOps, as well as experience in deploying AI solutions across various cloud platforms. This role requires a combination of technical acumen, leadership skills, and a strategic mindset to drive AI initiatives that align with our business objectives. Key Responsibilities: Leadership: Direct and support the AI team in developing and implementing advanced AI/ML solutions, ensuring projects align with the company's strategic goals. Project Management: Oversee the entire lifecycle of AI projects, from ideation to deployment, prioritizing tasks, and managing resources to meet deadlines and deliverables. Technical Mastery: Utilize expertise in Python, NLP, Transformers, and Generative AI to lead technical innovation and maintain high standards of excellence within the team. Cloud and MLOps Integration: Leverage cloud platforms such as AWS for deployment and incorporate MLOps practices to enhance model development, deployment, and monitoring. System Architecture: Architect scalable, reliable, and high-performing AI systems that integrate seamlessly with existing infrastructure. Stakeholder Collaboration: Engage with stakeholders to convert business challenges into analytical questions, fostering a data-driven culture and driving impactful solutions. Team Development: Mentor team members, promote professional growth, and cultivate a culture of innovation and continuous improvement. Ethical AI Implementation: Uphold ethical standards in AI practices, ensuring models are fair, transparent, and accountable. Effective Communication: Develop and deliver clear, concise presentations to communicate complex technical details to diverse audiences, facilitating understanding and buy-in. Requirements: Degree in Computer Science, Engineering, or a related technical field at the bachelor's or master's level; candidates with a Ph.D. will be given preference. At least 8 years of professional experience in the realm of Data Science and Machine Learning, including a leadership role with a documented history of success. Expertise in Python programming and experience with AI/ML libraries such as TensorFlow or PyTorch. Comprehensive understanding of Natural Language Processing (NLP) and hands-on experience with Transformer architectures. Proficient in leveraging AWS cloud platforms for the deployment of AI-driven solutions. Manage AWS cloud infrastructure and services including S3, EC2, Lambda, RDS, Glue, Sagemaker, EMR, and Step Functions to build and deploy scalable machine learning models. Solid grasp of MLOps concepts and practical experience with tools such as Docker, Kubernetes, and Git for operational efficiency. Capable of architecting and executing CI/CD workflows and managing infrastructure as code using tools like Terraform or AWS CloudFormation. Exceptional problem-solving abilities, analytical mindset, and effective communication skills. 
Established expertise with a minimum of 8 years in a technical capacity concentrating on AI/ML applications. Deep knowledge of AI/ML frameworks, NLP techniques, and Transformer-based model implementations. Proficiency in MLOps methodologies, including familiarity with LLMOps. Experience in developing APIs using frameworks such as FastAPI. Understanding of container orchestration using AWS EKS and familiarity with infrastructure frameworks like Bedrock. Oversee ethical AI practices, ensuring fairness, accountability, and transparency in AI systems. Expert in AI governance, implementing and monitoring responsible AI frameworks. Manage risk and compliance with regulatory and ethical standards for AI deployment. Champion ethical decision-making in AI development and operational processes. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 3 weeks ago
4.0 years
8 - 12 Lacs
India
On-site
Job Description We are seeking a skilled and passionate Machine Learning Engineer or AI Model Developer with a minimum of 4 years of hands-on experience in building, training, and deploying custom machine learning models. The ideal candidate is someone who thrives on solving real-world problems using custom-built AI models, rather than relying solely on pre-built solutions or third-party APIs. Natural Abilities Smart, self-motivated, responsible, out-of-the-box thinker. Detail-oriented with strong analytical skills. Great writing and communication skills. Requirements: 4+ years of experience designing, developing, and deploying custom machine learning models (not just integrating APIs). Strong proficiency in Python and ML libraries such as NumPy, pandas, scikit-learn, etc. Expertise in ML frameworks like TensorFlow, PyTorch, Keras, or equivalent. Solid understanding of ML algorithms, model evaluation techniques, and feature engineering. Experience in data preprocessing, model optimization, and hyperparameter tuning. Hands-on experience with real-world dataset training and fine-tuning. Experience in using Amazon SageMaker for model development, training, deployment, and monitoring. Familiarity with other cloud-based ML platforms (AWS, GCP, or Azure) is a plus. Responsibilities: Design, develop, and deploy custom machine learning models tailored to business use cases. Train, validate, and optimize models using real-world datasets and advanced techniques. Build scalable, production-ready ML pipelines from data ingestion to deployment. Leverage AWS SageMaker to streamline model training, testing, and deployment workflows. Work closely with product and engineering teams to integrate models into applications. Evaluate models using appropriate metrics and continuously improve performance. Maintain proper documentation of experiments, workflows, and outcomes. Stay up to date with the latest ML research, tools, and best practices. Job Types: Full-time, Permanent Pay: ₹800,000.00 - ₹1,200,000.00 per year Schedule: Day shift, Monday to Friday Experience: ML/DS: 5 years (Required) Location: Adajan, Surat, Gujarat (Preferred) Work Location: In person Expected Start Date: 20/07/2025
Posted 3 weeks ago