8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Test Lead with AI Skills

Key Responsibilities
Test Strategy & Leadership: Define end-to-end test plans for AI solutions (OCR, NLP, document automation), including functional, regression, UAT, and performance testing. Lead a team of QA engineers in Agile/Scrum environments.
AI Product Testing: Validate OCR accuracy (Google/Azure OCR), AI model outputs (layout parsing, data extraction), and NLP logic across diverse document types (invoices, contracts). Design tests for edge cases: low-quality scans, handwritten text, multi-language documents.
Automation & Tooling: Develop and maintain automated test scripts (Selenium, Cypress, pytest) for UI, API, and data validation. Integrate testing into CI/CD pipelines (Azure DevOps/Jenkins).
Quality Advocacy: Collaborate with AI engineers and BAs to identify risks, document defects, and ensure resolution. Report on test metrics (defect density, false positives, model drift).
Client-Focused Validation: Lead on-site/client UAT sessions for Professional Services deployments. Ensure solutions meet client SLAs (e.g., >95% extraction accuracy).

Required Skills & Experience
- Experience: 8+ years in software testing, including 3+ years testing AI/ML products (OCR, NLP, computer vision). Proven experience as a Test Lead managing teams (5+ members).
- Manual Testing: Deep understanding of AI testing nuances (training data bias, model drift, confidence scores).
- Test Automation: Proficiency in Python/Java, Selenium/Cypress, and API testing (Postman/RestAssured).
- AI Tools: Hands-on experience with Azure AI, Google Vision OCR, or similar.
- Databases: SQL/NoSQL (MongoDB) validation for data pipelines.
- Process & Methodology: Agile/Scrum, test planning, defect tracking (JIRA), and performance testing (JMeter/Locust). Knowledge of MLOps and testing practices for AI models.

Preferred Qualifications
- Experience with document-intensive domains (P2P, AP, insurance).
- Certifications: ISTQB Advanced, AWS/Azure QA, or AI testing certifications.
- Familiarity with GenAI testing (LLM validation, hallucination checks).
- Knowledge of containerization (Docker/Kubernetes).
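The >95% extraction-accuracy SLA above is the kind of check such a test lead would automate. A minimal sketch, assuming a field-level accuracy definition; the field names, documents, and batch-averaging rule are illustrative assumptions, not part of the posting:

```python
# Minimal sketch: field-level OCR extraction accuracy vs. an assumed 95% SLA.
# Field names and sample documents below are hypothetical.

def extraction_accuracy(expected: dict, extracted: dict) -> float:
    """Fraction of expected fields the OCR pipeline extracted correctly."""
    if not expected:
        return 1.0
    correct = sum(1 for k, v in expected.items() if extracted.get(k) == v)
    return correct / len(expected)

def meets_sla(accuracies: list, threshold: float = 0.95) -> bool:
    """SLA check over a batch of documents: mean accuracy >= threshold."""
    return sum(accuracies) / len(accuracies) >= threshold

golden = {"invoice_no": "INV-001", "total": "1250.00", "date": "2024-01-15"}
ocr_out = {"invoice_no": "INV-001", "total": "1250.00", "date": "2024-01-16"}
acc = extraction_accuracy(golden, ocr_out)  # 2 of 3 fields match
```

In a real suite this would live in pytest, with golden documents covering the edge cases the posting lists (low-quality scans, handwriting, multi-language text).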
Posted 1 month ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Test Lead with AI Skills

Key Responsibilities
Test Strategy & Leadership: Define end-to-end test plans for AI solutions (OCR, NLP, document automation), including functional, regression, UAT, and performance testing. Lead a team of QA engineers in Agile/Scrum environments.
AI Product Testing: Validate OCR accuracy (Google/Azure OCR), AI model outputs (layout parsing, data extraction), and NLP logic across diverse document types (invoices, contracts). Design tests for edge cases: low-quality scans, handwritten text, multi-language documents.
Automation & Tooling: Develop and maintain automated test scripts (Selenium, Cypress, pytest) for UI, API, and data validation. Integrate testing into CI/CD pipelines (Azure DevOps/Jenkins).
Quality Advocacy: Collaborate with AI engineers and BAs to identify risks, document defects, and ensure resolution. Report on test metrics (defect density, false positives, model drift).
Client-Focused Validation: Lead on-site/client UAT sessions for Professional Services deployments. Ensure solutions meet client SLAs (e.g., >95% extraction accuracy).

Required Skills & Experience
- Experience: 8+ years in software testing, including 3+ years testing AI/ML products (OCR, NLP, computer vision). Proven experience as a Test Lead managing teams (5+ members).
- Manual Testing: Deep understanding of AI testing nuances (training data bias, model drift, confidence scores).
- Test Automation: Proficiency in Python/Java, Selenium/Cypress, and API testing (Postman/RestAssured).
- AI Tools: Hands-on experience with Azure AI, Google Vision OCR, or similar.
- Databases: SQL/NoSQL (MongoDB) validation for data pipelines.
- Process & Methodology: Agile/Scrum, test planning, defect tracking (JIRA), and performance testing (JMeter/Locust). Knowledge of MLOps and testing practices for AI models.

Preferred Qualifications
- Experience with document-intensive domains (P2P, AP, insurance).
- Certifications: ISTQB Advanced, AWS/Azure QA, or AI testing certifications.
- Familiarity with GenAI testing (LLM validation, hallucination checks).
- Knowledge of containerization (Docker/Kubernetes).
Posted 1 month ago
3.0 - 4.0 years
4 - 6 Lacs
Ahmedabad
On-site
We are looking for a skilled Machine Learning Engineer with hands-on experience deploying models on Google Cloud Platform (GCP) using Vertex AI. This role involves enabling real-time and batch model inferencing based on specific business requirements, with a strong focus on production-grade ML deployments.

Experience: 3 to 4 years

Key Responsibilities:
* Deploy machine learning models on GCP using Vertex AI.
* Design and implement real-time and batch inference pipelines.
* Monitor model performance, detect drift, and manage the model lifecycle.
* Ensure adherence to model governance best practices and support MLOps workflows.
* Collaborate with cross-functional teams to support Credit Risk, Marketing, and Customer Service use cases, especially within the retail banking domain.
* Develop scalable and maintainable code in Python and SQL.
* Work with diverse datasets, perform feature engineering, and build, train, and fine-tune advanced predictive models.
* Contribute to model deployment in the lending space.

Required Skills & Experience:
* Strong expertise in Python and SQL.
* Proficiency with ML libraries and frameworks such as scikit-learn, pandas, NumPy, spaCy, and CatBoost.
* In-depth knowledge of GCP Vertex AI and ML pipeline orchestration.
* Experience with MLOps and model governance.
* Exposure to use cases in retail banking: Credit Risk, Marketing, and Customer Service.
* Experience working with structured and unstructured data.

Nice to Have:
* Prior experience deploying models in the lending domain.
* Understanding of regulatory considerations in financial services.

Job Type: Full-time
Schedule: Day shift
Work Location: In person
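The drift-monitoring responsibility above is commonly implemented with a population stability index (PSI) comparing training and serving score distributions. A minimal sketch, assuming equal-width bucketing and the conventional 0.2 alert threshold; the score values are invented for illustration:

```python
# Minimal sketch of data-drift detection via the Population Stability Index.
# Equal-width buckets and the 0.2 alert threshold are conventional choices,
# not requirements from this posting.
import math

def psi(expected, actual, bins=4):
    lo = min(expected + actual)
    hi = max(expected + actual)
    width = (hi - lo) / bins or 1.0

    def bucket_fracs(xs):
        counts = [0] * bins
        for x in xs:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # Small floor avoids log(0) for empty buckets.
        return [max(c / len(xs), 1e-6) for c in counts]

    e, a = bucket_fracs(expected), bucket_fracs(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train_scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
live_scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]   # unchanged
drifted = [0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 0.9, 0.85]   # shifted upward

stable_psi = psi(train_scores, live_scores)  # near zero: no drift
drift_psi = psi(train_scores, drifted)       # large: would trigger an alert
```

A production version would run this per feature on a schedule (e.g., inside a Vertex AI pipeline) and page when the index crosses the threshold.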
Posted 1 month ago
5.0 years
0 Lacs
India
On-site
We are seeking a highly skilled and experienced Senior AI Engineer to lead the design, development, and deployment of cutting-edge AI/ML solutions across our products and platforms. As a senior member of the team, you will be responsible for solving complex problems using AI technologies, mentoring junior engineers, and collaborating cross-functionally with product, data, and engineering teams to deliver scalable and impactful AI-driven features.

Key Responsibilities:
- Design and develop advanced machine learning models and deep learning architectures for real-world problems.
- Lead AI/ML system design, including data pipelines, model training workflows, evaluation strategies, and deployment frameworks.
- Conduct research and experimentation with new algorithms, tools, and technologies to enhance the company's AI capabilities.
- Collaborate with product managers, data scientists, and software engineers to translate business requirements into AI solutions.
- Mentor and guide junior AI engineers and data scientists, fostering best practices in coding, experimentation, and model deployment.
- Optimize and monitor model performance in production, including addressing issues of bias, drift, and latency.
- Stay current with AI research, tools, and industry trends, and bring innovative ideas into the team.
- Contribute to technical strategy and help define the roadmap for AI initiatives.

Requirements
- Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field (Ph.D. preferred).
- 5+ years of experience in AI/ML engineering with a strong portfolio of production-level projects.
- Deep knowledge of machine learning and deep learning algorithms (e.g., CNNs, RNNs, Transformers, LLMs).
- Proficient in Python and libraries such as TensorFlow, PyTorch, scikit-learn, and Hugging Face.
- Strong experience with MLOps tools (e.g., MLflow, Kubeflow, Airflow) and cloud platforms (e.g., AWS, GCP, Azure).
- Experience in deploying and maintaining models in production at scale.
- Strong understanding of data structures, algorithms, and software engineering principles.
- Excellent problem-solving skills and communication abilities.
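The evaluation strategies mentioned in the responsibilities above start from the basic classification metrics. A minimal sketch computed from raw predictions; the label vectors are made up for illustration:

```python
# Minimal sketch: precision, recall, and F1 for a binary classifier,
# computed directly from predicted vs. true labels. Labels are illustrative.

def precision_recall_f1(y_true, y_pred, positive=1):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 1, 1, 0, 0, 1]
p, r, f1 = precision_recall_f1(y_true, y_pred)  # each 0.75 here
```

In practice these would come from scikit-learn's `precision_recall_fscore_support`; writing them out once clarifies what a production dashboard is actually reporting.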
Posted 1 month ago
1.0 years
0 Lacs
India
Remote
Who We Are
We're Redis. We built the product that runs the fast apps our world runs on. (If you checked the weather, used your credit card, or looked at your flight status online today, you're welcome.) At Redis, you'll work with the fastest, simplest technology in the business—whether you're building it, telling its story, or selling it to our 10,000+ worldwide customers. We're creating a faster world with simpler experiences. You in?

About the Role
In this remote-based position, you will play a key role in our India SDR/Sales teams by prospecting into predefined named accounts as well as following up on inbound leads to create a quality pipeline of Sales Qualified Opportunities. You will work in close partnership with our sales teams, SDRs globally, and our marketing team. This position requires strong business acumen with the ability to quickly assess and align customer interest to products and use cases. Our ideal candidate must possess the aptitude to easily engage and establish rapport with prospects and colleagues cross-functionally, and most of all be hungry, creative, and driven to generate a quality pipeline of SQLs. This is a position of growth and learning in the areas of sales and marketing; being coachable is a must! You will join the Corporate SDR team and be a member of the APAC Sales Team, working alongside Account Executives and reporting directly to the APAC SDR Manager. This is a rare opportunity to help define a growing sales team in a well-funded SaaS startup with great customer traction.

What You'll Do
- Identify and nurture leads to generate opportunities. Find and cultivate a list of contacts and promote basic discovery to obtain meeting commitments that build a quality pipeline for our Sales Team.
- Conduct outbound engagements (combined prospecting via email, phone, and social) to ideal customer profiles.
- Execute established best practices and committed engagement cadences on time and on message, using Outreach, Salesforce, Drift, and other tools.
- Enrich and research MQLs using resources including SFDC, LinkedIn, and ZoomInfo.
- Determine first-call qualification on defined inbound MQLs.
- Manage and deliver on activities respective to KPIs.
- Hand off qualified opportunities to Account Executives.
- Stay updated on industry trends to tailor outreach for maximum impact.

What You'll Need to Have
- A minimum of 1 year of experience on an Inside Sales/SDR team, ideally in cloud solution selling.
- Experience managing the India market.
- Prior experience with databases, infrastructure software, or SaaS offerings.
- Passion for technology and excitement about the future!
- Excellent communication, verbally and in writing.
- You like to win, and you play both tough and fair to get there. You understand what it means to compete as part of a team.
- You must be based in India.

We Give Back to Our Employees
Our culture is what makes Redis a fun and rewarding place to work. To support you at work and beyond, we offer all our team members fantastic benefits and perks:
- Competitive salaries and equity grants
- Flexible vacation time to promote a healthy work-life balance
- Flexible working options
- Yearly health and wellness budget for a healthy mind and body
- Frequent team celebrations and recreation events
- Learning and development opportunities
- Ability to influence a high-performance company on its way to IPO

As a global company, we value a culture of curiosity, diversity of thought, and innovation from our employees, customers, and partners. Redis is committed to a diverse and inclusive work environment where all employees' differences are celebrated and supported, and everyone feels safe to bring their authentic selves to work.
Redis is dedicated to equal employment opportunities regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender identity, gender expression, Veteran status, or any other classification protected by federal, state, or local law. We strive to create a workplace where every voice is heard, and every idea is respected. Redis is committed to working with and providing access and reasonable accommodation to applicants with mental and/or physical disabilities. If you think you may require accommodations for any part of the recruitment process, please send a request to recruiting@redis.com. All requests for accommodations are treated discreetly and confidentially, as practical and permitted by law. Redis reserves the right to retain data longer than stated in the privacy policy in order to evaluate candidates.
Posted 1 month ago
1.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Job Title: ABM & Personalization Executive (Entry-Level)
Location: [Noida / Hybrid / Remote]
Experience: 0–1 year (internship or project-based exposure acceptable)
Department: Marketing / Growth

About the Role
We're looking for a digitally curious and execution-focused professional to join our growth team. If you're someone who loves automating smart outreach, researching target accounts, and crafting personalized experiences using tools like Clay, Drift, LinkedIn, and outbound automation, we'd love to hear from you!

🔧 Key Responsibilities
- Assist in identifying and enriching target accounts using Clay, LinkedIn, and other data tools
- Create personalized messaging for email and LinkedIn outreach using AI or internal templates
- Support multi-touch ABM campaigns using tools like Apollo, Instantly, or HubSpot
- Coordinate with the sales team to align messaging and track account engagement
- Manage Drift chatbot logic for high-value website visitors and help optimize conversations
- Continuously update CRM records and campaign spreadsheets with engagement data
- Analyze campaign performance (open rates, clicks, reply rates, chat conversions) and suggest optimizations

⚙️ Tools You Might Work With
(Prior knowledge is a bonus, not mandatory. We will train the right candidate.)
- Clay (for data enrichment and personalization)
- Drift or other chat-based tools (for real-time engagement)
- Apollo / Instantly / Smartlead (for outbound email campaigns)
- HubSpot / Zoho CRM / Salesforce
- LinkedIn Sales Navigator, Zapier, ChatGPT

🎓 Who Should Apply
- Fresh graduates or early professionals with a strong interest in B2B marketing or growth roles
- Experience in internships, live projects, or side hustles related to B2B growth/lead gen is a plus
- Analytical, curious, and detail-oriented mindset
- Excellent written communication skills and a knack for personalization
- Eagerness to learn no-code tools and marketing automation platforms

🌱 What You'll Gain
- Hands-on exposure to modern, tech-enabled ABM workflows
- Learning opportunities across sales, marketing, automation, and AI
- Direct mentorship from growth leaders
- Fast-paced growth environment with ownership and creativity
Posted 1 month ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking a Senior Data Quality Engineer to join our innovative team, where you will drive excellence in database testing, performance optimization, and test automation frameworks. You will leverage advanced Python scripting and database expertise to ensure data integrity and optimize SQL transactions for scalability while working within cutting-edge AI/ML-driven environments.

Responsibilities
- Develop robust Python-based test frameworks for SQL validation, ETL verification, and stored procedure unit testing
- Automate data-driven testing with tools like pytest, Hypothesis, pandas, and tSQLt
- Implement AI/ML models for detecting anomalous behaviors in SQL transactions and for test case generation to cover edge scenarios
- Train machine learning models to predict slow queries and optimize database performance through indexing strategies
- Validate stored procedures, triggers, views, and business rules for consistency and accuracy
- Apply performance benchmarking with JMeter, SQLAlchemy, and AI-driven anomaly detection methods
- Conduct data drift detection to analyze and compare staging vs. production environments
- Automate database schema validations using tools such as Liquibase or Flyway in CI/CD workflows
- Integrate Python test scripts into CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps)
- Design mock database environments to support automated regression testing for complex architectures
- Collaborate with cross-functional teams to develop scalable and efficient data quality solutions

Requirements
- 5+ years of working experience in data quality engineering or similar roles
- Proficiency in SQL Server, T-SQL, stored procedures, indexing, and execution plans, with a strong foundation in query performance tuning and optimization strategies
- Background in ETL validation, data reconciliation, and business logic testing for complex datasets
- Skills in Python programming for test automation, data validation, and anomaly detection, with hands-on expertise in pytest, pandas, NumPy, and SQLAlchemy
- Familiarity with frameworks like Great Expectations for developing comprehensive validation processes
- Competency in integrating automated test scripts into CI/CD environments such as Jenkins, GitHub Actions, and Azure DevOps
- Experience with tools like Liquibase or Flyway for schema validation and database migration testing
- Understanding of AI/ML-driven methods for database testing and optimization

Nice to have
- Knowledge of JMeter or similar performance testing tools for SQL benchmarking
- Background in AI-based techniques for detecting data drift or training predictive models
- Expertise in mock database design for highly scalable architectures
- Familiarity with dynamic edge-case testing using AI-based test case generation
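The SQL-validation and mock-database work described above can be prototyped against an in-memory SQLite database before targeting SQL Server; the table and business rule below are invented purely for illustration:

```python
# Minimal sketch: validating a business rule against a mock database.
# SQLite stands in for SQL Server here; the table and rule are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?)",
    [(1, 120.0, "paid"), (2, -5.0, "open"), (3, 300.0, "paid")],
)

# Business rule under test: no invoice may carry a negative amount.
# A pytest suite would assert this query returns no rows.
violations = conn.execute("SELECT id FROM invoices WHERE amount < 0").fetchall()
```

The same pattern scales up: each business rule becomes a query whose result set must be empty, which is essentially what frameworks like Great Expectations automate.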
Posted 1 month ago
5.0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
We are seeking a Senior Data Quality Engineer to join our innovative team, where you will drive excellence in database testing, performance optimization, and test automation frameworks. You will leverage advanced Python scripting and database expertise to ensure data integrity and optimize SQL transactions for scalability while working within cutting-edge AI/ML-driven environments.

Responsibilities
- Develop robust Python-based test frameworks for SQL validation, ETL verification, and stored procedure unit testing
- Automate data-driven testing with tools like pytest, Hypothesis, pandas, and tSQLt
- Implement AI/ML models for detecting anomalous behaviors in SQL transactions and for test case generation to cover edge scenarios
- Train machine learning models to predict slow queries and optimize database performance through indexing strategies
- Validate stored procedures, triggers, views, and business rules for consistency and accuracy
- Apply performance benchmarking with JMeter, SQLAlchemy, and AI-driven anomaly detection methods
- Conduct data drift detection to analyze and compare staging vs. production environments
- Automate database schema validations using tools such as Liquibase or Flyway in CI/CD workflows
- Integrate Python test scripts into CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps)
- Design mock database environments to support automated regression testing for complex architectures
- Collaborate with cross-functional teams to develop scalable and efficient data quality solutions

Requirements
- 5+ years of working experience in data quality engineering or similar roles
- Proficiency in SQL Server, T-SQL, stored procedures, indexing, and execution plans, with a strong foundation in query performance tuning and optimization strategies
- Background in ETL validation, data reconciliation, and business logic testing for complex datasets
- Skills in Python programming for test automation, data validation, and anomaly detection, with hands-on expertise in pytest, pandas, NumPy, and SQLAlchemy
- Familiarity with frameworks like Great Expectations for developing comprehensive validation processes
- Competency in integrating automated test scripts into CI/CD environments such as Jenkins, GitHub Actions, and Azure DevOps
- Experience with tools like Liquibase or Flyway for schema validation and database migration testing
- Understanding of AI/ML-driven methods for database testing and optimization

Nice to have
- Knowledge of JMeter or similar performance testing tools for SQL benchmarking
- Background in AI-based techniques for detecting data drift or training predictive models
- Expertise in mock database design for highly scalable architectures
- Familiarity with dynamic edge-case testing using AI-based test case generation
Posted 1 month ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Responsibilities:
- Evaluate and source appropriate cloud infrastructure solutions for machine learning needs, ensuring cost-effectiveness and scalability based on project requirements.
- Automate and manage the deployment of machine learning models into production environments, ensuring version control for models and datasets using tools like Docker and Kubernetes.
- Set up monitoring tools to track model performance and data drift, conduct regular maintenance, and implement updates for production models.
- Work closely with data scientists, software engineers, and stakeholders to align on project goals, facilitate knowledge sharing, and communicate findings and updates to cross-functional teams.
- Design, implement, and maintain scalable ML infrastructure, optimizing cloud and on-premise resources for training and inference.
- Document ML processes, pipelines, and best practices while preparing reports on model performance, resource utilization, and system issues.
- Provide training and support for team members on MLOps tools and methodologies, and stay updated on industry trends and emerging technologies.
- Diagnose and resolve issues related to model performance, infrastructure, and data quality, implementing solutions to enhance model robustness and reliability.

Education, Technical Skills & Other Critical Requirements:
- 10+ years of relevant experience in AI/analytics product and solution delivery.
- Bachelor's/Master's degree in Information Technology, Computer Science, Engineering, or an equivalent field, or equivalent experience.
- Proficiency in frameworks such as TensorFlow, PyTorch, or scikit-learn.
- Strong skills in Python and/or R; familiarity with Java, Scala, or Go is a plus.
- Experience with cloud services such as AWS, Azure, or Google Cloud Platform, particularly their ML services (e.g., AWS SageMaker, Azure ML).
- CI/CD tools (e.g., Jenkins, GitLab CI), containerization (e.g., Docker), and orchestration (e.g., Kubernetes).
- Experience with databases (SQL and NoSQL), data pipelines, ETL processes, and ML pipeline orchestration (Airflow).
- Familiarity with monitoring and logging tools such as Prometheus, Grafana, or the ELK stack.
- Proficient in using Git for version control.
- Strong analytical and troubleshooting abilities to diagnose and resolve issues effectively.
- Good communication skills for working with cross-functional teams and conveying technical concepts to non-technical stakeholders.
- Ability to manage multiple projects and prioritize tasks in a fast-paced environment.
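The model-performance monitoring responsibility above often reduces to a windowed accuracy check against a baseline. A minimal sketch; the window size and 0.05 tolerance are illustrative choices, and a real system would emit these metrics to Prometheus/Grafana rather than a Python flag:

```python
# Minimal sketch: flag degradation when windowed model accuracy drops below
# a baseline minus a tolerance. Window and tolerance values are assumptions.
from collections import deque

class AccuracyMonitor:
    def __init__(self, baseline, window=100, tolerance=0.05):
        self.baseline = baseline
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct):
        self.outcomes.append(1 if correct else 0)

    def degraded(self):
        if not self.outcomes:
            return False
        acc = sum(self.outcomes) / len(self.outcomes)
        return acc < self.baseline - self.tolerance

monitor = AccuracyMonitor(baseline=0.90, window=10)
for correct in [True] * 9 + [False]:
    monitor.record(correct)        # window at 90% accuracy
ok_state = monitor.degraded()      # within tolerance
for correct in [False] * 5:
    monitor.record(correct)        # rolling window drops to 50%
bad_state = monitor.degraded()     # now below baseline - tolerance
```

The same rolling-window shape works for drift statistics or latency percentiles; only the metric and threshold change.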
Posted 1 month ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
🚨 We are Hiring 🚨
https://grhombustech.com/jobs/job-description-senior-test-automation-lead-playwright-ai-ml-focus/

Job Title: Senior Test Automation Lead – Playwright (AI/ML Focus)
Location: Hyderabad
Experience: 10 - 12 years
Job Type: Full-Time

Company Overview:
GRhombus Technologies Pvt Ltd is a pioneer in software solutions, especially test automation, cyber security, full stack development, DevOps, Salesforce, performance testing, and manual testing. GRhombus delivery centres are located in India at Hyderabad, Chennai, Bengaluru, and Pune. In the Middle East, we are located in Dubai. Our partner offices are located in the USA and the Netherlands.

About the Role:
We are seeking a passionate and technically skilled Senior Test Automation Lead with deep experience in Playwright-based frameworks and a solid understanding of AI/ML-driven applications. In this role, you will lead the automation strategy and quality engineering practices for next-generation AI products that integrate large-scale machine learning models, data pipelines, and dynamic, intelligent UIs. You will define, architect, and implement scalable automation solutions across AI-enhanced features such as recommendation engines, conversational UIs, real-time analytics, and predictive workflows, ensuring both functional correctness and intelligent behavior consistency.

Key Responsibilities:
Test Automation Framework Design & Implementation
- Design and implement robust, modular, and extensible Playwright automation frameworks using TypeScript/JavaScript.
- Define automation design patterns and utilities that can handle complex AI-driven UI behaviors (e.g., dynamic content, personalization, chat interfaces).
- Implement abstraction layers for easy test data handling, reusable components, and multi-browser/platform execution.
AI/ML-Specific Testing Strategy
- Partner with Data Scientists and ML Engineers to understand model behaviors, inference workflows, and output formats.
- Develop strategies for testing non-deterministic model outputs (e.g., chat responses, classification labels) using tolerance ranges, confidence intervals, or golden datasets.
- Design tests to validate ML integration points: REST/gRPC API calls, feature flags, model versioning, and output accuracy.
- Include bias, fairness, and edge-case validations in test suites where applicable (e.g., fairness in recommendation engines or NLP sentiment analysis).
End-to-End Test Coverage
- Lead the implementation of end-to-end automation for web interfaces (React, Angular, or other SPA frameworks), backend services (REST, GraphQL, WebSockets), and ML model integration endpoints (real-time inference APIs, batch pipelines).
- Build test utilities for mocking, stubbing, and simulating AI inputs and datasets.
CI/CD & Tooling Integration
- Integrate automation suites into CI/CD pipelines using GitHub Actions, Jenkins, GitLab CI, or similar.
- Configure parallel execution, containerized test environments (e.g., Docker), and test artifact management.
- Establish real-time dashboards and historical reporting using tools like Allure, ReportPortal, TestRail, or custom Grafana integrations.
Quality Engineering & Leadership
- Define KPIs and QA metrics for AI/ML product quality: functional accuracy, model regression rates, test coverage %, time-to-feedback, etc.
- Lead and mentor a team of automation and QA engineers across multiple projects.
- Act as the Quality Champion across the AI platform by influencing engineering, product, and data science teams on quality ownership and testing best practices.
Agile & Cross-Functional Collaboration
- Work in Agile/Scrum teams; participate in backlog grooming, sprint planning, and retrospectives.
- Collaborate across disciplines (frontend, backend, DevOps, MLOps, and product management) to ensure complete testability.
- Review feature specs, AI/ML model update notes, and data schemas for impact analysis.

Required Skills and Qualifications:
Technical Skills:
- Strong hands-on expertise with Playwright (TypeScript/JavaScript).
- Experience building custom automation frameworks and utilities from scratch.
- Proficiency in testing AI/ML-integrated applications: inference endpoints, personalization engines, chatbots, or predictive dashboards.
- Solid knowledge of HTTP protocols and API testing (Postman, Supertest, RestAssured).
- Familiarity with MLOps and model lifecycle management (e.g., via MLflow, SageMaker, Vertex AI).
- Experience in testing data pipelines (ETL, streaming, batch), synthetic data generation, and test data versioning.
Domain Knowledge:
- Exposure to NLP, CV, recommendation engines, time-series forecasting, or tabular ML models.
- Understanding of key ML metrics (precision, recall, F1-score, AUC), model drift, and concept drift.
- Knowledge of bias/fairness auditing, especially in UI/UX contexts where AI decisions are shown to users.
Leadership & Communication:
- Proven experience leading QA/automation teams (4+ engineers).
- Strong documentation, code review, and stakeholder communication skills.
- Experience collaborating in Agile/SAFe environments with cross-functional teams.
Preferred Qualifications:
- Experience with AI explainability frameworks like LIME, SHAP, or the What-If Tool.
- Familiarity with test data management platforms (e.g., Tonic.ai, Delphix) for ML training/inference data.
- Background in performance and load testing for AI systems using tools like Locust, JMeter, or k6.
- Experience with GraphQL, Kafka, or event-driven architecture testing.
- QA certifications (ISTQB, Certified Selenium Engineer) or cloud certifications (AWS, GCP, Azure).
Education:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related technical discipline.
- Bonus for certifications or formal training in Machine Learning, Data Science, or MLOps.

Why Join Us?
At GRhombus, we are redefining quality assurance and software testing with cutting-edge methodologies and a commitment to innovation. As a test automation lead, you will play a pivotal role in shaping the future of automated testing, optimizing frameworks, and driving efficiency across our engineering ecosystem. Be part of a workplace that values experimentation, learning, and professional growth. Contribute to an organisation where your ideas drive innovation and make a tangible impact.
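The golden-dataset strategy for non-deterministic chat outputs mentioned in this posting can be illustrated with a similarity threshold. The stdlib `difflib` ratio and the 0.6 cutoff below are stand-ins for whatever embedding or metric a real suite would use, and the strings are invented (the posting's own stack is Playwright/TypeScript; Python keeps the sketch compact):

```python
# Minimal sketch: tolerance-based check of a non-deterministic model reply
# against a golden answer. Threshold and example strings are illustrative.
from difflib import SequenceMatcher

def similar_enough(golden, actual, threshold=0.6):
    ratio = SequenceMatcher(None, golden.lower(), actual.lower()).ratio()
    return ratio >= threshold

golden = "Your order ships within 2 business days."
reply_ok = "Your order will ship within 2 business days."  # paraphrase: pass
reply_bad = "I cannot help with that request."             # off-topic: fail

ok = similar_enough(golden, reply_ok)
bad = similar_enough(golden, reply_bad)
```

The design choice is the point: the test pins down acceptable variation instead of an exact string, which is what makes chat and classification outputs testable at all.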
Posted 1 month ago
4.0 years
0 Lacs
India
Remote
About the Role We are hiring an AI/ML Engineer based in India. You will help design, develop, optimize, and deploy a multimodal AI models for eye disease screening using image and tabular/textual data. You will collaborate closely with AI researchers, engineers, and product teams to build and translate cutting-edge models. This is an opportunity to work at the frontier of clinical AI with purpose-driven colleagues and powerful social impact. Job Type: Full-Time, Remote Start Date: Immediate Compensation: 35-50 lakh INR, commensurate with experience Responsibilities Design and implement multimodal deep learning models combining image encoders and language models. Train, fine-tune, and optimize models using annotated eye images and structured clinical data. Implement instruction-tuned outputs for diagnosis, referral decisions, and patient counseling in English, Hindi, and Tamil. Optimize inference performance using quantization, pruning, and model distillation for deployment on smartphones or edge devices. Work with mobile and backend engineers to integrate models with our telemedicine app and cloud-based infrastructure. Contribute to model evaluation across clinical sites using real-world patient data to measure accuracy, bias, and latency. Support development of responsible AI pipelines: privacy, bias mitigation, versioning, and drift detection. Qualifications Must-Have: Bachelor’s or Master’s degree in Computer Science, AI, Biomedical Engineering, or related field. 4+ years of experience with deep learning framework (Master’s degree can substitute for experience). Hands-on experience training or fine-tuning transformer models (e.g., LLaMA, T5, GPT). Hands-on experience working with vision models (CNNs, ViTs) Fluency with version control (Git), collaborative workflows, and cloud-based development. Passion for using AI in global health or social impact domains. Preferred: Experience with multimodal fusion techniques (cross-attention, late fusion, MLP). 
Experience implementing or optimizing Retrieval-Augmented Generation (RAG) pipelines for domain-specific applications (e.g., medical QA, knowledge-grounded generation). Experience working with multilingual NLP/NLG (especially Hindi and Tamil). Prior work with model optimization for edge deployment using ONNX, Core ML, TensorFlow Lite, or quantized PyTorch models, or knowledge of hybrid cloud/on-device design. Proven ability to mentor junior engineers and foster a culture of technical growth and collaboration. Strong written and verbal communication skills with cross-functional stakeholders (e.g., product, clinical, design). Why Join Us Work on a mission-driven project with real clinical impact across underserved communities in India. Flexibility to work remotely while contributing to a globally recognized AI/health project. (Note: Must be available for 2–3 regularly scheduled Zoom meetings per week and one in-person week per year.) Join a startup-minded team with stable long-term partnerships and funding. Apply here: https://www.visilant.org/careers/aiml-engineer-india-job-post Note: Applications through LinkedIn will not be reviewed. About Visilant: Visilant is a digital health social enterprise spun out of Johns Hopkins University. Visilant builds smartphone-based imaging, telemedicine, and artificial intelligence to empower non-eye care specialists to screen patients for leading causes of blindness. Visilant has already screened over 30,000 patients in partnership with the largest eye care systems in India.
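For context on the edge-optimization bullet above: the core idea behind post-training quantization (one of the techniques listed) can be sketched with NumPy alone. This is an illustrative toy, not Visilant's actual deployment pipeline; real work would use the quantized-PyTorch/ONNX tooling the posting names.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Affine (asymmetric) post-training quantization of a float32 tensor to int8."""
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 or 1.0      # guard against constant tensors
    zero_point = round(-w_min / scale) - 128    # maps w_min near -128
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)
# per-element reconstruction error is bounded by roughly one quantization step
assert float(np.max(np.abs(w - w_hat))) <= s
```

The 4x memory saving (float32 to int8) is what makes smartphone and edge deployment feasible; frameworks add per-channel scales and calibration on top of this basic scheme.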
Posted 1 month ago
5.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Job Title: MLOps Engineer Location: [Insert Location – e.g., Gurugram / Remote / On-site] Experience: 2–5 years Type: Full-Time Key Responsibilities: Design, develop, and maintain end-to-end MLOps pipelines for seamless deployment and monitoring of ML models. Implement and manage CI/CD workflows using modern tools (e.g., GitHub Actions, Azure DevOps, Jenkins). Orchestrate ML services using Kubernetes for scalable and reliable deployments. Develop and maintain FastAPI-based microservices to serve machine learning models via RESTful APIs. Collaborate with data scientists and ML engineers to productionize models in Azure and AWS cloud environments. Automate infrastructure provisioning and configuration using Infrastructure-as-Code (IaC) tools. Ensure observability, logging, monitoring, and model drift detection in deployed solutions. Required Skills: Strong proficiency in Kubernetes for container orchestration. Experience with CI/CD pipelines and tools like Jenkins, GitHub Actions, or Azure DevOps. Hands-on experience with FastAPI for developing ML-serving APIs. Proficient in deploying ML workflows on Azure and AWS. Knowledge of containerization (Docker optional, if used during local development). Familiarity with model versioning, reproducibility, and experiment tracking tools (e.g., MLflow, DVC). Strong scripting skills (Python, Bash). Preferred Qualifications: B.Tech/M.Tech in Computer Science, Data Engineering, or related fields. Experience with Terraform, Helm, or other IaC tools. Understanding of DevOps practices and security in ML workflows. Good communication skills and a collaborative mindset.
Posted 1 month ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking a Senior Data Quality Engineer to join our innovative team, where you will drive excellence in database testing, performance optimization, and test automation frameworks. You will leverage advanced Python scripting and database expertise to ensure data integrity and optimize SQL transactions for scalability while working within cutting-edge AI/ML-driven environments. Responsibilities Develop robust Python-based test frameworks for SQL validation, ETL verification, and stored procedure unit testing Automate data-driven testing with tools like pytest, Hypothesis, pandas, and tSQLt Implement AI/ML models for detecting anomalous behaviors in SQL transactions and for test case generation to cover edge scenarios Train Machine Learning models to predict slow queries and optimize database performance through indexing strategies Validate stored procedures, triggers, views, and business rules for consistency and accuracy Apply performance benchmarking with JMeter, SQLAlchemy, and AI-driven anomaly detection methods Conduct data drift detection to analyze and compare staging vs production environments Automate database schema validations using tools such as Liquibase or Flyway in CI/CD workflows Integrate Python test scripts into CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps) Design mock database environments to support automated regression testing for complex architectures Collaborate with cross-functional teams to develop scalable and efficient data quality solutions Requirements 5+ years of working experience in data quality engineering or similar roles Proficiency in SQL Server, T-SQL, stored procedures, indexing, and execution plans with a strong foundation in query performance tuning and optimization strategies Background in ETL validation, data reconciliation, and business logic testing for complex datasets Skills in Python programming for test automation, data validation, and anomaly detection with hands-on expertise in pytest, pandas, NumPy, 
and SQLAlchemy Familiarity with frameworks like Great Expectations for developing comprehensive validation processes Competency in integrating automated test scripts into CI/CD environments such as Jenkins, GitHub Actions, and Azure DevOps Demonstrated use of tools like Liquibase or Flyway for schema validation and database migration testing Understanding of implementing AI/ML-driven methods for database testing and optimization Nice to have Knowledge of JMeter or similar performance testing tools for SQL benchmarking Background in AI-based techniques for detecting data drift or training predictive models Expertise in mock database design for highly scalable architectures Familiarity with handling dynamic edge case testing using AI-based test case generation
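The SQL-validation and ETL-reconciliation responsibilities above can be made concrete with a small stdlib sketch: here sqlite3 stands in for SQL Server, and the table names are invented for illustration. In practice these checks would live in pytest test functions wired into the CI/CD pipeline.

```python
import sqlite3

def validate_no_nulls(conn, table: str, column: str) -> bool:
    """Fail if any NULLs slipped into a required column."""
    (n,) = conn.execute(f"SELECT COUNT(*) FROM {table} WHERE {column} IS NULL").fetchone()
    return n == 0

def validate_row_parity(conn, source: str, target: str) -> bool:
    """ETL reconciliation: the target must carry every source row."""
    (ns,) = conn.execute(f"SELECT COUNT(*) FROM {source}").fetchone()
    (nt,) = conn.execute(f"SELECT COUNT(*) FROM {target}").fetchone()
    return ns == nt

# In-memory database standing in for staging/production environments.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE staging_orders (id INTEGER, amount REAL);
    CREATE TABLE prod_orders    (id INTEGER, amount REAL);
    INSERT INTO staging_orders VALUES (1, 10.0), (2, 20.5);
    INSERT INTO prod_orders    VALUES (1, 10.0), (2, 20.5);
""")
assert validate_no_nulls(conn, "prod_orders", "amount")
assert validate_row_parity(conn, "staging_orders", "prod_orders")
```

Frameworks like Great Expectations generalize exactly these checks into declarative, versioned expectation suites.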
Posted 1 month ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking a Senior Data Quality Engineer to join our innovative team, where you will drive excellence in database testing, performance optimization, and test automation frameworks. You will leverage advanced Python scripting and database expertise to ensure data integrity and optimize SQL transactions for scalability while working within cutting-edge AI/ML-driven environments. Responsibilities Develop robust Python-based test frameworks for SQL validation, ETL verification, and stored procedure unit testing Automate data-driven testing with tools like pytest, Hypothesis, pandas, and tSQLt Implement AI/ML models for detecting anomalous behaviors in SQL transactions and for test case generation to cover edge scenarios Train Machine Learning models to predict slow queries and optimize database performance through indexing strategies Validate stored procedures, triggers, views, and business rules for consistency and accuracy Apply performance benchmarking with JMeter, SQLAlchemy, and AI-driven anomaly detection methods Conduct data drift detection to analyze and compare staging vs production environments Automate database schema validations using tools such as Liquibase or Flyway in CI/CD workflows Integrate Python test scripts into CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps) Design mock database environments to support automated regression testing for complex architectures Collaborate with cross-functional teams to develop scalable and efficient data quality solutions Requirements 5+ years of working experience in data quality engineering or similar roles Proficiency in SQL Server, T-SQL, stored procedures, indexing, and execution plans with a strong foundation in query performance tuning and optimization strategies Background in ETL validation, data reconciliation, and business logic testing for complex datasets Skills in Python programming for test automation, data validation, and anomaly detection with hands-on expertise in pytest, pandas, NumPy, 
and SQLAlchemy Familiarity with frameworks like Great Expectations for developing comprehensive validation processes Competency in integrating automated test scripts into CI/CD environments such as Jenkins, GitHub Actions, and Azure DevOps Demonstrated use of tools like Liquibase or Flyway for schema validation and database migration testing Understanding of implementing AI/ML-driven methods for database testing and optimization Nice to have Knowledge of JMeter or similar performance testing tools for SQL benchmarking Background in AI-based techniques for detecting data drift or training predictive models Expertise in mock database design for highly scalable architectures Familiarity with handling dynamic edge case testing using AI-based test case generation
Posted 1 month ago
20.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Staff AI Engineer - MLOps Company: Rapid7 Team: AI Center of Excellence Team Overview: Cross-functional team of Data Scientists and AI Engineers Mission: Leverage AI/ML to protect customer attack surfaces Partners with Detection and Response teams, including MDR Encourages creativity, collaboration, and research publication Uses 20+ years of threat analysis and growing patent portfolio Tech Stack: Cloud/Infra: AWS (SageMaker, Bedrock), EKS, Terraform Languages/Tools: Python, Jupyter, NumPy, Pandas, Scikit-learn ML Focus: Anomaly detection, unlabeled data Role Summary: Build and deploy ML production systems Manage end-to-end data pipelines and ensure data quality Implement ML guardrails and robust monitoring Deploy web apps and REST APIs with strong data security Share knowledge, mentor engineers, collaborate cross-functionally Embrace agile, iterative development Requirements: 8–12 years in Software Engineering (3+ in ML deployment on AWS) Strong in Python, Flask/FastAPI, API development Skilled in CI/CD, Docker, Kubernetes, MLOps, cloud AI tools Experience in data pre-processing, feature engineering, model monitoring Strong communication and documentation skills Collaborative mindset, growth-oriented problem-solving Preferred Qualifications: Experience with Java Background in the security industry Familiarity with AI/ML model operations, LLM experimentation Knowledge of model risk management (drift monitoring, hyperparameter tuning, registries) About Rapid7: Rapid7 is committed to securing the digital world through passion, collaboration, and innovation. With over 10,000 customers globally, it offers a dynamic, growth-focused workplace and tackles major cybersecurity challenges with diverse teams and a mission-driven approach.
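On the "anomaly detection, unlabeled data" focus: a common baseline for unlabeled telemetry is an isolation forest, available in the scikit-learn stack the posting lists. The synthetic data below is invented for illustration and is not Rapid7's method.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Unlabeled "normal" telemetry clustered near the origin, plus one obvious outlier.
X = np.vstack([rng.normal(0, 0.5, size=(100, 2)), [[8.0, 8.0]]])

# fit_predict returns +1 for inliers and -1 for anomalies; contamination sets
# the expected anomaly fraction used to pick the decision threshold.
labels = IsolationForest(contamination=0.01, random_state=0).fit_predict(X)
outliers = np.where(labels == -1)[0]
```

Isolation forests need no labels because they score how quickly a point is isolated by random splits, which suits security telemetry where attacks are rare and unlabeled.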
Posted 1 month ago
8.0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
About Mindtickle’s AI/ML Engineering Team Mindtickle is a revenue productivity solution that helps revenue teams enhance their performance by identifying areas for improvement for each team member, recommending appropriate remedial actions, and providing opportunities to implement those recommendations. The charter of the CoE-ML team is to enhance Mindtickle’s solution offerings-such as embedding artificial intelligence in the form of CoPilots, developing hyper-realistic AI-powered role plays, and enabling the automatic curation of collateral while also improving Mindtickle’s internal operations. This includes optimizing workflows, accelerating business processes, and making information more easily discoverable. We work on cutting-edge technologies to drive innovation and deliver advanced AI solutions. We maintain high-quality evaluation standards and continuous improvement practices to ensure our AI features meet stringent performance and reliability criteria. Role Overview As an SDE-3 in AI/ML, you will: Translate business asks and requirements into technical requirements, solutions, architectures, and implementations. Define clear problem statements and technical requirements by aligning business goals with AI research objectives. Lead the end-to-end design, prototyping, and implementation of AI systems, ensuring they meet performance, scalability, and reliability targets. Architect solutions for GenAI and LLM integrations, including prompt engineering, context management, and agentic workflows. Develop and maintain production-grade code with high test coverage and robust CI/CD pipelines on AWS, Kubernetes, and cloud-native infrastructures. Establish and maintain post-deployment monitoring, performance testing, and alerting frameworks to ensure performance and quality SLAs are met. Conduct thorough design and code reviews, uphold best practices, and drive technical excellence across the team. 
Mentor and guide junior engineers and interns, fostering a culture of continuous learning and innovation. Collaborate closely with product management, QA, data engineering, DevOps, and customer facing teams to deliver cohesive AI-powered product features. Key Responsibilities Problem Definition & Requirements Translate business use cases into detailed AI/ML problem statements and success metrics. Gather and document functional and non-functional requirements, ensuring traceability throughout the development lifecycle. Architecture & Prototyping Design end-to-end architectures for GenAI and LLM solutions, including context orchestration, memory modules, and tool integrations. Build rapid prototypes to validate feasibility, iterate on model choices, and benchmark different frameworks and vendors. Development & Productionization Write clean, maintainable code in Python, Java, or Go, following software engineering best practices. Implement automated testing (unit, integration, and performance tests) and CI/CD pipelines for seamless deployments. Optimize model inference performance and scale services using containerization (Docker) and orchestration (Kubernetes). Post-Deployment Monitoring Define and implement monitoring dashboards and alerting for model drift, latency, and throughput. Conduct regular performance tuning and cost analysis to maintain operational efficiency. Mentorship & Collaboration Mentor SDE-1/SDE-2 engineers and interns, providing technical guidance and career development support. Lead design discussions, pair-programming sessions, and brown-bag talks on emerging AI/ML topics. Work cross-functionally with product, QA, data engineering, and DevOps to align on delivery timelines and quality goals. Required Qualifications Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. 8+ years of professional software development experience, with at least 3 years focused on AI/ML systems. 
Proven track record of architecting and deploying production AI applications at scale. Strong programming skills in Python and one or more of Java, Go, or C++. Hands-on experience with cloud platforms (AWS, GCP, or Azure) and containerized deployments. Deep understanding of machine learning algorithms, LLM architectures, and prompt engineering. Expertise in CI/CD, automated testing frameworks, and MLOps best practices. Excellent written and verbal communication skills, with the ability to distill complex AI concepts for diverse audiences. Preferred Experience Prior experience building Agentic AI or multi-step workflow systems (using tools like LangGraph, CrewAI or similar). Familiarity with open-source LLMs (e.g., Hugging Face hosted) and custom fine-tuning. Familiarity with ASR (Speech to Text) and TTS (Text to Speech), and other multi-modal systems. Experience with monitoring and observability tools (e.g., Datadog, Prometheus, Grafana). Publications or patents in AI/ML or related conference presentations. Knowledge of GenAI evaluation frameworks (e.g., Weights & Biases, CometML). Proven experience designing, implementing, and rigorously testing AI-driven voice agents - integrating with platforms such as Google Dialogflow, Amazon Lex, and Twilio Autopilot - and ensuring high performance and reliability. What We Offer Opportunity to work at the forefront of GenAI, LLMs, and Agentic AI in a fast-growing SaaS environment. Collaborative, inclusive culture focused on innovation, continuous learning, and professional growth. Competitive compensation, comprehensive benefits, and equity options. Flexible work arrangements and support for professional development.
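The post-deployment monitoring duties described (alerting on latency and throughput) reduce, at their simplest, to percentile checks over a sliding window. A toy sketch with an assumed 500 ms p95 SLO; the window values and threshold are invented:

```python
import numpy as np

def latency_alerts(latencies_ms, slo_p95_ms: float = 500.0):
    """Return (p95, breached) for one monitoring window of request latencies."""
    p95 = float(np.percentile(latencies_ms, 95))
    return p95, p95 > slo_p95_ms

# A window of ten request latencies in milliseconds, two of them slow.
window = [120, 135, 110, 480, 150, 900, 140, 125, 130, 145]
p95, breached = latency_alerts(window, slo_p95_ms=500.0)
```

Observability stacks like the Datadog/Prometheus/Grafana tools listed compute exactly these window percentiles continuously and fire alerts on SLO breaches.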
Posted 1 month ago
5.0 - 6.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Experience: 5-6 years Key Responsibilities Process, analyze, and interpret time-series data from MEMS sensors (e.g., accelerometers, gyroscopes, pressure sensors). Develop and apply statistical methods to identify trends, anomalies, and key performance metrics. Compute and optimize KPIs related to sensor performance, reliability, and drift analysis. Utilize MATLAB toolboxes (e.g., Data Cleaner, Ground Truth Labeler) or Python libraries for data validation, annotation, and anomaly detection. Clean, preprocess, and visualize large datasets to uncover actionable insights. Collaborate with hardware engineers, software developers, and product owners to support end-to-end data workflows. Convert and format data into standardized schemas for use in data pipelines and simulations. Generate automated reports and build dashboards using Power BI or Tableau. Document methodologies, processes, and findings in clear and concise technical reports. Required Qualifications Proficiency in Python or MATLAB for data analysis, visualization, and reporting. Strong foundation in time-series analysis, signal processing, and statistical modeling (e.g., autocorrelation, moving averages, seasonal decomposition). Experience working with MEMS sensors and sensor data acquisition systems. Hands-on experience with pandas, NumPy, SciPy, scikit-learn, and matplotlib. Ability to develop automated KPI reports and interactive dashboards (Power BI or Tableau). Preferred Qualifications Prior experience with data from smartphones, hearables, or wearable devices. Advanced knowledge in MEMS sensor data wrangling techniques. Familiarity with cloud platforms such as AWS, Azure, or Google Cloud Platform. Exposure to real-time data streaming and processing frameworks/toolboxes.
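As a hedged illustration of the rolling-statistics and anomaly-detection work described: the sketch below flags spikes in a synthetic sine-wave "accelerometer" trace. The window size and z-score threshold are invented for the example, not sensor-qualified values.

```python
import numpy as np
import pandas as pd

def flag_anomalies(signal: pd.Series, window: int = 5, z_thresh: float = 3.0) -> pd.Series:
    """Flag samples whose deviation from a centered rolling mean exceeds z_thresh sigmas."""
    rolling_mean = signal.rolling(window, center=True, min_periods=1).mean()
    z = (signal - rolling_mean) / signal.std()
    return z.abs() > z_thresh

t = np.arange(200)
# Slow sinusoid plus small noise, mimicking a periodic accelerometer channel.
accel = np.sin(2 * np.pi * t / 50) + 0.01 * np.random.default_rng(0).normal(size=200)
accel[120] += 5.0  # inject one spike, e.g., a shock or dropped-sensor event
flags = flag_anomalies(pd.Series(accel))
```

The same rolling-window idea underlies the drift analysis the posting mentions: replace the rolling mean with a long-horizon baseline and alert when the running offset grows.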
Posted 1 month ago
5.0 years
8 - 10 Lacs
Thiruvananthapuram
Remote
5 - 7 Years 1 Opening Kochi, Trivandrum Role description Job Title: Lead ML-Ops Engineer – GenAI & Scalable ML Systems Location: Any UST Job Type: Full-Time Experience Level: Senior / Lead Role Overview: We are seeking a Lead ML-Ops Engineer to spearhead the end-to-end operationalization of machine learning and Generative AI models across our platforms. You will play a pivotal role in building robust, scalable ML pipelines, embedding responsible AI governance, and integrating innovative GenAI techniques—such as Retrieval-Augmented Generation (RAG) and LLM-based applications —into real-world systems. You will collaborate with cross-functional teams of data scientists, data engineers, product managers, and business stakeholders to ensure AI solutions are production-ready, resilient, and aligned with strategic business goals. A strong background in Dataiku or similar platforms is highly preferred. Key Responsibilities: Model Development & Deployment Design, implement, and manage scalable ML pipelines using CI/CD practices. Operationalize ML and GenAI models, ensuring high availability, observability, and reliability. Automate data and model validation, versioning, and monitoring processes. Technical Leadership & Mentorship Act as a thought leader and mentor to junior engineers and data scientists on ML-Ops best practices. Define architecture standards and promote engineering excellence across ML-Ops workflows. Innovation & Generative AI Strategy Lead the integration of GenAI capabilities such as RAG and large language models (LLMs) into applications. Identify opportunities to drive business impact through cutting-edge AI technologies and frameworks. Governance & Compliance Implement governance frameworks for model explainability, bias detection, reproducibility, and auditability. Ensure compliance with data privacy, security, and regulatory standards in all ML/AI solutions. Must-Have Skills: 5+ years of experience in ML-Ops, Data Engineering, or Machine Learning. 
Proficiency in Python, Docker, Kubernetes, and cloud services (AWS/GCP/Azure). Hands-on experience with CI/CD tools (e.g., GitHub Actions, Jenkins, MLflow, or Kubeflow). Deep knowledge of ML pipeline orchestration, model lifecycle management, and monitoring tools. Experience with LLM frameworks (e.g., LangChain, HuggingFace Transformers) and GenAI use cases like RAG. Strong understanding of responsible AI and MLOps governance best practices. Proven ability to work cross-functionally and lead technical discussions. Good-to-Have Skills: Experience with Dataiku DSS or similar platforms (e.g., DataRobot, H2O.ai). Familiarity with vector databases (e.g., FAISS, Pinecone, Weaviate) for GenAI retrieval tasks. Exposure to tools like Apache Airflow, Argo Workflows, or Prefect for orchestration. Understanding of ML evaluation metrics in a production context (drift detection, data integrity checks). Experience in mentoring, technical leadership, or project ownership roles. Why Join Us? Be at the forefront of AI innovation and shape how cutting-edge technologies drive business transformation. Join a collaborative, forward-thinking team with a strong emphasis on impact, ownership, and learning. Competitive compensation, remote flexibility, and opportunities for career advancement. Skills: Artificial Intelligence, Python, ML-Ops About UST UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world’s best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients’ organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.
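On the RAG responsibilities above: retrieval reduces to nearest-neighbor search over embeddings. The sketch below uses a toy hash-based "encoder" purely to stay self-contained; a real pipeline would use a trained encoder plus a vector database such as the FAISS/Pinecone/Weaviate options the posting names.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Toy stand-in for a sentence encoder: deterministic within one process
    # (Python's hash is salted per process), carries no real semantics.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def retrieve(query: str, corpus: list, k: int = 1) -> list:
    """Rank corpus passages by cosine similarity to the query embedding."""
    q = embed(query)
    scored = sorted(corpus, key=lambda doc: float(q @ embed(doc)), reverse=True)
    return scored[:k]

corpus = [
    "Kubernetes schedules containers across a cluster.",
    "MLflow tracks experiments and registers model versions.",
    "RAG grounds LLM answers in retrieved passages.",
]
top = retrieve("RAG grounds LLM answers in retrieved passages.", corpus)
```

The retrieved passages are then placed into the LLM prompt, which is what grounds the generation step and reduces hallucination in domain-specific applications.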
Posted 1 month ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview We are seeking a skilled Associate Manager - AIOps & MLOps Operations to support and enhance the automation, scalability, and reliability of AI/ML operations across the enterprise. This role requires a solid understanding of AI-driven observability, machine learning pipeline automation, cloud-based AI/ML platforms, and operational excellence. The ideal candidate will assist in deploying AI/ML models, ensuring continuous monitoring, and implementing self-healing automation to improve system performance, minimize downtime, and enhance decision-making with real-time AI-driven insights. Support and maintain AIOps and MLOps programs, ensuring alignment with business objectives, data governance standards, and enterprise data strategy. Assist in implementing real-time data observability, monitoring, and automation frameworks to enhance data reliability, quality, and operational efficiency. Contribute to developing governance models and execution roadmaps to drive efficiency across data platforms, including Azure, AWS, GCP, and on-prem environments. Ensure seamless integration of CI/CD pipelines, data pipeline automation, and self-healing capabilities across the enterprise. Collaborate with cross-functional teams to support the development and enhancement of next-generation Data & Analytics (D&A) platforms. Assist in managing the people, processes, and technology involved in sustaining Data & Analytics platforms, driving operational excellence and continuous improvement. Support Data & Analytics Technology Transformations by ensuring proactive issue identification and the automation of self-healing capabilities across the PepsiCo Data Estate. Responsibilities Support the implementation of AIOps strategies for automating IT operations using Azure Monitor, Azure Log Analytics, and AI-driven alerting. 
Assist in deploying Azure-based observability solutions (Azure Monitor, Application Insights, Azure Synapse for log analytics, and Azure Data Explorer) to enhance real-time system performance monitoring. Enable AI-driven anomaly detection and root cause analysis (RCA) by collaborating with data science teams using Azure Machine Learning (Azure ML) and AI-powered log analytics. Contribute to developing self-healing and auto-remediation mechanisms using Azure Logic Apps, Azure Functions, and Power Automate to proactively resolve system issues. Support ML lifecycle automation using Azure ML, Azure DevOps, and Azure Pipelines for CI/CD of ML models. Assist in deploying scalable ML models with Azure Kubernetes Service (AKS), Azure Machine Learning Compute, and Azure Container Instances. Automate feature engineering, model versioning, and drift detection using Azure ML Pipelines and MLflow. Optimize ML workflows with Azure Data Factory, Azure Databricks, and Azure Synapse Analytics for data preparation and ETL/ELT automation. Implement basic monitoring and explainability for ML models using Azure Responsible AI Dashboard and InterpretML. Collaborate with Data Science, DevOps, CloudOps, and SRE teams to align AIOps/MLOps strategies with enterprise IT goals. Work closely with business stakeholders and IT leadership to implement AI-driven insights and automation to enhance operational decision-making. Track and report AI/ML operational KPIs, such as model accuracy, latency, and infrastructure efficiency. Assist in coordinating with cross-functional teams to maintain system performance and ensure operational resilience. Support the implementation of AI ethics, bias mitigation, and responsible AI practices using Azure Responsible AI Toolkits. Ensure adherence to Azure Information Protection (AIP), Role-Based Access Control (RBAC), and data security policies. Assist in developing risk management strategies for AI-driven operational automation in Azure environments. 
Prepare and present program updates, risk assessments, and AIOps/MLOps maturity progress to stakeholders as needed. Support efforts to attract and build a diverse, high-performing team to meet current and future business objectives. Help remove barriers to agility and enable the team to adapt quickly to shifting priorities without losing productivity. Contribute to developing the appropriate organizational structure, resource plans, and culture to support business goals. Leverage technical and operational expertise in cloud and high-performance computing to understand business requirements and earn trust with stakeholders. Qualifications 5+ years of technology work experience in a global organization, preferably in CPG or a similar industry. 5+ years of experience in the Data & Analytics field, with exposure to AI/ML operations and cloud-based platforms. 5+ years of experience working within cross-functional IT or data operations teams. 2+ years of experience in a leadership or team coordination role within an operational or support environment. Experience in AI/ML pipeline operations, observability, and automation across platforms such as Azure, AWS, and GCP. Excellent Communication: Ability to convey technical concepts to diverse audiences and empathize with stakeholders while maintaining confidence. Customer-Centric Approach: Strong focus on delivering the right customer experience by advocating for customer needs and ensuring issue resolution. Problem Ownership & Accountability: Proactive mindset to take ownership, drive outcomes, and ensure customer satisfaction. Growth Mindset: Willingness and ability to adapt and learn new technologies and methodologies in a fast-paced, evolving environment. Operational Excellence: Experience in managing and improving large-scale operational services with a focus on scalability and reliability. Site Reliability & Automation: Understanding of SRE principles, automated remediation, and operational efficiencies. 
Cross-Functional Collaboration: Ability to build strong relationships with internal and external stakeholders through trust and collaboration. Familiarity with CI/CD processes, data pipeline management, and self-healing automation frameworks. Strong understanding of data acquisition, data catalogs, data standards, and data management tools. Knowledge of master data management concepts, data governance, and analytics.
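The drift-detection duties listed above are often implemented with the Population Stability Index (PSI). A minimal NumPy sketch follows; the 0.1/0.25 thresholds are a common rule of thumb, not an Azure ML feature, and the data is synthetic.

```python
import numpy as np

def population_stability_index(expected, actual, bins: int = 10) -> float:
    """PSI between a baseline (training) and a live feature distribution.
    Rule of thumb (assumed, not enterprise policy): <0.1 stable, >0.25 drifted."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) in empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(7)
baseline = rng.normal(0, 1, 5000)
drifted = rng.normal(0.8, 1, 5000)  # mean shift simulating an upstream data change
psi_same = population_stability_index(baseline, baseline)
psi_drifted = population_stability_index(baseline, drifted)
```

In an automated pipeline this check runs on a schedule against each monitored feature, and a PSI breach triggers the alerting and retraining workflows described above.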
Posted 1 month ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview

We are seeking a skilled Associate Manager - AIOps & MLOps Operations to support and enhance the automation, scalability, and reliability of AI/ML operations across the enterprise. This role requires a solid understanding of AI-driven observability, machine learning pipeline automation, cloud-based AI/ML platforms, and operational excellence. The ideal candidate will assist in deploying AI/ML models, ensuring continuous monitoring, and implementing self-healing automation to improve system performance, minimize downtime, and enhance decision-making with real-time AI-driven insights.

- Support and maintain AIOps and MLOps programs, ensuring alignment with business objectives, data governance standards, and enterprise data strategy.
- Assist in implementing real-time data observability, monitoring, and automation frameworks to enhance data reliability, quality, and operational efficiency.
- Contribute to developing governance models and execution roadmaps to drive efficiency across data platforms, including Azure, AWS, GCP, and on-prem environments.
- Ensure seamless integration of CI/CD pipelines, data pipeline automation, and self-healing capabilities across the enterprise.
- Collaborate with cross-functional teams to support the development and enhancement of next-generation Data & Analytics (D&A) platforms.
- Assist in managing the people, processes, and technology involved in sustaining Data & Analytics platforms, driving operational excellence and continuous improvement.
- Support Data & Analytics Technology Transformations by ensuring proactive issue identification and the automation of self-healing capabilities across the PepsiCo Data Estate.

Responsibilities

- Support the implementation of AIOps strategies for automating IT operations using Azure Monitor, Azure Log Analytics, and AI-driven alerting.
- Assist in deploying Azure-based observability solutions (Azure Monitor, Application Insights, Azure Synapse for log analytics, and Azure Data Explorer) to enhance real-time system performance monitoring.
- Enable AI-driven anomaly detection and root cause analysis (RCA) by collaborating with data science teams using Azure Machine Learning (Azure ML) and AI-powered log analytics.
- Contribute to developing self-healing and auto-remediation mechanisms using Azure Logic Apps, Azure Functions, and Power Automate to proactively resolve system issues.
- Support ML lifecycle automation using Azure ML, Azure DevOps, and Azure Pipelines for CI/CD of ML models.
- Assist in deploying scalable ML models with Azure Kubernetes Service (AKS), Azure Machine Learning Compute, and Azure Container Instances.
- Automate feature engineering, model versioning, and drift detection using Azure ML Pipelines and MLflow.
- Optimize ML workflows with Azure Data Factory, Azure Databricks, and Azure Synapse Analytics for data preparation and ETL/ELT automation.
- Implement basic monitoring and explainability for ML models using Azure Responsible AI Dashboard and InterpretML.
- Collaborate with Data Science, DevOps, CloudOps, and SRE teams to align AIOps/MLOps strategies with enterprise IT goals.
- Work closely with business stakeholders and IT leadership to implement AI-driven insights and automation to enhance operational decision-making.
- Track and report AI/ML operational KPIs, such as model accuracy, latency, and infrastructure efficiency.
- Assist in coordinating with cross-functional teams to maintain system performance and ensure operational resilience.
- Support the implementation of AI ethics, bias mitigation, and responsible AI practices using Azure Responsible AI Toolkits.
- Ensure adherence to Azure Information Protection (AIP), Role-Based Access Control (RBAC), and data security policies.
- Assist in developing risk management strategies for AI-driven operational automation in Azure environments.
- Prepare and present program updates, risk assessments, and AIOps/MLOps maturity progress to stakeholders as needed.
- Support efforts to attract and build a diverse, high-performing team to meet current and future business objectives.
- Help remove barriers to agility and enable the team to adapt quickly to shifting priorities without losing productivity.
- Contribute to developing the appropriate organizational structure, resource plans, and culture to support business goals.
- Leverage technical and operational expertise in cloud and high-performance computing to understand business requirements and earn trust with stakeholders.

Qualifications

- 5+ years of technology work experience in a global organization, preferably in CPG or a similar industry.
- 5+ years of experience in the Data & Analytics field, with exposure to AI/ML operations and cloud-based platforms.
- 5+ years of experience working within cross-functional IT or data operations teams.
- 2+ years of experience in a leadership or team coordination role within an operational or support environment.
- Experience in AI/ML pipeline operations, observability, and automation across platforms such as Azure, AWS, and GCP.
- Excellent Communication: Ability to convey technical concepts to diverse audiences and empathize with stakeholders while maintaining confidence.
- Customer-Centric Approach: Strong focus on delivering the right customer experience by advocating for customer needs and ensuring issue resolution.
- Problem Ownership & Accountability: Proactive mindset to take ownership, drive outcomes, and ensure customer satisfaction.
- Growth Mindset: Willingness and ability to adapt and learn new technologies and methodologies in a fast-paced, evolving environment.
- Operational Excellence: Experience in managing and improving large-scale operational services with a focus on scalability and reliability.
- Site Reliability & Automation: Understanding of SRE principles, automated remediation, and operational efficiencies.
- Cross-Functional Collaboration: Ability to build strong relationships with internal and external stakeholders through trust and collaboration.
- Familiarity with CI/CD processes, data pipeline management, and self-healing automation frameworks.
- Strong understanding of data acquisition, data catalogs, data standards, and data management tools.
- Knowledge of master data management concepts, data governance, and analytics.
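A responsibility like "automate feature engineering, model versioning, and drift detection" usually comes down to comparing a live feature distribution against a training-time baseline. A minimal, dependency-free sketch using the Population Stability Index — the function name, bin count, and thresholds here are illustrative assumptions, not part of the posting:

```python
import math

def psi(baseline, live, bins=10):
    """Population Stability Index between a baseline sample and a live sample.
    A common rule of thumb: PSI < 0.1 is stable, PSI > 0.25 signals drift."""
    lo, hi = min(baseline), max(baseline)
    width = (hi - lo) / bins or 1.0  # guard against a constant baseline

    def proportions(xs):
        counts = [0] * bins
        for x in xs:
            # clamp out-of-range live values into the edge buckets
            i = max(0, min(int((x - lo) / width), bins - 1))
            counts[i] += 1
        # floor each proportion at a small epsilon so log() stays defined
        return [max(c / len(xs), 1e-6) for c in counts]

    return sum(
        (lp - bp) * math.log(lp / bp)
        for bp, lp in zip(proportions(baseline), proportions(live))
    )
```

In a pipeline of the kind described above, a PSI breach on a monitored feature would raise an alert or trigger a retraining run rather than just return a number.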
Posted 1 month ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are seeking a Senior Data Quality Engineer to join our innovative team, where you will drive excellence in database testing, performance optimization, and test automation frameworks. You will leverage advanced Python scripting and database expertise to ensure data integrity and optimize SQL transactions for scalability while working within cutting-edge AI/ML-driven environments.

Responsibilities

- Develop robust Python-based test frameworks for SQL validation, ETL verification, and stored procedure unit testing
- Automate data-driven testing with tools like pytest, Hypothesis, pandas, and tSQLt
- Implement AI/ML models for detecting anomalous behaviors in SQL transactions and for test case generation to cover edge scenarios
- Train machine learning models to predict slow queries and optimize database performance through indexing strategies
- Validate stored procedures, triggers, views, and business rules for consistency and accuracy
- Apply performance benchmarking with JMeter, SQLAlchemy, and AI-driven anomaly detection methods
- Conduct data drift detection to analyze and compare staging vs production environments
- Automate database schema validations using tools such as Liquibase or Flyway in CI/CD workflows
- Integrate Python test scripts into CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps)
- Design mock database environments to support automated regression testing for complex architectures
- Collaborate with cross-functional teams to develop scalable and efficient data quality solutions

Requirements

- 5+ years of working experience in data quality engineering or similar roles
- Proficiency in SQL Server, T-SQL, stored procedures, indexing, and execution plans, with a strong foundation in query performance tuning and optimization strategies
- Background in ETL validation, data reconciliation, and business logic testing for complex datasets
- Skills in Python programming for test automation, data validation, and anomaly detection, with hands-on expertise in pytest, pandas, NumPy, and SQLAlchemy
- Familiarity with frameworks like Great Expectations for developing comprehensive validation processes
- Competency in integrating automated test scripts into CI/CD environments such as Jenkins, GitHub Actions, and Azure DevOps
- Hands-on experience with tools like Liquibase or Flyway for schema validation and database migration testing
- Understanding of implementing AI/ML-driven methods for database testing and optimization

Nice to have

- Knowledge of JMeter or similar performance testing tools for SQL benchmarking
- Background in AI-based techniques for detecting data drift or training predictive models
- Expertise in mock database design for highly scalable architectures
- Familiarity with handling dynamic edge case testing using AI-based test case generation
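The data reconciliation work described above (staging vs production comparison) can be sketched as a small check of the kind a pytest suite would wrap. The table and column names below are invented for illustration; a real suite would run this against SQL Server rather than the in-memory SQLite used here:

```python
import sqlite3

def reconcile_counts(conn, source_table, target_table, key):
    """Compare row counts and distinct-key counts between a staging table
    and its production counterpart -- a minimal reconciliation check."""
    cur = conn.cursor()
    checks = {}
    for table in (source_table, target_table):
        cur.execute(f"SELECT COUNT(*), COUNT(DISTINCT {key}) FROM {table}")
        checks[table] = cur.fetchone()
    return {
        "rows_match": checks[source_table][0] == checks[target_table][0],
        "keys_match": checks[source_table][1] == checks[target_table][1],
        "detail": checks,
    }

# Hypothetical usage: a load that silently dropped a row fails the check.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stg (id INTEGER, amt REAL)")
conn.execute("CREATE TABLE prd (id INTEGER, amt REAL)")
conn.executemany("INSERT INTO stg VALUES (?, ?)", [(1, 10.0), (2, 20.0), (3, 30.0)])
conn.executemany("INSERT INTO prd VALUES (?, ?)", [(1, 10.0), (2, 20.0)])
result = reconcile_counts(conn, "stg", "prd", "id")
```

A pytest wrapper would simply `assert result["rows_match"]`, turning the reconciliation into a CI/CD gate.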
Posted 1 month ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
By clicking the “Apply” button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda’s Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.

Job Description:

The Future Begins Here: At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the needs of patients, our people, and the planet. Bengaluru, the city which is India’s epicenter of innovation, has been selected to be home to Takeda’s recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement.

At Takeda’s ICC We Unite in Diversity: Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for the backgrounds and abilities they bring to our company. We are continuously improving our collaborators’ journey in Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team.

About The Role

We are seeking an innovative and skilled Principal AI/ML Engineer with a strong focus on designing and deploying scalable machine learning solutions. This role requires a strategic thinker who can architect production-ready solutions, collaborate closely with cross-functional teams, and ensure adherence to Takeda’s technical standards through participation in the Architecture Council. The ideal candidate has extensive experience in operationalizing ML models, MLOps workflows, and building systems aligned with healthcare standards. By leveraging cutting-edge machine learning and engineering principles, this role supports Takeda’s global mission of delivering transformative therapies to patients worldwide.

How You Will Contribute

- Architect scalable and secure machine learning systems that integrate with Takeda’s enterprise platforms, including R&D, manufacturing, and clinical trial operations.
- Design and implement pipelines for model deployment, monitoring, and retraining using advanced MLOps tools such as MLflow, Airflow, and Databricks.
- Operationalize AI/ML models for production environments, ensuring efficient CI/CD workflows and reproducibility.
- Collaborate with Takeda’s Architecture Council to propose and refine AI/ML system designs, balancing technical excellence with strategic alignment.
- Implement monitoring systems to track model performance (accuracy, latency, drift) in a production setting, using tools such as Prometheus or Grafana.
- Ensure compliance with industry regulations (e.g., GxP, GDPR) and Takeda’s ethical AI standards in system deployment.
- Identify use cases where machine learning can deliver business value, and propose enterprise-level solutions aligned to strategic goals.
- Work with Databricks tools and platforms for model management and data workflows, optimizing solutions for scalability.
- Manage and document the lifecycle of deployed ML systems, including versioning, updates, and data flow architecture.
- Drive adoption of standardized architecture and MLOps frameworks across disparate teams within Takeda.

Skills And Qualifications

Education
- Bachelor’s, Master’s, or Ph.D. in Computer Science, Software Engineering, Data Science, or a related field.

Experience
- At least 6-8 years of experience in machine learning system architecture, deployment, and MLOps, with a significant focus on operationalizing ML at scale.
- Proven track record in designing and advocating ML/AI solutions within enterprise architecture frameworks and council-level decision-making.

Technical Skills
- Proficiency in deploying and managing machine learning pipelines using MLOps tools like MLflow, Airflow, Databricks, or ClearML.
- Strong programming skills in Python and experience with machine learning libraries such as scikit-learn, XGBoost, LightGBM, and TensorFlow.
- Deep understanding of CI/CD pipelines and tools (e.g., Jenkins, GitHub Actions) for automated model deployment.
- Familiarity with Databricks tools and services for scalable data workflows and model management.
- Expertise in building robust observability and monitoring systems to track ML systems in production.
- Hands-on experience with classical machine learning techniques, such as random forests, decision trees, SVMs, and clustering methods.
- Knowledge of infrastructure-as-code tools like Terraform or CloudFormation to enable automated deployments.
- Experience in handling regulatory considerations and compliance in healthcare AI/ML implementations (e.g., GxP, GDPR).

Soft Skills
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills for influencing technical and non-technical stakeholders.
- Leadership ability to mentor teams and drive architecture-standardization initiatives.
- Ability to manage projects independently and advocate for AI/ML adoption across Takeda.

Preferred Qualifications
- Real-world experience operationalizing machine learning for pharmaceutical domains, including drug discovery, patient stratification, and manufacturing process optimization.
- Familiarity with ethical AI principles and frameworks, aligned with FAIR data standards in healthcare.
- Publications or contributions to AI research or MLOps tooling communities.

WHAT TAKEDA ICC INDIA CAN OFFER YOU:

Takeda is certified as a Top Employer, not only in India but also globally. No investment we make pays greater dividends than taking good care of our people. At Takeda, you take the lead on building and shaping your own career. Joining the ICC in Bengaluru will give you access to high-end technology, continuous training, and a diverse and inclusive network of colleagues who will support your career growth.

BENEFITS:

It is our priority to provide competitive compensation and a benefits package that bridges your personal life with your professional career. Amongst our benefits are:
- Competitive salary + annual performance bonus
- Flexible work environment, including hybrid working
- Comprehensive healthcare insurance plans for self, spouse, and children
- Group term life insurance and group accident insurance programs
- Health & wellness programs, including annual health screening and weekly health sessions for employees
- Employee Assistance Program
- 5 days of leave every year for voluntary service, in addition to humanitarian leaves
- Broad variety of learning platforms
- Diversity, equity, and inclusion programs
- No Meeting Days
- Reimbursements – home internet & mobile phone
- Employee referral program
- Leaves – paternity leave (4 weeks), maternity leave (up to 26 weeks), bereavement leave (5 days)

ABOUT ICC IN TAKEDA:

Takeda is leading a digital revolution. We’re not just transforming our company; we’re improving the lives of millions of patients who rely on our medicines every day. As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization.

Locations: IND - Bengaluru
Worker Type: Employee
Worker Sub-Type: Regular
Time Type: Full time
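The monitoring responsibility described in this role (tracking model accuracy, latency, and drift in production) can be illustrated with a small rolling-window monitor. The thresholds, alert names, and class design below are illustrative assumptions; a real deployment would export these signals to Prometheus or Grafana as the posting describes:

```python
from collections import deque
import statistics

class ModelMonitor:
    """Rolling-window sketch of production model monitoring:
    alerts when recent accuracy drops or p95 latency climbs."""

    def __init__(self, window=100, min_accuracy=0.9, max_p95_latency_ms=250.0):
        self.outcomes = deque(maxlen=window)   # 1 = correct, 0 = incorrect
        self.latencies = deque(maxlen=window)  # per-request latency in ms
        self.min_accuracy = min_accuracy
        self.max_p95 = max_p95_latency_ms

    def record(self, correct, latency_ms):
        self.outcomes.append(1 if correct else 0)
        self.latencies.append(latency_ms)

    def alerts(self):
        fired = []
        if self.outcomes and statistics.mean(self.outcomes) < self.min_accuracy:
            fired.append("accuracy_below_threshold")
        if len(self.latencies) >= 2:
            # with n=20 quantiles, the last cut point approximates p95
            p95 = statistics.quantiles(self.latencies, n=20)[-1]
            if p95 > self.max_p95:
                fired.append("latency_p95_above_threshold")
        return fired
```

In practice each alert would map to a Prometheus metric and an alerting rule rather than a returned list, but the windowed computation is the same.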
Posted 1 month ago
5.0 years
0 Lacs
Greater Nashik Area
On-site
Job Title: AI - LLMOps Engineer

Job Description

We're Concentrix. The intelligent transformation partner. Solution-focused. Tech-powered. Intelligence-fueled. The global technology and services leader that powers the world’s best brands, today and into the future. With unique data and insights, deep industry expertise, and advanced technology solutions, we’re the intelligent transformation partner that powers a world that works, helping companies become refreshingly simple to work, interact, and transact with. We shape new game-changing careers in over 70 countries, attracting the best talent.

The Concentrix Catalyst team is the driving force behind Concentrix’s transformation, data, and technology services. We integrate world-class digital engineering, creativity, and a deep understanding of human behavior to find and unlock value through tech-powered and intelligence-fueled experiences. We combine human-centered design, powerful data, and strong tech to accelerate transformation at scale. You will be surrounded by the best in the world providing market-leading technology and insights to modernize and simplify the customer experience. Within our professional services team, you will deliver strategic consulting, design, advisory services, market research, and contact center analytics that deliver insights to improve outcomes and value for our clients, hence achieving our vision.

Our game-changers around the world have devoted their careers to ensuring every relationship is exceptional. And we’re proud to be recognized with awards such as "World's Best Workplaces," “Best Companies for Career Growth,” and “Best Company Culture,” year after year. Join us and be part of this journey towards greater opportunities and brighter futures.

Position Overview

We are seeking a skilled LLMOps Engineer with expertise in operationalizing Generative AI solutions to join our AI Engineering Center of Excellence. This role will focus on establishing robust infrastructure, deployment pipelines, and monitoring systems to ensure the reliable, secure, and scalable delivery of LLM-based applications in production environments. The LLMOps Engineer will work closely with AI Tech Leads and Senior Engineers to bridge the gap between development and production deployment of GenAI solutions.

Primary Responsibilities

- Design and implement infrastructure and deployment pipelines for large language model (LLM) applications in production environments
- Establish monitoring, observability, and logging systems for GenAI applications to ensure performance, reliability, and data quality
- Develop automated testing frameworks specific to LLM applications, including evaluation of model outputs and prompt effectiveness
- Implement version control systems for models, prompts, and configurations to ensure reproducibility and traceability
- Create and maintain CI/CD pipelines for seamless deployment of GenAI solutions
- Optimize infrastructure and implementations for cost efficiency, considering compute resources and API usage
- Implement security controls and compliance measures specific to GenAI applications
- Collaborate with development teams to establish best practices for transitioning GenAI solutions from prototype to production
- Automate feedback loops for continuous improvement of deployed models
- Document operational procedures, architecture decisions, and maintenance protocols

Required Qualifications

- 5+ years of experience in DevOps, platform engineering, or related roles, with at least 2+ years focused on ML/AI systems
- Hands-on experience with cloud infrastructure and services for AI workloads (AWS, Azure, GCP)
- Strong programming skills in languages commonly used for infrastructure and automation (Bash, YAML)
- Experience with containerization and orchestration technologies (Docker, Kubernetes) for AI workloads
- Knowledge of LLM deployment patterns and associated infrastructure requirements
- Familiarity with monitoring tools and techniques for AI systems (e.g., model performance, drift detection, cost tracking)
- Understanding of CI/CD principles and experience implementing automated pipelines
- Experience with infrastructure-as-code tools (Terraform, CloudFormation, etc.)
- Basic understanding of LLM architectures and their operational requirements
- Bachelor's degree in Computer Science, Engineering, or a related technical field

Preferred Skills

- Experience deploying and managing production LLM applications at scale
- Knowledge of vector database operations and optimization for RAG implementations
- Familiarity with API gateway management and rate limiting strategies
- Experience with distributed tracing and debugging complex AI systems
- Understanding of data privacy, security, and compliance considerations for GenAI applications
- Knowledge of cost optimization techniques for LLM inference and embedding generation
- Experience with feature flagging and A/B testing frameworks for AI applications
- Familiarity with LLM evaluation metrics and automated testing approaches
- Experience with GPU resource management and optimization

Success Factors

- Strong technical curiosity and willingness to explore new GenAI capabilities
- Balance between operational excellence and enabling rapid innovation
- Strong problem-solving skills for troubleshooting complex production issues
- Effective communication across technical and non-technical stakeholders
- Proactive approach to identifying and mitigating operational risks
- Ability to translate business requirements into operational specifications
- Commitment to continuous improvement of operational processes
- Adaptability to rapidly evolving GenAI technologies and deployment patterns

Location: IND Work-at-Home
Language Requirements:
Time Type: Full time

If you are a California resident, by submitting your information, you acknowledge that you have read and have access to the Job Applicant Privacy Notice for California Residents

R1598486
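One responsibility of this role — version control for prompts and configurations to ensure reproducibility and traceability — can be sketched with content-addressed versioning. The class and method names below are illustrative, not a specific product's API:

```python
import hashlib

class PromptRegistry:
    """Sketch of prompt version control: each registered template gets a
    deterministic content hash, so a deployed version is always traceable
    back to the exact prompt text that produced it."""

    def __init__(self):
        self._versions = {}  # name -> list of version records

    def register(self, name, template, **metadata):
        # content-addressed: identical templates always yield the same id
        version_id = hashlib.sha256(template.encode("utf-8")).hexdigest()[:12]
        self._versions.setdefault(name, []).append(
            {"version": version_id, "template": template, "metadata": metadata}
        )
        return version_id

    def latest(self, name):
        return self._versions[name][-1]
```

Because the version id is derived from the template text, re-registering an unchanged prompt reproduces the same id, which is what makes rollbacks and audit trails tractable.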
Posted 1 month ago
5.0 years
3 - 5 Lacs
Hyderābād
On-site
JLL supports the Whole You, personally and professionally. Our people at JLL are shaping the future of real estate for a better world by combining world-class services, advisory, and technology for our clients. We are committed to hiring the best, most talented people in our industry, and we support them through professional growth, flexibility, and personalized benefits to manage life in and outside of work. Whether you’ve got deep experience in commercial real estate, skilled trades, and technology, or you’re looking to apply your relevant experience to a new industry, we empower you to shape a brighter way forward so you can thrive professionally and personally.

The BMS Engineer is responsible for implementing and maintaining Building Management Systems that control and monitor various building functions such as HVAC, lighting, security, and energy management. This role requires a blend of technical expertise, problem-solving skills, and the ability to work with diverse stakeholders.

Required Qualifications and Skills
- Diploma/Bachelor's degree in Electrical/Mechanical Engineering or a related field
- 5+ years of experience in BMS operations, design, implementation, and maintenance
- Proficiency in BMS software platforms (e.g. Schneider Electric, Siemens, Johnson Controls)
- Strong understanding of HVAC systems and building operations
- Knowledge of networking protocols (e.g. BACnet, Modbus, LonWorks)
- Familiarity with energy management principles and sustainability practices
- Excellent problem-solving and analytical skills
- Strong communication and interpersonal abilities
- Ability to work independently and as part of a team

Preferred Qualifications
- Professional engineering license (P.E.) or relevant industry certifications
- Experience with integration of IoT devices and cloud-based systems
- Knowledge of building codes and energy efficiency standards
- Project management experience
- Programming skills (e.g., Python, C++, Java)

Roles and Responsibilities of the BMS Engineer
1. Troubleshoot and resolve issues with the BMS.
2. Optimize building performance and energy efficiency through BMS tuning.
3. Check LL BMS critical parameters and communicate with the LL in case parameters go beyond operating thresholds.
4. Develop and maintain system documentation and operational procedures. Monitor the BMS OEM PPM schedule and ensure diligent execution. Monitor SLAs and inform WTSMs in the event of a breach.
5. Ensure real-time monitoring of Hot/Cold Prism tickets and resolve them on priority.
6. Prepare daily, weekly, and monthly reports comprising uptime, consumption with break-up, temperature trends, alarms, and equipment MTBF.
7. Ensure adherence to the incident escalation process and provide training to ground staff.
8. Coordinate with the BMS OEM on ongoing operational issues (graphics modification, sensor calibration, controller configuration, hardware replacement).
9. Support the annual power-down by gracefully shutting down the system and bringing it up after completion of the activity.
10. Ensure the health of FLS (panels/smoke detectors) and conduct periodic checks for drift levels.
11. Provide technical support and training to the facility management team.
12. Collaborate with other engineering disciplines, the WPX team, and project stakeholders, and make changes to the building environment if needed.

If this job description resonates with you, we encourage you to apply even if you don’t meet all of the requirements. We’re interested in getting to know you and what you bring to the table!

Personalized benefits that support personal well-being and growth: JLL recognizes the impact that the workplace can have on your wellness, so we offer a supportive culture and comprehensive benefits package that prioritizes mental, physical and emotional health.

About JLL – We’re JLL—a leading professional services and investment management firm specializing in real estate.
We have operations in over 80 countries and a workforce of over 102,000 individuals around the world who help real estate owners, occupiers and investors achieve their business ambitions. As a global Fortune 500 company, we also have an inherent responsibility to drive sustainability and corporate social responsibility. That’s why we’re committed to our purpose to shape the future of real estate for a better world. We’re using the most advanced technology to create rewarding opportunities, amazing spaces and sustainable real estate solutions for our clients, our people, and our communities. Our core values of teamwork, ethics and excellence are also fundamental to everything we do and we’re honored to be recognized with awards for our success by organizations both globally and locally. Creating a diverse and inclusive culture where we all feel welcomed, valued and empowered to achieve our full potential is important to who we are today and where we’re headed in the future. And we know that unique backgrounds, experiences and perspectives help us think bigger, spark innovation and succeed together.
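Responsibility 3 in the listing above (checking critical BMS parameters and escalating when they go beyond operating thresholds) reduces to a simple range check over the points the BMS exposes. The parameter names, units, and limits below are illustrative assumptions, not values from the posting:

```python
def check_thresholds(readings, limits):
    """Return the BMS parameters whose current readings fall outside
    their configured operating range; unknown parameters pass through."""
    breaches = []
    for name, value in readings.items():
        lo, hi = limits.get(name, (float("-inf"), float("inf")))
        if not lo <= value <= hi:
            breaches.append({"parameter": name, "value": value, "range": (lo, hi)})
    return breaches

# Hypothetical point names and limits, e.g. chilled-water supply temperature
# in deg C and UPS load as a percentage of rated capacity.
readings = {"chw_supply_temp_c": 9.5, "ups_load_pct": 62.0}
limits = {"chw_supply_temp_c": (5.0, 8.0), "ups_load_pct": (0.0, 80.0)}
breaches = check_thresholds(readings, limits)
```

In practice the breach list would feed the escalation process the listing mentions (notify the LL, raise a ticket) rather than just be returned.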
Posted 1 month ago