5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Overview
Cvent's Global Demand Center is seeking an organized, strategic marketing professional with AI experience to join our team as an Assistant Team Lead, Marketing Technology. Our ideal candidate is a skilled project manager with a passion for marketing technology, an understanding of how marketing systems intersect, and an eagerness to discover new solutions for business needs. At Cvent, you'll be part of a dynamic team that values innovation and creativity. You'll have the opportunity to work with cutting-edge technology and help drive our marketing efforts to new heights. If you're passionate about marketing technology and AI, and thrive in a collaborative environment, we want to hear from you!

In This Role, You Will
- Manage and Optimize AI-Driven Marketing Efforts: Oversee our end-to-end content supply chain and conversational AI initiatives, ensuring streamlined processes, especially those involving AI.
- Technical Expertise: Serve as a technical expert, onboarding new technologies and optimizing the use of existing tools in our marketing technology stack.
- Enablement and Training: Lead marketing technology enablement and training to ensure the marketing team fully utilizes the capabilities of our tools.
- Administration of AI Systems: Administer marketing AI systems (e.g., Conversational Email, chat AI, User Gems AI), build prompts and agents, and ensure effective tagging and categorization.
- Reporting and ROI Analysis: Assist marketing teams in reporting on the ROI of AI initiatives and participate in the Cvent AI council.
- Gap Identification and Requirement Development: Identify gaps and develop requirements for the automation of manual tasks to enhance marketing efficiency and effectiveness.
- Collaboration and Implementation: Collaborate with marketing team members to implement efficient AI strategies across different teams. Participate in the Cvent Machine Learning Academy.
- Evaluation of New Technologies: Evaluate new AI-focused marketing technologies for alignment with business objectives.
- Stay Updated on AI Trends: Stay abreast of the latest AI trends and innovations, recommending and implementing new tools or practices to enhance marketing efforts.

Here's What You Need
- Bachelor's/Master's degree in Marketing, Business, or a related field.
- Exceptional project management skills, including attention to detail, stakeholder engagement, project plan development, and deadline management with diverse teams.
- Advanced understanding of AI concepts and significant hands-on experience with tools like ChatGPT, Microsoft Azure, Claude, Google Gemini, Glean, etc.
- Skilled in crafting technical documentation and simplifying complex procedures.
- A minimum of 5 years of hands-on technical experience with various marketing technologies like marketing automation platforms, CRM and database platforms (e.g., Salesforce, Snowflake), and other tools (e.g., Drift, Cvent, 6Sense, Writer, Jasper.ai, Copy.ai).
- Strong capacity for understanding and fulfilling project requirements and expectations.
- Excellent communication and collaboration skills, with a strong command of the English language.
- Self-motivated, analytical, eager to learn, and able to thrive in a team environment.
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
CWX is looking for a dynamic SENIOR AI/ML ENGINEER to become a vital part of our vibrant PROFESSIONAL SERVICES TEAM, working on-site in Hyderabad. Join the energy and be part of the momentum!

At CloudWerx, we're looking for a Senior AI/ML Engineer to lead the design, development, and deployment of tailored AI/ML solutions for our clients. In this role, you'll work closely with clients to understand their business challenges and build innovative, scalable, and cost-effective solutions using tools like Google Cloud Platform (GCP), Vertex AI, Python, PyTorch, LangChain, and more. You'll play a key role in translating real-world problems into robust machine learning architectures, with a strong focus on Generative AI, multi-agent systems, and modern MLOps practices. From data preparation and ensuring data integrity to building and optimizing models, you'll be hands-on across the entire ML lifecycle, all while ensuring seamless deployment and scaling using cloud-native infrastructure.

Clear communication will be essential as you engage with both technical teams and business stakeholders, making complex AI concepts understandable and actionable. Your deep expertise in model selection, optimization, and deployment will help deliver high-performing solutions tailored to client needs. We're also looking for someone who stays ahead of the curve, someone who's constantly learning and experimenting with the latest developments in generative AI, LLMs, and cloud technologies. Your curiosity and drive will help push the boundaries of what's possible and fuel the success of the solutions we deliver.

This is a fantastic opportunity to join a fast-growing, engineering-led cloud consulting company that tackles some of the toughest challenges in the industry. At CloudWerx, every team member brings something unique to the table, and we foster a supportive environment that helps people do their best work. Our goal is simple: to be the best at what we do and help our clients accelerate their businesses through world-class cloud solutions. This role is an immediate full-time position.

Insight on your impact
- Conceptualize, Prototype, and Implement AI Solutions: Design and deploy advanced AI solutions using large language models (LLMs), diffusion models, and multimodal AI systems by leveraging Google Cloud tools such as Vertex AI, AutoML, and AI Platform (Agent Builder). Implement Retrieval-Augmented Generation (RAG) pipelines for chatbots and assistants, and create domain-specific transformers for NLP, vision, and cross-modal applications. Utilize Document AI, Translation AI, and Vision AI to develop full-stack, multimodal enterprise applications.
- Technical Expertise: Fine-tune models via LoRA, QLoRA, RLHF, and Dreambooth. Build multi-agent systems using Agent Development Kit (ADK), Agent-to-Agent (A2A) Protocol, and Model Context Protocol (MCP). Provide thought leadership on best practices, architecture patterns, and technical decisions across LLMs, generative AI, and custom ML pipelines, tailored to each client's unique business needs.
- Stakeholder Communication: Effectively communicate complex AI/ML concepts, architectures, and solutions to business leaders, technical teams, and non-technical stakeholders. Present project roadmaps, performance metrics, and model validation strategies to C-level executives and guide organizations through AI transformation initiatives.
- Understand Client Analytics & Modeling Needs: Collaborate with clients to extract, analyze, and interpret both internal and external data sources. Design and operationalize data pipelines that support exploratory analysis and model development, enabling business-aligned data insights and AI solutions.
- Database Management: Work with structured (SQL/BigQuery) and unstructured (NoSQL/Firestore, Cloud Storage) data. Apply best practices in data quality, versioning, and integrity across datasets used for training, evaluation, and deployment of AI/ML models.
- Cloud Expertise: Architect and deploy cloud-native AI/ML solutions using Google Cloud services including Vertex AI, BigQuery ML, Cloud Functions, Cloud Run, and GKE Autopilot. Provide consulting on GCP service selection, infrastructure scaling, and deployment strategies aligned with client requirements.
- MLOps & DevOps: Lead the implementation of robust MLOps and LLMOps pipelines using TensorFlow Extended (TFX), Kubeflow, and Vertex AI Pipelines. Set up CI/CD workflows using Cloud Build and Artifact Registry, and deploy scalable inference endpoints through Cloud Run and Agent Engine. Establish automated retraining, drift detection, and monitoring strategies for production ML systems.
- Prompt Engineering and Fine-Tuning: Apply advanced prompt engineering strategies (e.g., few-shot, in-context learning) to optimize LLM outputs. Fine-tune models using state-of-the-art techniques including LoRA, QLoRA, Dreambooth, ControlNet, and RLHF to enhance instruction-following and domain specificity of generative models.
- LLMs, Chatbots & Text Processing: Develop enterprise-grade chatbots and conversational agents using Retrieval-Augmented Generation (RAG), powered by both open-source and commercial LLMs. Build state-of-the-art generative solutions for tasks such as intelligent document understanding, summarization, and sentiment analysis. Implement LLMOps workflows for lifecycle management of large-scale language applications.
- Consistently Model and Promote Engineering Best Practices: Promote a culture of technical excellence by adhering to software engineering best practices including version control, reproducibility, structured documentation, Agile retrospectives, and continuous integration. Mentor junior engineers and establish guidelines for scalable, maintainable AI/ML development.

Our Diversity and Inclusion Commitment
At CloudWerx, we are dedicated to creating a workplace that values and celebrates diversity. We believe that a diverse and inclusive environment fosters innovation, collaboration, and mutual respect. We are committed to providing equal employment opportunities for all individuals, regardless of background, and actively promote diversity across all levels of our organization. We welcome all walks of life, as we are committed to building a team that embraces and mirrors a wide range of perspectives and identities. Join us in our journey toward a more inclusive and equitable workplace.

Background Check Requirement
All candidates for employment will be subject to pre-employment background screening for this position. All offers are contingent upon the successful completion of the background check. For additional information on the background check requirements and process, please reach out to us directly.

Our Story
CloudWerx is an engineering-focused cloud consulting firm born in Silicon Valley, in the heart of hyper-scale and innovative technology. We help businesses looking to architect, migrate, optimize, secure, or cut costs in their cloud environments. Our team has unique experience working in some of the most complex cloud environments at scale and can help businesses accelerate with confidence.
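As a concrete illustration of the Retrieval-Augmented Generation pattern this role centers on, the minimal Python sketch below shows the retrieve-then-prompt step: embed documents, rank them against a query, and assemble a grounded prompt. The embed() function is a toy stand-in and the document snippets are invented for the example; a real pipeline would call an actual embedding model and a managed vector store (for instance on Vertex AI) rather than the placeholders used here.

```python
# Toy RAG retrieval step: rank documents by cosine similarity to a query,
# then build a grounded prompt for an LLM. embed() is a stand-in for a real
# embedding model call; the documents below are made up for illustration.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Deterministic toy embedding purely for demonstration.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

docs = [
    "Cloud Run deploys containerized inference endpoints.",
    "Vertex AI Pipelines orchestrate training and evaluation steps.",
    "BigQuery ML trains models directly over warehouse tables.",
]
doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = doc_vecs @ embed(query)            # cosine similarity (unit vectors)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

query = "How do I serve a model endpoint?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # this grounded prompt would then be sent to the chosen LLM
```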
Posted 1 month ago
10.0 years
0 Lacs
Ghaziabad, Uttar Pradesh, India
On-site
Urgent Hiring || Thermal Engineer || Ghaziabad

Profile: Principal Thermal Engineer
Experience: Minimum 10 years
Salary: Up to 25 LPA (depending on the interview)
Location: Sahibabad, next to Ghaziabad

Key Responsibilities:
Thermal System Design & Optimization:
- Perform advanced thermal calculations to optimize heat exchangers, cooling towers, and energy recovery systems.
- Develop thermodynamic models (Rankine, Organic Rankine, Brayton, refrigeration cycles) to enhance system efficiency.
- Utilize CFD and FEA simulations for heat transfer, pressure drop, and flow distribution analysis.
- Conduct real-time performance monitoring and diagnostics for industrial thermal systems.
- Drive continuous improvement initiatives in thermal management, reducing energy losses.
Waste Heat Recovery & Thermal Audits:
- Lead comprehensive thermal audits, evaluating waste heat potential and energy savings opportunities.
- Develop and implement waste heat recovery systems for industrial processes.
- Assess and optimize heat-to-power conversion strategies for enhanced energy utilization.
- Conduct feasibility studies for thermal energy storage and process integration.
Heat Exchangers & Cooling Tower Performance:
- Design and analyze heat exchangers (shell & tube, plate, finned, etc.) for optimal heat transfer efficiency.
- Enhance cooling tower performance, focusing on heat rejection, drift loss reduction, and water treatment strategies.
- Oversee component selection, performance evaluation, and failure analysis for industrial cooling systems.
- Troubleshoot thermal inefficiencies and recommend design modifications.
Material Selection & Engineering Compliance:
- Guide material selection for high-temperature and high-pressure thermal applications.
- Evaluate thermal conductivity, corrosion resistance, creep resistance, and mechanical properties.
- Ensure all designs adhere to TEMA, ASME, API, CTI (Cooling Technology Institute), and industry standards.
Leadership & Innovation:
- Lead multi-disciplinary engineering teams to develop cutting-edge thermal solutions.
- Collaborate with manufacturing, R&D, and operations teams for process improvement.
- Provide technical mentorship and training to junior engineers.
- Stay ahead of emerging technologies in heat transfer, renewable energy, and thermal system efficiency.

Required Skills & Qualifications:
- Bachelor's/Master's/PhD in Mechanical Engineering, Thermal Engineering, or a related field.
- 10+ years of industry experience, specializing in thermal calculations, heat exchanger design, and waste heat recovery.
- Expertise in heat transfer, mass transfer, thermodynamics, and fluid mechanics.
- Hands-on experience with thermal simulation tools (ANSYS Fluent, Aspen Plus, MATLAB, COMSOL, EES).
- Strong background in thermal audits, cooling tower performance enhancement, and process heat recovery.
- Experience in industrial energy efficiency, power plant optimization, and heat recovery applications.
- In-depth knowledge of high-temperature alloys, corrosion-resistant materials, and structural analysis.
- Strong problem-solving skills with a research-driven and analytical mindset.
- Ability to lead projects, manage teams, and drive technical innovation.

Preferred Qualifications:
- Experience in power plants, ORC (Organic Rankine Cycle) systems, and industrial energy recovery projects.
- Expertise in advanced material engineering for high-performance thermal systems.
- Publications or patents in heat transfer, waste heat recovery, or energy efficiency technologies.

Compensation & Benefits:
- Competitive salary based on expertise and industry standards.
- Performance-based incentives and growth opportunities.
- Health and insurance benefits.
- Opportunities for leadership and R&D involvement.
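As a small illustration of the kind of heat-exchanger calculation this role references, the sketch below sizes a counter-flow exchanger with the log-mean temperature difference (LMTD) method. All input values are assumed example numbers, not figures from the posting.

```python
# Illustrative counter-flow heat exchanger sizing using the LMTD method.
# All numbers are assumed example values.
import math

m_hot, cp_hot = 2.0, 4180.0          # hot water flow [kg/s], specific heat [J/kg.K]
t_hot_in, t_hot_out = 90.0, 60.0     # hot-side temperatures [C]
t_cold_in, t_cold_out = 25.0, 45.0   # cold-side temperatures [C]
U = 850.0                            # overall heat transfer coefficient [W/m^2.K]

duty = m_hot * cp_hot * (t_hot_in - t_hot_out)   # heat duty [W]

dT1 = t_hot_in - t_cold_out                      # terminal temperature differences
dT2 = t_hot_out - t_cold_in
lmtd = (dT1 - dT2) / math.log(dT1 / dT2)         # log-mean temperature difference

area = duty / (U * lmtd)                         # required heat-transfer area [m^2]
print(f"Duty = {duty/1e3:.1f} kW, LMTD = {lmtd:.1f} K, Area = {area:.1f} m^2")
```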
Posted 1 month ago
5.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Job Summary
We're seeking a hands-on GenAI & Computer Vision Engineer with 3–5 years of experience delivering production-grade AI solutions. You must be fluent in the core libraries, tools, and cloud services listed below, and able to own end-to-end model development, from research and fine-tuning through deployment, monitoring, and iteration. In this role, you'll tackle domain-specific challenges like LLM hallucinations, vector search scalability, real-time inference constraints, and concept drift in vision models.

Key Responsibilities
Generative AI & LLM Engineering:
- Fine-tune and evaluate LLMs (Hugging Face Transformers, Ollama, LLaMA) for specialized tasks
- Deploy high-throughput inference pipelines using vLLM or Triton Inference Server
- Design agent-based workflows with LangChain or LangGraph, integrating vector databases (Pinecone, Weaviate) for retrieval-augmented generation
- Build scalable inference APIs with FastAPI or Flask, managing batching, concurrency, and rate-limiting
Computer Vision Development:
- Develop and optimize CV models (YOLOv8, Mask R-CNN, ResNet, EfficientNet, ByteTrack) for detection, segmentation, classification, and tracking
- Implement real-time pipelines using NVIDIA DeepStream or OpenCV (cv2); optimize with TensorRT or ONNX Runtime for edge and cloud deployments
- Handle data challenges (augmentation, domain adaptation, semi-supervised learning) and mitigate model drift in production
MLOps & Deployment:
- Containerize models and services with Docker; orchestrate with Kubernetes (KServe) or AWS SageMaker Pipelines
- Implement CI/CD for model/version management (MLflow, DVC), automated testing, and performance monitoring (Prometheus + Grafana)
- Manage scalability and cost by leveraging cloud autoscaling on AWS (EC2/EKS), GCP (Vertex AI), or Azure ML (AKS)
Cross-Functional Collaboration:
- Define SLAs for latency, accuracy, and throughput alongside product and DevOps teams
- Evangelize best practices in prompt engineering, model governance, data privacy, and interpretability
- Mentor junior engineers on reproducible research, code reviews, and end-to-end AI delivery

Required Qualifications
You must be proficient in at least one tool from each category below:
- LLM Frameworks & Tooling: Hugging Face Transformers, Ollama, vLLM, or LLaMA
- Agent & Retrieval Tools: LangChain or LangGraph; RAG with Pinecone, Weaviate, or Milvus
- Inference Serving: Triton Inference Server; FastAPI or Flask
- Computer Vision Frameworks & Libraries: PyTorch or TensorFlow; OpenCV (cv2) or NVIDIA DeepStream
- Model Optimization: TensorRT; ONNX Runtime; Torch-TensorRT
- MLOps & Versioning: Docker and Kubernetes (KServe, SageMaker); MLflow or DVC
- Monitoring & Observability: Prometheus; Grafana
- Cloud Platforms: AWS (SageMaker, EC2/EKS), GCP (Vertex AI, AI Platform), or Azure ML (AKS, ML Studio)
- Programming Languages: Python (required); C++ or Go (preferred)

Additionally:
- Bachelor's or Master's in Computer Science, Electrical Engineering, AI/ML, or a related field
- 3–5 years of professional experience shipping both generative and vision-based AI models in production
- Strong problem-solving mindset; ability to debug issues like LLM drift, vector index staleness, and model degradation
- Excellent verbal and written communication skills

Typical Domain Challenges You'll Solve
- LLM Hallucination & Safety: Implement grounding, filtering, and classifier layers to reduce false or unsafe outputs
- Vector DB Scaling: Maintain low-latency, high-throughput similarity search as embeddings grow to millions
- Inference Latency: Balance batch sizing and concurrency to meet real-time SLAs on cloud and edge hardware
- Concept & Data Drift: Automate drift detection and retraining triggers in vision and language pipelines
- Multi-Modal Coordination: Seamlessly orchestrate data flow between vision models and LLM agents in complex workflows

About Company
Hi there! We are Auriga IT. We power businesses across the globe through digital experiences, data, and insights. From the apps we design to the platforms we engineer, we're driven by an ambition to create world-class digital solutions and make an impact. Our team has been part of building solutions for the likes of Zomato, Yes Bank, Tata Motors, Amazon, Snapdeal, Ola, Practo, Vodafone, Meesho, Volkswagen, Droom, and many more. We are a group of people who just could not leave our college life behind, and the inception of Auriga was based solely on a desire to keep working together with friends and enjoying an extended college life. Who hasn't dreamt of working with friends for a lifetime? Come join in!
Our website: https://aurigait.com/
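The responsibilities above call out scalable inference APIs built with FastAPI; the sketch below shows the bare shape of such a service. The predict() stub and the request/response field names are assumptions for illustration only; a real deployment would load an actual model and add batching queues, concurrency limits, and rate-limiting.

```python
# Minimal sketch of a batched inference API using FastAPI. The predict()
# stub stands in for a real CV/LLM model call; field names are illustrative.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="inference-api")

class PredictRequest(BaseModel):
    inputs: list[str]          # batch of inputs, e.g. image URLs or prompts

class PredictResponse(BaseModel):
    outputs: list[str]

def predict(batch: list[str]) -> list[str]:
    # Placeholder model: a real service would run the loaded model here,
    # typically behind a micro-batching queue on GPU.
    return [f"label-for:{x}" for x in batch]

@app.post("/predict", response_model=PredictResponse)
def predict_endpoint(req: PredictRequest) -> PredictResponse:
    # Requests arrive already batched; production services may also coalesce
    # concurrent single requests into micro-batches before calling the model.
    return PredictResponse(outputs=predict(req.inputs))

# Run with: uvicorn app:app --host 0.0.0.0 --port 8000
```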
Posted 1 month ago
5.0 years
0 Lacs
India
Remote
We're seeking a Mid-Level Machine Learning Engineer to join our growing Data Science & Engineering team. In this role, you will design, develop, and deploy ML models that power our cutting-edge technologies like voice ordering, prediction algorithms, and customer-facing analytics. You'll collaborate closely with data engineers, backend engineers, and product managers to take models from prototyping through to production, continuously improving accuracy, scalability, and maintainability.

Essential Job Functions
- Model Development: Design and build next-generation ML models using advanced tools like PyTorch, Gemini, and Amazon SageMaker, primarily on Google Cloud or AWS platforms
- Feature Engineering: Build robust feature pipelines; extract, clean, and transform large-scale transactional and behavioral data. Engineer features like time-based attributes, aggregated order metrics, and categorical encodings (LabelEncoder, frequency encoding)
- Experimentation & Evaluation: Define metrics, run A/B tests, conduct cross-validation, and analyze model performance to guide iterative improvements. Train and tune regression models (XGBoost, LightGBM, scikit-learn, TensorFlow/Keras) to minimize MAE/RMSE and maximize R²
- Own the entire modeling lifecycle end-to-end, including feature creation, model development, testing, experimentation, monitoring, explainability, and model maintenance
- Monitoring & Maintenance: Implement logging, monitoring, and alerting for model drift and data-quality issues; schedule retraining workflows
- Collaboration & Mentorship: Collaborate closely with data science, engineering, and product teams to define, explore, and implement solutions to open-ended problems that advance the capabilities and applications of Checkmate; mentor junior engineers on best practices in ML engineering
- Documentation & Communication: Produce clear documentation of model architecture, data schemas, and operational procedures; present findings to technical and non-technical stakeholders

Requirements
- Academics: Bachelor's/Master's degree in Computer Science, Engineering, Statistics, or a related field
- Experience: 5+ years of industry experience (or 1+ year post-PhD) building and deploying advanced machine learning models that drive business impact. Proven experience shipping production-grade ML models and optimization systems, including expertise in experimentation and evaluation techniques. Hands-on experience building and maintaining scalable backend systems and ML inference pipelines for real-time or batch prediction
- Programming & Tools: Proficient in Python and libraries such as pandas, NumPy, and scikit-learn; familiarity with TensorFlow or PyTorch. Hands-on with at least one cloud ML platform (AWS SageMaker, Google Vertex AI, or Azure ML)
- Data Engineering: Hands-on experience with SQL and NoSQL databases; comfortable working with Spark or similar distributed frameworks. Strong foundation in statistics, probability, and ML algorithms like XGBoost/LightGBM; ability to interpret model outputs and optimize for business metrics. Experience with categorical encoding strategies and feature selection. Solid understanding of regression metrics (MAE, RMSE, R²) and hyperparameter tuning
- Cloud & DevOps: Proven skills deploying ML solutions in AWS, GCP, or Azure; knowledge of Docker, Kubernetes, and CI/CD pipelines
- Collaboration: Excellent communication skills; ability to translate complex technical concepts into clear, actionable insights
- Working Terms: Candidates must be flexible and available to work US hours (at least until 6 p.m. ET), which is essential for this role, and must have their own system/work setup for remote work
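To make the evaluation loop described above concrete (frequency encoding, gradient-boosted regression, MAE/RMSE/R²), here is a small self-contained sketch on synthetic data. The column names and the data-generating process are invented for the example and do not come from the posting.

```python
# Sketch: frequency-encode a categorical feature, fit a gradient-boosted
# regressor, and report MAE / RMSE / R² on a held-out split.
import numpy as np
import pandas as pd
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "store": rng.choice(["A", "B", "C"], size=500),
    "hour": rng.integers(0, 24, size=500),
    "items": rng.integers(1, 10, size=500),
})
# Synthetic target: base time plus per-item cost and an evening-rush bump.
df["prep_minutes"] = 5 + 2 * df["items"] + (df["hour"] > 17) * 3 + rng.normal(0, 1, 500)

# Frequency encoding: replace each category with how often it occurs.
df["store_freq"] = df["store"].map(df["store"].value_counts(normalize=True))

X = df[["store_freq", "hour", "items"]]
y = df["prep_minutes"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = HistGradientBoostingRegressor(max_iter=200, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)

mae = mean_absolute_error(y_test, pred)
rmse = np.sqrt(mean_squared_error(y_test, pred))
r2 = r2_score(y_test, pred)
print(f"MAE={mae:.2f}  RMSE={rmse:.2f}  R2={r2:.3f}")
```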
Posted 1 month ago
3.0 years
0 Lacs
India
On-site
Dexcent Inc. has an opportunity for an Advanced Process Control Engineer to join our growing team in Fort McMurray, AB. The individual will be required to be on site and to relocate to Canada. This opportunity is open to candidates both locally and internationally. We are prepared to support relocation for successful candidates, including assistance with visa sponsorship where applicable.

About Dexcent
Founded in 2006, Dexcent Inc. (Dexcent) is an engineering consulting firm that provides a range of specialized solutions for clients in a variety of industries throughout the world. Our professionals have modernized IT and OT engineering methodologies into comprehensive solutions, specializing in information analytics, cybersecurity, infrastructure, and control systems engineering. As such, we pride ourselves on truly transforming industrial operations to optimize business performance and deliver bottom-line results.

About the Candidate
Our ideal candidate has the following skills and experience:
- Bachelor's Degree in Chemical Engineering, Process Engineering, Electrical Engineering, Control Systems Engineering, or a related field preferred.
- Advanced Process Control (APC) Certification from recognized institutions or organizations offering specialized training in APC technologies and methodologies preferred.
- 3+ years of experience with OSIsoft PI Historian for real-time data acquisition, storage, analysis, and visualization in an industrial or process control environment is required.
- 5+ years of experience with Honeywell Experion PKS, ESVT/EST, and Honeywell TPS/TDC3000.
- Process controllers: C300, C200, HPM, APM.
- Integration of a wide variety of third-party PLCs/devices with DCS.
- Industrial communication: Modbus RTU and Modbus TCP/IP; serial communication over RS-485, RS-422, RS-232.
- Networking skills: Fault Tolerant Ethernet (FTE), Ethernet.
- Configure and administer Windows domains (Experion and TPS domains).
- OPC connectivity between process control servers and other third-party servers.
- MS Excel macro-based tools.
- Experience configuring, implementing, and troubleshooting process control schemes in Honeywell TDC3000 (basic control algorithms and CL/AM code) and PMX (basic control algorithms).

Key Responsibilities
- Routine monitoring of APCs: this includes troubleshooting APC-related issues, direct communication with process engineers and panel operators to address any issues affecting their ability to keep the APCs online, ensuring the optimization goals of the APCs are being met, and evaluating the performance of soft sensors (inferentials) to ensure they do not drift from laboratory readings.
- Reporting to relevant stakeholders about APC performance: this includes a weekly APC report; monthly inferential and APC stewardship reports; and monthly, quarterly, and year-to-date KPIs.
- Development of new APC solutions and maintenance of existing applications. Because the APC team is regional, this may be at any of our client sites.
- APC training of operators and other interested parties.
- Data analysis, done in Excel or a number of different applications such as SEEQ or the Honeywell suite of products.
- Support software and hardware upgrade activities that involve APC-related equipment.
- Other related duties as required.

At Dexcent we recognize that people are our most important asset. We appreciate the diverse experience, knowledge, creativity, and personality that each person brings to our organization, our culture, and our success. If this sounds like the right career opportunity for you, apply today!
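As a simple illustration of the inferential-versus-laboratory comparison mentioned in the responsibilities above, the sketch below computes the bias of a soft sensor against lab readings and flags it when it exceeds a tolerance. The sample values and tolerance are assumptions for the example; a real check would pull data from the PI Historian and use site-specific limits.

```python
# Illustrative drift check of a soft sensor (inferential) against lab readings.
# Values and the tolerance are assumed examples.
import statistics

TOLERANCE = 0.05   # assumed acceptable bias versus laboratory (property units)

# (inferential prediction, lab reading) pairs for one product property
samples = [
    (0.92, 0.90), (0.88, 0.89), (0.95, 0.91), (0.97, 0.92),
    (1.01, 0.93), (1.03, 0.94), (1.05, 0.95), (1.08, 0.96),
]

errors = [pred - lab for pred, lab in samples]
bias = statistics.mean(errors)        # systematic offset of the soft sensor
spread = statistics.stdev(errors)     # variability of the error

if abs(bias) > TOLERANCE:
    print(f"bias={bias:.3f}, sigma={spread:.3f} -> drifting; consider re-biasing the inferential")
else:
    print(f"bias={bias:.3f}, sigma={spread:.3f} -> within tolerance")
```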
Posted 1 month ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About McDonald's:
One of the world's largest employers with locations in more than 100 countries, McDonald's Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Position Summary: Data Quality Engineering Support (Manager, Data Operations & Management)
As the Data Quality Engineering Support Manager, you will be responsible for implementing, scaling, and supporting enterprise-wide data quality frameworks across cloud-native data platforms. You will drive hands-on initiatives to monitor, validate, and reconcile data across ingestion, processing, and consumption layers, enabling trusted data for analytics, AI/ML, and operational workflows. This role requires a blend of technical depth, platform expertise (e.g., BigQuery, Redshift, GCP, AWS), and collaboration across engineering, governance, and product teams.

Who we are looking for:
Primary Responsibilities:
Data Quality Engineering & Monitoring:
- Implement and support automated data quality checks for accuracy, completeness, timeliness, and consistency across datasets.
- Develop validation frameworks for ingestion pipelines, curated layers, and reporting models in platforms like BigQuery and Redshift.
- Integrate data quality controls into CI/CD pipelines and orchestration tools (e.g., Airflow, Cloud Composer).
Data Quality Operations:
- Respond to and resolve data quality incidents and discrepancies across data domains and systems.
- Collaborate with engineering and product teams to implement root cause analysis and build long-term remediation strategies.
- Establish SLAs and alerting thresholds for data quality KPIs.
Cloud Platform Integration:
- Deploy scalable data quality monitoring solutions across GCP (BigQuery, Cloud Storage) and AWS (Redshift, S3, Glue).
- Leverage platform-native services and third-party tools for automated profiling, rule enforcement, and anomaly detection.
Governance Alignment:
- Partner with data governance teams to align quality rules with business glossary terms, reference data, and stewardship models.
- Integrate quality metadata into governance tools such as Collibra, enabling lineage and audit tracking.
Documentation, Enablement & Collaboration:
- Maintain playbooks, documentation, and automated reporting for quality audits and exception handling.
- Collaborate with data owners, analysts, and data product teams to promote a culture of data trust and shared ownership.
- Provide training and knowledge-sharing to enable self-service quality monitoring and issue triaging.

Skills:
- 5+ years of experience in data quality engineering, data operations, or data pipeline support, ideally in a cloud-first environment.
- Hands-on expertise in: building and managing data quality checks (e.g., null checks, duplicates, data drift, schema mismatches); SQL and Python for quality validation and automation; working with cloud-native data stacks (BigQuery, Redshift, GCS, S3); and data quality monitoring tools or frameworks (e.g., Dataplex, Lightup, Collibra).
- Strong troubleshooting skills across distributed data systems and pipelines.
- Bachelor's degree in Computer Science, Information Systems, or a related field.

Preferred Experience:
- Experience in Retail or Quick Service Restaurant (QSR) environments with high-volume data ingestion and reporting requirements.
- Familiarity with data governance platforms (e.g., Collibra) and integration of quality rules into metadata models.
- Exposure to AI/ML data pipelines and the impact of data quality on model performance.
- Current GCP Associate (or Professional) certification.

Work location: Hyderabad, India
Work hours:
Work pattern: Full-time role.
Work mode: Hybrid.

Additional Information:
McDonald's is committed to providing qualified individuals with disabilities with reasonable accommodations to perform the essential functions of their jobs. McDonald's provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to sex, sex stereotyping, pregnancy (including pregnancy, childbirth, and medical conditions related to pregnancy, childbirth, or breastfeeding), race, color, religion, ancestry or national origin, age, disability status, medical condition, marital status, sexual orientation, gender, gender identity, gender expression, transgender status, protected military or veteran status, citizenship status, genetic information, or any other characteristic protected by federal, state or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.

McDonald's Capability Center India Private Limited ("McDonald's in India") is a proud equal opportunity employer and is committed to hiring a diverse workforce and sustaining an inclusive culture. At McDonald's in India, employment decisions are based on merit, job requirements, and business needs, and all qualified candidates are considered for employment. McDonald's in India does not discriminate based on race, religion, colour, age, gender, marital status, nationality, ethnic origin, sexual orientation, political affiliation, veteran status, disability status, medical history, parental status, genetic information, or any other basis protected under state or local laws. Nothing in this job posting or description should be construed as an offer or guarantee of employment.
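The hands-on skills above include building null, duplicate, and schema checks in Python; the following is a minimal sketch of such checks over a pandas DataFrame. The expected schema and sample data are invented for the example; production checks would run against BigQuery or Redshift tables through the appropriate client and feed alerting thresholds.

```python
# Minimal data quality checks: schema mismatch, null checks, and duplicates.
# Table layout and sample rows are made-up examples.
import pandas as pd

EXPECTED_SCHEMA = {"order_id": "int64", "store_id": "int64", "amount": "float64"}

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    issues = []

    # Schema checks: every expected column present with the expected dtype.
    for col, dtype in EXPECTED_SCHEMA.items():
        if col not in df.columns:
            issues.append(f"missing column: {col}")
        elif str(df[col].dtype) != dtype:
            issues.append(f"dtype mismatch on {col}: {df[col].dtype} vs {dtype}")

    # Completeness: no nulls allowed in key columns.
    for col in EXPECTED_SCHEMA:
        if col in df.columns and df[col].isna().any():
            issues.append(f"nulls found in {col}: {int(df[col].isna().sum())}")

    # Uniqueness: order_id must not be duplicated.
    if "order_id" in df.columns:
        dupes = int(df["order_id"].duplicated().sum())
        if dupes:
            issues.append(f"duplicate order_id rows: {dupes}")

    return issues

sample = pd.DataFrame({
    "order_id": [1, 2, 2],
    "store_id": [10, 11, None],
    "amount": [9.5, 12.0, 7.25],
})
print(run_quality_checks(sample) or "all checks passed")
```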
Posted 1 month ago
3.0 - 5.0 years
3 - 6 Lacs
Hyderābād
On-site
Experience Range: 3 to 5 Years
Notice Period: Immediate Joiners Preferred
Interview Rounds: 1 internal technical round, 1 client round (face-to-face mandatory)

Key Responsibilities
- Build and maintain robust MLOps pipelines to automate machine learning workflows from data ingestion to model deployment.
- Develop production-grade CI/CD pipelines for model development and deployment using tools such as Git, Jenkins, and Docker.
- Work with Data Scientists, Data Engineers, and DevOps teams to enable continuous integration and delivery of ML models.
- Use GCP services like BigQuery, Dataproc, and Cloud Composer (Airflow) to orchestrate and scale ML jobs and data pipelines.
- Ensure model versioning, data lineage, and reproducibility of results across various environments.
- Implement monitoring and logging solutions to track ML model performance post-deployment.
- Troubleshoot issues in data pipelines, model drift, or deployment errors in real-time production settings.

Must-Have Skills
- 3–5 years of experience in MLOps and Machine Learning Engineering roles.
- Strong command of Python, especially for ML and data pipeline development.
- Experience with PySpark for handling large-scale distributed data processing.
- Hands-on expertise with key GCP services: BigQuery for scalable data warehousing and analytics; Dataproc for managed Spark/Hadoop workloads; Cloud Composer (Airflow) for orchestration and workflow automation.
- Deep understanding of model training, validation, deployment, and monitoring lifecycles.
- Proficiency in setting up and maintaining CI/CD pipelines for ML applications.
- Knowledge of containerization (Docker) and version control (Git).

Good-to-Have Skills
- Exposure to ML frameworks like TensorFlow, PyTorch, or Scikit-learn.
- Familiarity with Kubernetes or other orchestration tools.
- Experience working in regulated industries such as BFSI or Healthcare.

Job Types: Full-time, Permanent
Schedule: Monday to Friday
Application Question(s): Will you be able to attend an in-person interview in Hyderabad?
Work Location: In person
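Since the role leans on Cloud Composer (Airflow) for orchestration, here is a minimal sketch of a daily ML pipeline DAG with an ingest-train-validate chain. The task bodies are print stubs, and the DAG id, schedule, and function names are assumptions for illustration, not part of the posting.

```python
# Minimal Airflow DAG sketch: ingest -> train -> validate, scheduled daily.
# Task bodies are stubs; in Cloud Composer these would call BigQuery,
# Dataproc, or model-training services instead of printing.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("pull source data into a staging table")

def train():
    print("submit a training job (e.g., Dataproc or a managed ML service)")

def validate():
    print("evaluate the candidate model and gate deployment")

with DAG(
    dag_id="ml_pipeline_example",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_ingest = PythonOperator(task_id="ingest", python_callable=ingest)
    t_train = PythonOperator(task_id="train", python_callable=train)
    t_validate = PythonOperator(task_id="validate", python_callable=validate)

    t_ingest >> t_train >> t_validate   # linear dependency chain
```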
Posted 1 month ago
5.0 years
0 Lacs
Hyderābād
On-site
Overview:
We are seeking a skilled Associate Manager – AIOps & MLOps Operations to support and enhance the automation, scalability, and reliability of AI/ML operations across the enterprise. This role requires a solid understanding of AI-driven observability, machine learning pipeline automation, cloud-based AI/ML platforms, and operational excellence. The ideal candidate will assist in deploying AI/ML models, ensuring continuous monitoring, and implementing self-healing automation to improve system performance, minimize downtime, and enhance decision-making with real-time AI-driven insights.

Responsibilities:
- Support and maintain AIOps and MLOps programs, ensuring alignment with business objectives, data governance standards, and enterprise data strategy.
- Assist in implementing real-time data observability, monitoring, and automation frameworks to enhance data reliability, quality, and operational efficiency.
- Contribute to developing governance models and execution roadmaps to drive efficiency across data platforms, including Azure, AWS, GCP, and on-prem environments.
- Ensure seamless integration of CI/CD pipelines, data pipeline automation, and self-healing capabilities across the enterprise.
- Collaborate with cross-functional teams to support the development and enhancement of next-generation Data & Analytics (D&A) platforms.
- Assist in managing the people, processes, and technology involved in sustaining Data & Analytics platforms, driving operational excellence and continuous improvement.
- Support Data & Analytics technology transformations by ensuring proactive issue identification and the automation of self-healing capabilities across the PepsiCo Data Estate.
- Support the implementation of AIOps strategies for automating IT operations using Azure Monitor, Azure Log Analytics, and AI-driven alerting.
- Assist in deploying Azure-based observability solutions (Azure Monitor, Application Insights, Azure Synapse for log analytics, and Azure Data Explorer) to enhance real-time system performance monitoring.
- Enable AI-driven anomaly detection and root cause analysis (RCA) by collaborating with data science teams using Azure Machine Learning (Azure ML) and AI-powered log analytics.
- Contribute to developing self-healing and auto-remediation mechanisms using Azure Logic Apps, Azure Functions, and Power Automate to proactively resolve system issues.
- Support ML lifecycle automation using Azure ML, Azure DevOps, and Azure Pipelines for CI/CD of ML models.
- Assist in deploying scalable ML models with Azure Kubernetes Service (AKS), Azure Machine Learning Compute, and Azure Container Instances.
- Automate feature engineering, model versioning, and drift detection using Azure ML Pipelines and MLflow.
- Optimize ML workflows with Azure Data Factory, Azure Databricks, and Azure Synapse Analytics for data preparation and ETL/ELT automation.
- Implement basic monitoring and explainability for ML models using Azure Responsible AI Dashboard and InterpretML.
- Collaborate with Data Science, DevOps, CloudOps, and SRE teams to align AIOps/MLOps strategies with enterprise IT goals.
- Work closely with business stakeholders and IT leadership to implement AI-driven insights and automation to enhance operational decision-making.
- Track and report AI/ML operational KPIs, such as model accuracy, latency, and infrastructure efficiency.
- Assist in coordinating with cross-functional teams to maintain system performance and ensure operational resilience.
- Support the implementation of AI ethics, bias mitigation, and responsible AI practices using Azure Responsible AI Toolkits.
- Ensure adherence to Azure Information Protection (AIP), Role-Based Access Control (RBAC), and data security policies.
- Assist in developing risk management strategies for AI-driven operational automation in Azure environments.
- Prepare and present program updates, risk assessments, and AIOps/MLOps maturity progress to stakeholders as needed.
- Support efforts to attract and build a diverse, high-performing team to meet current and future business objectives.
- Help remove barriers to agility and enable the team to adapt quickly to shifting priorities without losing productivity.
- Contribute to developing the appropriate organizational structure, resource plans, and culture to support business goals.
- Leverage technical and operational expertise in cloud and high-performance computing to understand business requirements and earn trust with stakeholders.

Qualifications:
- 5+ years of technology work experience in a global organization, preferably in CPG or a similar industry.
- 5+ years of experience in the Data & Analytics field, with exposure to AI/ML operations and cloud-based platforms.
- 5+ years of experience working within cross-functional IT or data operations teams.
- 2+ years of experience in a leadership or team coordination role within an operational or support environment.
- Experience in AI/ML pipeline operations, observability, and automation across platforms such as Azure, AWS, and GCP.
- Excellent communication: ability to convey technical concepts to diverse audiences and empathize with stakeholders while maintaining confidence.
- Customer-centric approach: strong focus on delivering the right customer experience by advocating for customer needs and ensuring issue resolution.
- Problem ownership and accountability: proactive mindset to take ownership, drive outcomes, and ensure customer satisfaction.
- Growth mindset: willingness and ability to adapt and learn new technologies and methodologies in a fast-paced, evolving environment.
- Operational excellence: experience in managing and improving large-scale operational services with a focus on scalability and reliability.
- Site reliability and automation: understanding of SRE principles, automated remediation, and operational efficiencies.
- Cross-functional collaboration: ability to build strong relationships with internal and external stakeholders through trust and collaboration.
- Familiarity with CI/CD processes, data pipeline management, and self-healing automation frameworks.
- Strong understanding of data acquisition, data catalogs, data standards, and data management tools.
- Knowledge of master data management concepts, data governance, and analytics.
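As one concrete way to implement the drift detection mentioned above, the sketch below computes a population stability index (PSI) between a reference feature distribution and a recent production window. The data, bin count, and 0.25 threshold are illustrative assumptions; in practice such a check would be wired into the Azure ML or MLflow pipelines named in the posting.

```python
# Framework-agnostic drift check using the population stability index (PSI).
# Data and thresholds are illustrative assumptions.
import numpy as np

def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_cnt, _ = np.histogram(reference, bins=edges)
    cur_cnt, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions, avoiding zeros so the log is defined.
    ref_pct = np.clip(ref_cnt / ref_cnt.sum(), 1e-6, None)
    cur_pct = np.clip(cur_cnt / cur_cnt.sum(), 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)   # reference window
live_feature = rng.normal(loc=0.7, scale=1.1, size=5000)    # shifted production window

score = psi(train_feature, live_feature)
# Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 significant drift.
print(f"PSI = {score:.3f} -> {'retrain candidate' if score > 0.25 else 'ok'}")
```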
Posted 1 month ago
5.0 years
0 Lacs
Hyderābād
On-site
Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
As a Systems Programmer at Kyndryl, you'll have the opportunity to shape the very foundation of the technology that drives our world. Your work will involve developing, testing, and maintaining the software that controls a computer's operating system, hardware, and other systems software. You'll be a master troubleshooter and problem-solver, with the skills to fix even the most complex issues that arise. Not only will you be responsible for ensuring the security of our computer systems, but you'll also work closely with other IT professionals to design and implement cutting-edge technology that keeps Kyndryl ahead of the curve.

In this role, you'll provide the underlying Mainframe operating system platform programming and DBDC subsystem programming support that forms the backbone of our applications. You'll guide functional objectives on technologies and make use of your expert knowledge to drive solutions to complex problems. As a leader in this field, you'll also be expected to conduct RCA discussions for the products you work on and provide ongoing technical and operational guidance to lead professional work teams. You may even manage departments on a national or international level, defining objectives and managing resources to ensure the success of your projects. Your expertise will be crucial in influencing people outside of your department or function, and you'll have the opportunity to directly shape the technology landscape of the world we live in. If you're looking for an exciting and challenging role in the fast-paced world of systems programming, Kyndryl is the place for you!

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major hyperscaler platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.

Required Skills and Experience
- 5+ years of experience in a z/OS System Administration environment with work management, user management, journal management and performance management.
- Responsible for installation, maintenance and upgrades for z/OS and ISV products.
- Responsible for problem determination and remediation; work closely with customer, vendor and internal teams to remediate problems.
- Adopt the given technology to meet the drift of customer and business requirements.
- Act as support and domain expert for the z/OS operating system and system components; provide direct technical support as needed in the planning, coordination, installation, implementation and testing of releases, upgrades, or changes to z/OS operating system, network, and component software.
- Primary support for assigned ISV products, along with diagnosing z/OS platform system and product issues and following up with root cause analysis.
- Analyze performance issues while providing technical consultation and handling inquiries from other IT technical teams.
- Support for new product installation and evaluation as needed.
- Hands-on expertise/knowledge in z/OS, SMP/E, SMF, RMF, GRS, JCL, WLM, RACF, VTAM, TCPIP, USS/OMVS, etc.

Preferred Skills and Experience
- Bachelor's degree in computer science or a related field.
- Troubleshoot infrastructure and application issues related to z/OS.
- Participate in Disaster Recovery planning and tests as scheduled.

Being You
Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.
Posted 1 month ago
1.0 - 2.0 years
1 - 5 Lacs
Gurgaon
On-site
Job Description Alimentation Couche-Tard Inc., (ACT) is a global Fortune 200 company and a leader in the convenience store and fuel space with over 16,700 stores. It has footprints across 31 countries and territories. Circle K India Data & Analytics team is an integral part of ACT’s Global Data & Analytics Team, and the Associate ML Ops Analyst will be a key player on this team that will help grow analytics globally at ACT. The hired candidate will partner with multiple departments, including Global Marketing, Merchandising, Global Technology, and Business Units. About the role The incumbent will be responsible for implementing Azure data services to deliver scalable and sustainable solutions, build model deployment and monitor pipelines to meet business needs. Roles & Responsibilities Development and Integration Collaborate with data scientists to deploy ML models into production environments Implement and maintain CI/CD pipelines for machine learning workflows Use version control tools (e.g., Git) and ML lifecycle management tools (e.g., MLflow) for model tracking, versioning, and management. Design, build as well as optimize applications containerization and orchestration with Docker and Kubernetes and cloud platforms like AWS or Azure Automation & Monitoring Automating pipelines using understanding of Apache Spark and ETL tools like Informatica PowerCenter, Informatica BDM or DEI, Stream Sets and Apache Airflow Implement model monitoring and alerting systems to track model performance, accuracy, and data drift in production environments. Collaboration and Communication Work closely with data scientists to ensure that models are production-ready Collaborate with Data Engineering and Tech teams to ensure infrastructure is optimized for scaling ML applications. Optimization and Scaling Optimize ML pipelines for performance and cost-effectiveness Operational Excellence Help the Data teams leverage best practices to implement Enterprise level solutions. Follow industry standards in coding solutions and follow programming life cycle to ensure standard practices across the project Helping to define common coding standards and model monitoring performance best practices Continuously evaluate the latest packages and frameworks in the ML ecosystem Build automated model deployment data engineering pipelines from plain Python/PySpark mode Stakeholder Engagement Collaborate with Data Scientists, Data Engineers, cloud platform and application engineers to create and implement cloud policies and governance for ML model life cycle. Job Requirements Education & Relevant Experience Bachelor’s degree required, preferably with a quantitative focus (Statistics, Business Analytics, Data Science, Math, Economics, etc.) Master’s degree preferred (MBA/MS Computer Science/M.Tech Computer Science, etc.) 1-2 years of relevant working experience in MLOps Behavioural Skills Delivery Excellence Business disposition Social intelligence Innovation and agility Knowledge Knowledge of core computer science concepts such as common data structures and algorithms, OOPs Programming languages (R, Python, PySpark, etc.) Big data technologies & framework (AWS, Azure, GCP, Hadoop, Spark, etc.) Enterprise reporting systems, relational (MySQL, Microsoft SQL Server etc.), non-relational (MongoDB, DynamoDB) database management systems and Data Engineering tools Exposure to ETL tools and version controlling Experience in building and maintaining CI/CD pipelines for ML models. 
Understanding of machine learning, information retrieval, or recommendation systems. Familiarity with DevOps tools (Docker, Kubernetes, Jenkins, GitLab).
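The MLflow model tracking and versioning responsibilities above can be illustrated with a minimal sketch. The experiment name, parameters, and toy dataset below are illustrative placeholders, not anything specified by the posting.

```python
# Minimal MLflow tracking sketch: log parameters, metrics, and a model artifact,
# then register it so a CI/CD pipeline can promote a specific version.
# Names ("demand-forecast", the synthetic data) are illustrative placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=10, noise=0.1, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("demand-forecast")
with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestRegressor(**params, random_state=42).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("mae", mean_absolute_error(y_test, model.predict(X_test)))

    # Logging with registered_model_name creates/updates a registry entry,
    # giving downstream deployment jobs an explicit model version to pull.
    mlflow.sklearn.log_model(model, "model", registered_model_name="demand-forecast-rf")
```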
Posted 1 month ago
3.0 years
3 - 6 Lacs
Jaipur
On-site
Job Summary We’re seeking a hands-on GenAI & Computer Vision Engineer with 3–5 years of experience delivering production-grade AI solutions. You must be fluent in the core libraries, tools, and cloud services listed below, and able to own end-to-end model development—from research and fine-tuning through deployment, monitoring, and iteration. In this role, you’ll tackle domain-specific challenges like LLM hallucinations, vector search scalability, real-time inference constraints, and concept drift in vision models. Key Responsibilities Generative AI & LLM Engineering Fine-tune and evaluate LLMs (Hugging Face Transformers, Ollama, LLaMA) for specialized tasks Deploy high-throughput inference pipelines using vLLM or Triton Inference Server Design agent-based workflows with LangChain or LangGraph, integrating vector databases (Pinecone, Weaviate) for retrieval-augmented generation Build scalable inference APIs with FastAPI or Flask, managing batching, concurrency, and rate-limiting Computer Vision Development Develop and optimize CV models (YOLOv8, Mask R-CNN, ResNet, EfficientNet, ByteTrack) for detection, segmentation, classification, and tracking Implement real-time pipelines using NVIDIA DeepStream or OpenCV (cv2); optimize with TensorRT or ONNX Runtime for edge and cloud deployments Handle data challenges—augmentation, domain adaptation, semi-supervised learning—and mitigate model drift in production MLOps & Deployment Containerize models and services with Docker; orchestrate with Kubernetes (KServe) or AWS SageMaker Pipelines Implement CI/CD for model/version management (MLflow, DVC), automated testing, and performance monitoring (Prometheus + Grafana) Manage scalability and cost by leveraging cloud autoscaling on AWS (EC2/EKS), GCP (Vertex AI), or Azure ML (AKS) Cross-Functional Collaboration Define SLAs for latency, accuracy, and throughput alongside product and DevOps teams Evangelize best practices in prompt engineering, model governance, data privacy, and interpretability Mentor junior engineers on reproducible research, code reviews, and end-to-end AI delivery Required Qualifications You must be proficient in at least one tool from each category below: LLM Frameworks & Tooling: Hugging Face Transformers, Ollama, vLLM, or LLaMA Agent & Retrieval Tools: LangChain or LangGraph; RAG with Pinecone, Weaviate, or Milvus Inference Serving: Triton Inference Server; FastAPI or Flask Computer Vision Frameworks & Libraries: PyTorch or TensorFlow; OpenCV (cv2) or NVIDIA DeepStream Model Optimization: TensorRT; ONNX Runtime; Torch-TensorRT MLOps & Versioning: Docker and Kubernetes (KServe, SageMaker); MLflow or DVC Monitoring & Observability: Prometheus; Grafana Cloud Platforms: AWS (SageMaker, EC2/EKS) or GCP (Vertex AI, AI Platform) or Azure ML (AKS, ML Studio) Programming Languages: Python (required); C++ or Go (preferred) Additionally: Bachelor’s or Master’s in Computer Science, Electrical Engineering, AI/ML, or a related field 3–5 years of professional experience shipping both generative and vision-based AI models in production Strong problem-solving mindset; ability to debug issues like LLM drift, vector index staleness, and model degradation Excellent verbal and written communication skills Typical Domain Challenges You’ll Solve LLM Hallucination & Safety: Implement grounding, filtering, and classifier layers to reduce false or unsafe outputs Vector DB Scaling: Maintain low-latency, high-throughput similarity search as embeddings grow to millions Inference Latency: Balance batch sizing 
and concurrency to meet real-time SLAs on cloud and edge hardware Concept & Data Drift: Automate drift detection and retraining triggers in vision and language pipelines Multi-Modal Coordination: Seamlessly orchestrate data flow between vision models and LLM agents in complex workflows About Company Hi there! We are Auriga IT. We power businesses across the globe through digital experiences, data, and insights. From the apps we design to the platforms we engineer, we're driven by an ambition to create world-class digital solutions and make an impact. Our team has been part of building solutions for the likes of Zomato, Yes Bank, Tata Motors, Amazon, Snapdeal, Ola, Practo, Vodafone, Meesho, Volkswagen, Droom, and many more. We are a group of people who just could not leave our college life behind; Auriga was founded on a desire to keep working together with friends and enjoy an extended college life. Who hasn't dreamt of working with friends for a lifetime? Come join in: https://www.aurigait.com/
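As a small illustration of the "scalable inference APIs with FastAPI, managing batching, concurrency, and rate-limiting" responsibility in this role, here is a minimal sketch; the detector logic, route name, and concurrency limit are hypothetical stand-ins, not part of the posting.

```python
# Minimal FastAPI inference-serving sketch with request validation and a simple
# concurrency cap. The run_model body is a placeholder for a real YOLO/ONNX call.
import asyncio
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="cv-inference-demo")
MAX_CONCURRENT_REQUESTS = 8
semaphore = asyncio.Semaphore(MAX_CONCURRENT_REQUESTS)

class DetectionRequest(BaseModel):
    image_url: str
    confidence_threshold: float = 0.5

class DetectionResponse(BaseModel):
    boxes: list[list[float]]
    labels: list[str]

async def run_model(request: DetectionRequest) -> DetectionResponse:
    # Placeholder for real detector inference executed off the event loop.
    await asyncio.sleep(0.01)
    return DetectionResponse(boxes=[[0.1, 0.2, 0.4, 0.5]], labels=["person"])

@app.post("/detect", response_model=DetectionResponse)
async def detect(request: DetectionRequest) -> DetectionResponse:
    if not request.image_url:
        raise HTTPException(status_code=400, detail="image_url is required")
    # The semaphore bounds in-flight inferences so a traffic spike degrades
    # gracefully instead of exhausting GPU memory.
    async with semaphore:
        return await run_model(request)
```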
Posted 1 month ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Hi connections! We are hiring a Senior Machine Learning Engineer, Platform Team • 5+ years • Full-Time • Bangalore --- Role Overview We are seeking a Senior Machine Learning Engineer with strong expertise in Python, EDA, and ML model development, and hands-on experience in LLMOps using frameworks like LangChain and LangGraph. You will be instrumental in building and operationalizing scalable ML solutions on our AI-driven platform. Location: Bangalore (preferred) | Hybrid options may be considered. Experience: 5+ years in Machine Learning, Data Science, or related fields. Key Responsibilities Design and develop ML models using supervised and unsupervised learning techniques for real-world business applications. Build, deploy, and manage end-to-end ML pipelines. Create robust data processing pipelines and APIs (preferably using FastAPI). Manage deployment and monitoring of ML/DL models, ensuring model reproducibility, concept/data drift monitoring, and versioning of code, model, and data. Handle data wrangling and feature engineering for both text and image datasets using libraries like Pandas, NumPy, OpenCV, PIL, spaCy, and Hugging Face Transformers. Utilize ML frameworks such as Scikit-learn, PyTorch, TensorFlow, or Keras. Work with LLMOps frameworks like LangChain, LangGraph, or similar to operationalize large language models (LLMs). Implement prompt engineering and fine-tuning techniques for LLMs, including using APIs such as OpenAI, Mistral, and Nova. Collaborate closely with DevOps to integrate ML workflows with CI/CD pipelines, Docker, Kubernetes, and cloud platforms like AWS, GCP, or Azure. Contribute to scaling and maintaining ML systems leveraging MLflow, GitHub Actions, and monitoring stacks like ELK. Preferred Skills Deep understanding of cloud-native ML practices and MLOps tools. Strong programming and statistical analysis skills. Hands-on experience with deep learning models and computer vision tasks. Strong verbal and written communication skills. Highly developed attention to detail and structured problem-solving abilities. Ability to thrive in collaborative team environments. Strong presentation skills to articulate technical solutions to non-technical stakeholders. Good to Have Exposure to other LLM ecosystems (e.g., Hugging Face, open-source LLMs). Contributions to open-source ML/LLM frameworks. Knowledge of advanced topics like Reinforcement Learning, GANs, or Diffusion Models. If you are interested, please write to supriya.kataram@codersbrain.com. #MachineLearning #MLJobs #LLMOps #LangChain #LangGraph #DataScience #AIJobs #SeniorMLEngineer #PythonDevelopers #FastAPI #MLOps #DeepLearning #NLP #OpenAI #HuggingFace #BangaloreJobs #NowHiring #TechCareers #JoinOurTeam
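The concept/data drift monitoring duty mentioned above can be sketched with a simple population stability index (PSI) check. The 0.2 alert threshold and the synthetic feature values are common illustrative choices, not anything prescribed by the posting.

```python
# Minimal data-drift check using the population stability index (PSI).
# Bin edges come from the reference (training) data; the 0.2 alert threshold
# is a common rule of thumb, used here purely for illustration.
import numpy as np

def population_stability_index(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    cur_counts, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions; epsilon avoids division by zero / log(0).
    eps = 1e-6
    ref_frac = ref_counts / max(ref_counts.sum(), 1) + eps
    cur_frac = cur_counts / max(cur_counts.sum(), 1) + eps
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training_feature = rng.normal(loc=0.0, scale=1.0, size=10_000)
    production_feature = rng.normal(loc=0.4, scale=1.2, size=2_000)  # shifted distribution
    psi = population_stability_index(training_feature, production_feature)
    print(f"PSI = {psi:.3f} -> {'drift suspected' if psi > 0.2 else 'stable'}")
```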
Posted 1 month ago
0.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka
Remote
Location: Bangalore - Karnataka, India - EOIZ Industrial Area Job Family: Engineering Worker Type Reference: Regular - Permanent Pay Rate Type: Salary Career Level: T3(B) Job ID: R-44637-2025 Description & Requirements Job Description Introduction: A Career at HARMAN Digital Transformation Solutions (DTS) We’re a global, multi-disciplinary team that’s putting the innovative power of technology to work and transforming tomorrow. At HARMAN DTS, you solve challenges by creating innovative solutions. Combine the physical and digital, making technology a more dynamic force to solve challenges and serve humanity’s needs. Java Microservices: Java Developer with experience in microservices deployment, automation, and system lifecycle management (security and infrastructure management). Required Skills: Java, Hibernate, SAML/OpenSAML, REST APIs, Docker, PostgreSQL (PSQL). Familiar with GitHub workflows. Good to Have: Go (for automation and bootstrapping), Raft consensus algorithm, HashiCorp Vault. Key Responsibilities: Service Configuration & Automation: Configure and bootstrap services using the Go CLI. Develop and maintain Go workflow templates for automating Java-based microservices. Deployment & Upgrade Management: Manage service upgrade workflows and apply Docker-based patches. Implement and manage OS-level patches as part of the system lifecycle. Enable controlled deployments and rollbacks to minimize downtime. Network & Security Configuration: Configure and update FQDN, proxy settings, and SSL/TLS certificates. Set up and manage syslog servers for logging and monitoring. Manage appliance users, including root and SSH users, ensuring security compliance. Scalability & Performance Optimization: Implement scale-up and scale-down mechanisms for resource optimization. Ensure high availability and performance through efficient resource management. Lifecycle & Workflow Automation: Develop automated workflows to support service deployment, patching, and rollback. Ensure end-to-end lifecycle management of services and infrastructure. What You Will Do Perform in-depth analysis of data and machine learning models to identify insights and areas of improvement. Develop and implement models using both classical machine learning techniques and modern deep learning approaches. Deploy machine learning models into production, ensuring robust MLOps practices including CI/CD pipelines, model monitoring, and drift detection. Conduct fine-tuning and integrate Large Language Models (LLMs) to meet specific business or product requirements. Optimize models for performance and latency, including the implementation of caching strategies where appropriate. Collaborate cross-functionally with data scientists, engineers, and product teams to deliver end-to-end ML solutions. What You Need to Be Successful Experience using various statistical techniques to derive important insights and trends. Proven experience in machine learning model development and analysis using classical and neural network-based approaches. Strong understanding of LLM architecture, usage, and fine-tuning techniques. Solid understanding of statistics, data preprocessing, and feature engineering. Proficient in Python and popular ML libraries (scikit-learn, PyTorch, TensorFlow, etc.). Strong debugging and optimization skills for both training and inference pipelines. Familiarity with data formats and processing tools (Pandas, Spark, Dask). Experience working with transformer-based models (e.g., BERT, GPT) and the Hugging Face ecosystem.
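The "optimize models for performance and latency, including caching strategies" item above is the sort of thing sketched below; the ticket-classification function is a hypothetical stand-in for a real model or LLM call, not HARMAN's implementation.

```python
# Minimal latency-optimization sketch: memoize repeated inference requests so
# identical inputs skip the expensive model call. classify_ticket is a
# hypothetical placeholder for a real model.predict(...) or LLM API call.
import time
from functools import lru_cache

@lru_cache(maxsize=4096)
def classify_ticket(text: str) -> str:
    time.sleep(0.2)  # simulate inference latency
    return "billing" if "invoice" in text.lower() else "general"

if __name__ == "__main__":
    queries = ["Where is my invoice?", "Reset my password", "Where is my invoice?"]
    start = time.perf_counter()
    for q in queries:
        print(q, "->", classify_ticket(q))
    # The repeated query is served from cache, so total latency drops.
    print(f"elapsed: {time.perf_counter() - start:.2f}s  cache: {classify_ticket.cache_info()}")
```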
Bonus Points if You Have Experience with MLOps tools (e.g., MLflow, Kubeflow, SageMaker, or similar). Experience with monitoring tools (Prometheus, Grafana, or custom solutions for ML metrics). Familiarity with cloud platforms (AWS SageMaker, GCP, Azure) and containerization (Docker, Kubernetes). Hands-on experience with MLOps practices and tools for deployment, monitoring, and drift detection. Exposure to distributed training and model parallelism techniques. Prior experience in A/B testing ML models in production. What Makes You Eligible Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, or a related field. 5-10 years of relevant, proven experience in developing and deploying generative AI models and agents in a professional setting. What We Offer Flexible work environment, allowing for full-time remote work globally for positions that can be performed outside a HARMAN or customer location Access to employee discounts on world-class Harman and Samsung products (JBL, HARMAN Kardon, AKG, etc.) Extensive training opportunities through our own HARMAN University Competitive wellness benefits Tuition reimbursement “Be Brilliant” employee recognition and rewards program An inclusive and diverse work environment that fosters and encourages professional and personal development You Belong Here HARMAN is committed to making every employee feel welcomed, valued, and empowered. No matter what role you play, we encourage you to share your ideas, voice your distinct perspective, and bring your whole self with you – all within a support-minded culture that celebrates what makes each of us unique. We also recognize that learning is a lifelong pursuit and want you to flourish. We proudly offer added opportunities for training, development, and continuing education, further empowering you to live the career you want. About HARMAN: Where Innovation Unleashes Next-Level Technology Ever since the 1920s, we’ve been amplifying the sense of sound. Today, that legacy endures, with integrated technology platforms that make the world smarter, safer, and more connected. Across automotive, lifestyle, and digital transformation solutions, we create innovative technologies that turn ordinary moments into extraordinary experiences. Our renowned automotive and lifestyle solutions can be found everywhere, from the music we play in our cars and homes to venues that feature today’s most sought-after performers, while our digital transformation solutions serve humanity by addressing the world’s ever-evolving needs and demands. Marketing our award-winning portfolio under 16 iconic brands, such as JBL, Mark Levinson, and Revel, we set ourselves apart by exceeding the highest engineering and design standards for our customers, our partners and each other. If you’re ready to innovate and do work that makes a lasting impact, join our talent community today! HARMAN is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or Protected Veterans status. Important Notice: Recruitment Scams Please be aware that HARMAN recruiters will always communicate with you from an '@harman.com' email address. We will never ask for payments, banking, credit card, personal financial information or access to your LinkedIn/email account during the screening, interview, or recruitment process.
If you are asked for such information or receive communication from an email address not ending in '@harman.com' about a job with HARMAN, please cease communication immediately and report the incident to us through: harmancareers@harman.com. HARMAN is proud to be an Equal Opportunity / Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.
Posted 1 month ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the Role: We are looking for a forward-thinking LLMOps Engineer to join our team and help build the next generation of secure, scalable, and responsible Generative AI (GenAI) platforms. This role will focus on establishing governance, security, and operational best practices while enabling development teams to build high-performing GenAI applications. You will also work closely with GenAI agents and integrate LLMs from multiple providers to support diverse use cases. Key Responsibilities: Design and implement governance frameworks for GenAI platforms, ensuring compliance with internal policies and external regulations (e.g., GDPR, AI Act). Define and enforce responsible AI practices including fairness, transparency, explainability, and auditability. Implement robust security protocols including IAM, data encryption, secure API access, and model sandboxing. Collaborate with security teams to conduct risk assessments and ensure secure deployment of LLMs. Build and maintain scalable LLMOps pipelines for model training, fine-tuning, evaluation, deployment, and monitoring. Automate model lifecycle management with CI/CD, versioning, rollback, and observability. Develop and manage GenAI agents capable of reasoning, planning, and tool use. Integrate and orchestrate LLMs from multiple providers (e.g., OpenAI, Anthropic, Cohere, Google, Azure OpenAI) to support hybrid and fallback strategies. Optimize prompt engineering, context management, and agent memory for production use. Ensure high availability, low latency, and cost-efficiency of GenAI workloads across cloud and hybrid environments. Implement monitoring and alerting for model drift, hallucinations, and performance degradation. Partner with GenAI developers to embed best practices and reusable components (SDKs, templates, APIs). Provide technical guidance and documentation to accelerate development and ensure platform consistency. Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. 4+ years of experience in MLOps, DevOps, or platform engineering, with 1–2 years in LLM/GenAI environments. Deep understanding of LLMs, GenAI agents, prompt engineering, and inference optimization. Experience with LangChain, LlamaIndex, LangGraph, or similar agent frameworks. Hands-on experience with MLflow or equivalent tools. Proficient in Python, containerization (Docker), and cloud platforms (AWS/GCP/Azure). Familiarity with AI governance frameworks and responsible AI principles. Experience with vector databases (e.g., FAISS, Pinecone), RAG pipelines, and model evaluation frameworks. Knowledge of Responsible AI, red-teaming, and OWASP security principles.
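A minimal sketch of the multi-provider integration with fallback that this role describes: try providers in priority order and fall back on failure. The provider callables below are hypothetical stand-ins; a real implementation would wrap vendor SDKs (OpenAI, Anthropic, etc.) and catch their specific error types.

```python
# Provider-fallback sketch for multi-LLM orchestration. The "providers" here are
# plain callables used as placeholders for real vendor SDK clients.
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Provider:
    name: str
    complete: Callable[[str], str]  # prompt -> completion

class AllProvidersFailed(RuntimeError):
    pass

def complete_with_fallback(prompt: str, providers: Sequence[Provider]) -> tuple[str, str]:
    """Return (provider_name, completion), trying providers in priority order."""
    errors = []
    for provider in providers:
        try:
            return provider.name, provider.complete(prompt)
        except Exception as exc:  # in production, catch provider-specific errors
            errors.append(f"{provider.name}: {exc}")
    raise AllProvidersFailed("; ".join(errors))

if __name__ == "__main__":
    def flaky_primary(prompt: str) -> str:
        raise TimeoutError("simulated rate limit")

    def steady_backup(prompt: str) -> str:
        return f"[backup model] answer to: {prompt}"

    name, text = complete_with_fallback(
        "Summarize our data-retention policy.",
        [Provider("primary-llm", flaky_primary), Provider("backup-llm", steady_backup)],
    )
    print(name, "->", text)
```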
Posted 1 month ago
2.5 - 5.0 years
5 - 11 Lacs
India
On-site
We are looking for an experienced AI Engineer to join our team. The ideal candidate will have a strong background in designing, deploying, and maintaining advanced AI/ML models with expertise in Natural Language Processing (NLP), Computer Vision, and architectures like Transformers and Diffusion Models. You will play a key role in developing AI-powered solutions, optimizing performance, and deploying and managing models in production environments. Key Responsibilities AI Model Development and Optimization: Design, train, and fine-tune AI models for NLP, Computer Vision, and other domains using frameworks like TensorFlow and PyTorch. Work on advanced architectures, including Transformer-based models (e.g., BERT, GPT, T5) for NLP tasks and CNN-based models (e.g., YOLO, VGG, ResNet) for Computer Vision applications. Utilize techniques like PEFT (Parameter-Efficient Fine-Tuning) and SFT (Supervised Fine-Tuning) to optimize models for specific tasks. Build and train RLHF (Reinforcement Learning with Human Feedback) and RL-based models to align AI behavior with real-world objectives. Explore multimodal AI solutions combining text, vision, and audio using generative deep learning architectures. Natural Language Processing (NLP): Develop and deploy NLP solutions, including language models, text generation, sentiment analysis, and text-to-speech systems. Leverage advanced Transformer architectures (e.g., BERT, GPT, T5) for NLP tasks. AI Model Deployment and Frameworks: Deploy AI models using frameworks like vLLM, Docker, and MLflow in production-grade environments. Create robust data pipelines for training, testing, and inference workflows. Implement CI/CD pipelines for seamless integration and deployment of AI solutions. Production Environment Management: Deploy, monitor, and manage AI models in production, ensuring performance, reliability, and scalability. Set up monitoring systems using Prometheus to track metrics like latency, throughput, and model drift. Data Engineering and Pipelines: Design and implement efficient data pipelines for preprocessing, cleaning, and transformation of large datasets. Integrate with cloud-based data storage and retrieval systems for seamless AI workflows. Performance Monitoring and Optimization: Optimize AI model performance through hyperparameter tuning and algorithmic improvements. Monitor performance using tools like Prometheus, tracking key metrics (e.g., latency, accuracy, model drift, error rates, etc.). Solution Design and Architecture: Collaborate with cross-functional teams to understand business requirements and translate them into scalable, efficient AI/ML solutions. Design end-to-end AI systems, including data pipelines, model training workflows, and deployment architectures, ensuring alignment with business objectives and technical constraints. Conduct feasibility studies and proof-of-concepts (PoCs) for emerging technologies to evaluate their applicability to specific use cases. Stakeholder Engagement: Act as the technical point of contact for AI/ML projects, managing expectations and aligning deliverables with timelines. Participate in workshops, demos, and client discussions to showcase AI capabilities and align solutions with client needs. Experience: 2.5-5 years Salary: 5-11 LPA Job Types: Full-time, Permanent Pay: ₹500,000.00 - ₹1,100,000.00 per year Schedule: Day shift Work Location: In person
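The Prometheus monitoring described above (latency, throughput, and per-request counters for a serving process) might look something like the minimal sketch below; the metric names and the fake predict function are illustrative placeholders, not part of the posting.

```python
# Minimal Prometheus instrumentation sketch for a model-serving process:
# expose request counts and a latency histogram that a Prometheus server can scrape.
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

PREDICTIONS = Counter("model_predictions_total", "Total prediction requests", ["outcome"])
LATENCY = Histogram("model_prediction_latency_seconds", "Prediction latency in seconds")

@LATENCY.time()  # records each call's duration into the histogram
def predict(features: list[float]) -> int:
    time.sleep(random.uniform(0.01, 0.05))  # simulated inference work
    return int(sum(features) > 0)

if __name__ == "__main__":
    start_http_server(8000)  # metrics exposed at http://localhost:8000/metrics
    while True:
        label = predict([random.gauss(0, 1) for _ in range(4)])
        PREDICTIONS.labels(outcome=str(label)).inc()
        time.sleep(0.5)
```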
Posted 1 month ago
1.0 - 7.0 years
0 Lacs
India
On-site
Job description: Prospera Soft (www.prosperasoft.com) is looking for an experienced business developer / lead generation specialist. Apply only if you have experience at a custom software development company serving overseas markets. Prospera Soft is a software development company based in Kharadi, Pune. It works on cutting-edge technologies and is looking to expand its lead generation initiative. Prospera Soft focuses primarily on overseas projects and is therefore looking for a lead generation specialist to bring new overseas clients on board. Inside Sales, Executive, BDM, IT Sales. Good to have experience with tools like Drift, HubSpot, DiscoverOrg, ZoomInfo, Slintel, and LinkedIn Sales Navigator. Roles & Responsibilities: 1. Generate leads via various channels like Upwork, Guru, LinkedIn, etc. 2. Pitch Prospera Soft's wide portfolio of work and discover leads. 3. Maintain lead data. 4. Bid for work on various portals. 5. Connect with people on LinkedIn. 6. Generate reports for visibility to stakeholders. Skills Needed: 1. 1 to 7 years of experience in IT services sales. 2. Fluent English speaking skills. 3. Proven sales track record. 4. Understanding of various technology stacks. 5. Proven working experience as a business development manager, sales executive, or in a relevant role. Job Type: Full-time Pay: Up to ₹1,500,000.00 per month Benefits: Paid time off Schedule: Day shift Monday to Friday Supplemental Pay: Performance bonus Application Question(s): Are you an immediate joiner? Do you have experience in Business Development - IT Services Sales? Language: Hindi (Preferred) Work Location: In person
Posted 1 month ago
5.0 years
5 - 9 Lacs
Bengaluru
On-site
Company Description Version 1 are a true global leader in business transformation. For nearly three decades, we have been strategically partnering with customers to go beyond expectations through the power of cutting-edge technology and expert teams. Our deep expertise in cloud, data and AI, application modernisation, and service delivery management has redefined businesses globally, helping shape the future for large public sector organisations and major global, private brands. We put users and user-centric design at the heart of everything we do, enabling our customers to exceed expectations for their customers. Our approach is underpinned by the Version 1 Strength in Balance model – a balanced focus across our customers, our people and a strong organisation. This model is guided by core values that are embedded in every aspect of what we do. Our customers’ need for transformation is our driving force. We enable them to accelerate their journey to their digital future with our deep expertise and innovative approach. Our global technology partners – Oracle, Microsoft, AWS, Red Hat, and Snowflake – help us tackle any challenge by leveraging a technology-driven approach. Our people unlock our potential. They immerse themselves into the world of our customers to truly understand the unique challenges they face. Our teams, made up of highly skilled, passionate individuals, act with agility and integrity. We continually invest in their development and foster a culture that encourages collaboration and innovation. This is a reflection of our Strength in Balance model, which emphasises a balanced focus on our customers, our people, and a strong organisation. Through our comprehensive range of Managed Service offerings, we take ownership of the tasks that distract customers from what really matters: driving their business objectives and strategic initiatives. We enable them to save time and reduce costs and risk by continually improving their technology estates, ensuring these drive value for their business. Go beyond simply ‘keeping the lights on’ and embrace the potential of our ASPIRE Managed Services that place AI, continuous improvement and business innovation at the heart of everything we do. From operational maintenance through to optimisation, we are trusted managed service experts with a sustainable, value-led approach and a wealth of industry sector expertise and experience. Job Description Onsite role, India Delivery Centre / Belfast / Dublin Full-time position, 3-5 days per week in office (not shift) Department: ASPIRE Managed Services Practice: Services Reliability Group Vetting Requirements: N/A Role Summary: Our ASPIRE Global Service Centre is the central hub of our Service Management operations. Beyond a traditional Service Desk, it stands as the central authority and shared service delivery hub, orchestrating all operational workflows, processes, procedures, and tooling. It’s a core delivery component of the Version 1 ASPIRE Managed Services offering that places AI, continuous improvement and business innovation at the heart of everything Version 1 does. With a focus on supporting self-service and automation, we utilise the best digital capabilities of the ServiceNow ITSM tooling product to provide the very best Experience to our Customers.
We are seeking an experienced and results-driven AI and Automation Lead who will be responsible for driving the strategic implementation and operational excellence of automation and artificial intelligence initiatives for ASPIRE Managed Services. This role leads the identification, design, and deployment of intelligent automation solutions to improve operational efficiency and productivity, enhance decision making, scale operations and deliver a competitive advantage in the market. Key Responsibilities: Develop and execute the ASPIRE Managed Services automation and AI strategy aligned with SRG and EA Practice goals Identify opportunities for AI and automation across all Managed Service functions, tooling and processes Champion a culture of innovation and continuous improvement through emerging technologies Lead end-to-end delivery of automation and AI projects, including planning, development, testing, deployment, and monitoring Establish governance frameworks and best practices for AI and automation initiatives Oversee the design and implementation of AI models, RPA (Robotic Process Automation), and intelligent workflows Ensure solutions are scalable, secure, and compliant with data privacy and ethical standards Evaluate and select appropriate tools, platforms, and vendors Collaborate with business units to understand pain points and co-create solutions Communicate complex technical concepts to non-technical stakeholders Monitor performance and continuously optimise solutions. Delivery of measurable business value through automation and AI Development of internal capabilities and knowledge sharing across teams Qualifications Skills, Education & Qualifications: Proven experience (5 years +) leading automation and AI projects in a complex, multi-client or enterprise-scale managed services environment, with demonstrable delivery of measurable business outcomes Strong technical expertise in Artificial Intelligence and Machine Learning, including: Supervised/unsupervised learning, deep learning, and natural language processing (NLP) Model development using frameworks such as TensorFlow, PyTorch, or scikit-learn Experience deploying AI models in production environments using MLOps principles (e.g., MLflow, Azure ML, SageMaker). 
Hands-on experience with automation and orchestration technologies, such as: Robotic Process Automation (RPA) platforms: UiPath, Blue Prism, Automation Anywhere IT process automation (ITPA) tools: ServiceNow Workflow/Orchestration, Microsoft Power Automate, Ansible, Terraform Integration using APIs and event-driven architectures (e.g., Kafka, Azure Event Grid) Proficiency in cloud-native AI and automation services in one or more public cloud platforms: Azure (Cognitive Services, Synapse, Logic Apps, Azure OpenAI) AWS (SageMaker, Lambda, Textract, Step Functions) GCP (Vertex AI, AutoML, Cloud Functions) Strong project delivery experience using modern methodologies: Agile/Scrum and DevOps for iterative development and deployment CI/CD pipeline integration for automation and ML model lifecycle management Use of tools like Git, Jenkins, and Azure DevOps In-depth knowledge of data architecture, governance, and AI ethics, including: Data privacy and security principles (e.g., GDPR, ISO 27001) Responsible AI practices: bias detection, explainability (e.g., SHAP, LIME), model drift monitoring Excellent stakeholder engagement and communication skills, with the ability to: Translate complex AI and automation concepts into business value Influence cross-functional teams and executive leadership Promote a culture of innovation, experimentation, and continuous learning Excellent leadership and team management skills Strong communication, interpersonal, and problem-solving abilities Strategic thinking and decision-making Adaptability to evolving technologies and processes Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent experience Additional Information At Version 1, we believe in providing our employees with a comprehensive benefits package that prioritises their well-being, professional growth, and financial stability. One of our standout advantages is the ability to work with a hybrid schedule along with business travel, allowing our employees to strike a balance between work and life. We prioritise the health and safety of our employees, providing private medical and life insurance coverage, as well as free eye tests and contributions towards glasses. Our team members can also stay ahead of the curve with incentivized certifications and accreditations, including AWS, Microsoft, Oracle, and Red Hat. Our employee-designed Profit Share scheme divides a portion of our company's profits each quarter amongst employees. We are dedicated to helping our employees reach their full potential, offering Pathways Career Development Quarterly, a programme designed to support professional growth.
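As one small illustration of the explainability practices referenced above (SHAP/LIME), here is a minimal SHAP sketch over a toy tree model; the synthetic dataset and feature indices are placeholders, not anything specific to this role or organisation.

```python
# Minimal explainability sketch with SHAP on a toy gradient-boosting classifier.
# The synthetic dataset is an illustrative placeholder.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=400, n_features=5, n_informative=3, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles; each value
# is a feature's contribution to pushing one prediction away from the base rate.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:50])

# Rank features by mean absolute contribution as a simple global importance view.
mean_abs = np.abs(shap_values).mean(axis=0)
for idx in np.argsort(mean_abs)[::-1]:
    print(f"feature_{idx}: mean |SHAP| = {mean_abs[idx]:.4f}")
```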
Posted 1 month ago
5.0 years
4 - 6 Lacs
Bengaluru
On-site
Job Title: Senior AI Engineer Location: Bengaluru, India - (Hybrid) At Reltio®, we believe data should fuel business success. Reltio's AI-powered data unification and management capabilities—encompassing entity resolution, multi-domain master data management (MDM), and data products—transform siloed data from disparate sources into unified, trusted, and interoperable data. Reltio Data Cloud™ delivers interoperable data where and when it's needed, empowering data and analytics leaders with unparalleled business responsiveness. Leading enterprise brands—across multiple industries around the globe—rely on our award-winning data unification and cloud-native MDM capabilities to improve efficiency, manage risk and drive growth. At Reltio, our values guide everything we do. With an unyielding commitment to prioritizing our "Customer First", we strive to ensure their success. We embrace our differences and are "Better Together" as One Reltio. We are always looking to "Simplify and Share" our knowledge when we collaborate to remove obstacles for each other. We hold ourselves accountable for our actions and outcomes and strive for excellence. We "Own It". Every day, we innovate and evolve, so that today is "Always Better Than Yesterday". If you share and embody these values, we invite you to join our team at Reltio and contribute to our mission of excellence. Reltio has earned numerous awards and top rankings for our technology, our culture and our people. Reltio was founded on a distributed workforce and offers flexible work arrangements to help our people manage their personal and professional lives. If you're ready to work on unrivaled technology where your desire to be part of a collaborative team is met with a laser-focused mission to enable digital transformation with connected data, let's talk! Job Summary: As a Senior AI Engineer at Reltio, you will be a core part of the team responsible for building intelligent systems that enhance data quality, automate decision-making, and drive entity resolution at scale. You will work with cross-functional teams to design and deploy advanced AI/ML solutions that are production-ready, scalable, and embedded into our flagship data platform. This is a high-impact engineering role with exposure to cutting-edge problems in entity resolution, deduplication, identity stitching, record linking, and metadata enrichment. Job Duties and Responsibilities: Design, implement, and optimize state-of-the-art AI/ML models for solving real-world data management challenges such as entity resolution, classification, similarity matching, and anomaly detection. Work with structured, semi-structured, and unstructured data to extract signals and engineer intelligent features for large-scale ML pipelines. Develop scalable ML workflows using Spark, MLlib, PyTorch, TensorFlow, or MLflow, with seamless integration into production systems. Translate business needs into technical design and collaborate with data scientists, product managers, and platform engineers to operationalize models. Continuously monitor and improve model performance using feedback loops, A/B testing, drift detection, and retraining strategies. Conduct deep dives into customer data challenges and apply innovative machine learning algorithms to address accuracy, speed, and bias. Actively contribute to research and experimentation efforts, staying updated with the latest AI trends in graph learning, NLP, probabilistic modeling, etc.
Document designs and present outcomes to both technical and non-technical stakeholders, fostering transparency and knowledge sharing. Skills You Must Have: Bachelor's or Master's degree in Computer Science, Machine Learning, Artificial Intelligence, or a related field. PhD is a plus. 5+ years of hands-on experience in developing and deploying machine learning models in production environments. Proficiency in Python (NumPy, scikit-learn, pandas, PyTorch/TensorFlow) and experience with large-scale data processing tools (Spark, Kafka, Airflow). Strong understanding of ML fundamentals, including classification, clustering, feature selection, hyperparameter tuning, and evaluation metrics. Demonstrated experience working with entity resolution, identity graphs, or data deduplication. Familiarity with containerized environments (Docker, Kubernetes) and cloud platforms (AWS, GCP, Azure). Strong debugging, analytical, and communication skills with a focus on delivery and impact. Attention to detail, ability to work independently, and a passion for staying updated with the latest advancements in the field of data science. Skills Good to Have: Experience with knowledge graphs, graph-based ML, or embedding techniques. Exposure to deep learning applications in data quality, record matching, or information retrieval. Experience building explainable AI solutions in regulated domains. Prior work in SaaS, B2B enterprise platforms, or data infrastructure companies. Why Join Reltio? Health & Wellness: Comprehensive group medical insurance, including your parents, with additional top-up options. Accidental insurance. Life insurance. Free online unlimited doctor consultations. An Employee Assistance Program (EAP). Work-Life Balance: 36 annual leaves, which include Sick Leaves - 18 and Earned Leaves - 18. 26 weeks of maternity leave, 15 days of paternity leave. Unique to Reltio - one week of additional time off as a recharge week every year, globally. Support for home office setup: Home office setup allowance. Stay Connected, Work Flexibly: Mobile & Internet Reimbursement. No need to pack a lunch—we've got you covered with a free meal. And many more… Reltio is proud to be an equal opportunity workplace. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. Reltio is committed to working with and providing reasonable accommodation to applicants with physical and mental disabilities.
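The entity-resolution and similarity-matching work described in this role can be illustrated with a very small sketch: scoring candidate duplicate record pairs with character n-gram TF-IDF cosine similarity. The toy records and the 0.6 threshold are placeholders, and this is a generic technique, not Reltio's matching algorithm.

```python
# Toy record-matching sketch: character n-gram TF-IDF plus cosine similarity
# tolerates abbreviations and small spelling differences between records.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

records = [
    "Acme Corporation, 12 Main St, Springfield",
    "ACME Corp., 12 Main Street, Springfield",
    "Globex Industries, 99 Ocean Ave, Shelbyville",
]

vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
matrix = vectorizer.fit_transform(records)
scores = cosine_similarity(matrix)

THRESHOLD = 0.6  # illustrative cutoff for flagging candidate duplicates
for i in range(len(records)):
    for j in range(i + 1, len(records)):
        if scores[i, j] >= THRESHOLD:
            print(f"possible duplicate ({scores[i, j]:.2f}): {records[i]!r} ~ {records[j]!r}")
```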
Posted 1 month ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Responsibilities: Evaluate and source appropriate cloud infrastructure solutions for machine learning needs, ensuring cost-effectiveness and scalability based on project requirements. Automate and manage the deployment of machine learning models into production environments, ensuring version control for models and datasets using tools like Docker and Kubernetes. Set up monitoring tools to track model performance and data drift, conduct regular maintenance, and implement updates for production models. Work closely with data scientists, software engineers, and stakeholders to align on project goals, facilitate knowledge sharing, and communicate findings and updates to cross-functional teams. Design, implement, and maintain scalable ML infrastructure, optimizing cloud and on-premise resources for training and inference. Document ML processes, pipelines, and best practices while preparing reports on model performance, resource utilization, and system issues. Provide training and support for team members on ML Ops tools and methodologies, and stay updated on industry trends and emerging technologies. Diagnose and resolve issues related to model performance, infrastructure, and data quality, implementing solutions to enhance model robustness and reliability. Education, Technical Skills & Other Critical Requirements: 6+ years of relevant experience in AI/analytics product and solution delivery. Bachelor’s/Master’s degree in Information Technology, Computer Science, Engineering, or an equivalent field. Proficiency in frameworks such as TensorFlow, PyTorch, or Scikit-learn. Strong skills in Python and/or R; familiarity with Java, Scala, or Go is a plus. Experience with cloud services such as AWS, Azure, or Google Cloud Platform, particularly in ML services (e.g., AWS SageMaker, Azure ML). CI/CD tools (e.g., Jenkins, GitLab CI), containerization (e.g., Docker), and orchestration (e.g., Kubernetes). Experience with databases (SQL and NoSQL), data pipelines, ETL processes, and ML pipeline orchestration (Airflow). Familiarity with monitoring and logging tools such as Prometheus, Grafana, or the ELK stack. Proficient in using Git for version control. Strong analytical and troubleshooting abilities to diagnose and resolve issues effectively. Good communication skills for working with cross-functional teams and conveying technical concepts to non-technical stakeholders. Ability to manage multiple projects and prioritize tasks in a fast-paced environment.
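The ML pipeline orchestration experience mentioned above (Airflow) might look like the minimal DAG sketched below; the DAG id, schedule, and task bodies are illustrative placeholders for a real retrain-and-validate flow.

```python
# Minimal Airflow DAG sketch for a weekly retrain-and-validate workflow.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_features():
    print("pulling and transforming training data")

def train_model():
    print("training a candidate model")

def validate_model():
    print("comparing candidate metrics against the current production model")

with DAG(
    dag_id="weekly_model_retrain",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@weekly",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_features", python_callable=extract_features)
    train = PythonOperator(task_id="train_model", python_callable=train_model)
    validate = PythonOperator(task_id="validate_model", python_callable=validate_model)

    # Linear dependency: features -> training -> validation/promotion gate.
    extract >> train >> validate
```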
Posted 1 month ago
15.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Rapid7's AI Center of Excellence is seeking a Principal AI Engineer - MLOps to join our expanding team. We leverage AI/ML to enhance customer security and threat detection, working with technologies like AWS, EKS, Terraform, Python (numpy, pandas), Jupyter, and Sci-kit learn. This role is ideal for an experienced AI Engineer with a strong background in applied AI R&D, software engineering, and MLOps/DevOps. You'll be responsible for taking AI models from R&D to production, ensuring repeatable deployment, monitoring, and observability. Key Responsibilities: Architect and manage end-to-end ML production systems (scoping, data, modeling, deployment). Develop and maintain data pipelines, managing data lifecycle and quality. Implement ML guardrails and manage service monitoring. Develop and deploy accessible endpoints (web apps, REST APIs) with a focus on data privacy and security. Share expertise, mentor junior engineers, and foster collaboration. Embrace agile development practices for continuous iteration and problem-solving. Required Skills & Experience: 15+ years as a Software Engineer, with 3-5 years focused on ML deployment (especially AWS). Software Engineering: Strong Python, developing APIs with Flask or FastAPI. DevOps & MLOps: Designing and integrating scalable AI/ML systems into production, CI/CD, Docker, Kubernetes, cloud AI resource management. Pipelines, Monitoring, Observability: Data pre-processing, feature engineering, model monitoring, and evaluation. Growth mindset, strong communication skills, and proven ability to collaborate across teams. Track record of mentoring junior engineers. Advantageous Experience: Understanding of AI and ML operational frameworks and limitations. Deploying resources for LLM fine-tuning and experimentation. Implementing model risk management strategies (registries, drift monitoring, hyperparameter tuning). Rapid7 is committed to building a secure digital world and fosters a collaborative, innovative, and inclusive environment. We encourage diverse candidates to apply.
Posted 1 month ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We're looking for a passionate Machine Learning Engineer with a strong foundation in computer vision, model training and deployment, observability pipelines, and DevOps for AI systems. You’ll play a pivotal role in building scalable, accurate, and production-ready ML systems in a collaborative and distributed environment. Responsibilities Train, fine-tune, and evaluate object detection and OCR models (YOLO, GroundingDINO, PP-OCR, TensorFlow). Build and manage observability pipelines for evaluating model performance in production (accuracy tracking, drift analysis). Develop Python-based microservices and asynchronous APIs for ML model serving and orchestration. Package and deploy ML services using Docker and Kubernetes across distributed environments. Implement and manage distributed computing workflows with NATS messaging. Collaborate with DevOps to configure networking (VLANs, ingress rules, reverse proxies) and firewall access for scalable deployment. Use Bash scripting and Linux CLI tools (e.g., sed, awk) for automation and log parsing. Design modular, testable Python code using OOP and software packaging principles. Work with PostgreSQL, MongoDB, and TinyDB for structured and semi-structured data ingestion and persistence. Manage system processes using Python concurrency primitives (threading, multiprocessing, semaphores, etc.). Required Skills Languages: Python (OOP, async IO, modularity), Bash Computer Vision: OpenCV, Label Studio, YOLO, GroundingDINO, PP-OCR ML & MLOps: Training pipelines, evaluation metrics, observability tooling DevOps & Infra: Docker, Kubernetes, NATS, ingress & firewall configs Data: PostgreSQL, MongoDB, TinyDB Networking: Subnetting, VLANs, service access, reverse proxy setup Tools: sed, awk, firewalls, reverse proxies, Linux process control
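The Python concurrency primitives listed above (threading, semaphores) are the kind of thing sketched below: a semaphore caps how many frames are processed at once. The process_frame body and the limits are hypothetical placeholders for real OCR or detection work.

```python
# Minimal bounded-concurrency sketch: a semaphore limits in-flight frame
# processing so a burst of work cannot overwhelm the inference backend.
import threading
import time
from concurrent.futures import ThreadPoolExecutor

MAX_IN_FLIGHT = 4
gate = threading.Semaphore(MAX_IN_FLIGHT)

def process_frame(frame_id: int) -> str:
    with gate:  # blocks when MAX_IN_FLIGHT frames are already being processed
        time.sleep(0.1)  # stand-in for model inference on the frame
        return f"frame {frame_id}: processed"

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=16) as pool:
        for result in pool.map(process_frame, range(12)):
            print(result)
```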
Posted 1 month ago
18.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Project Role: AI / ML Engineer Project Role Description: Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Must be able to apply GenAI models as part of the solution. Work could also include, but is not limited to, deep learning, neural networks, chatbots, and image processing. Must have skills: Computer Vision Good to have skills: NA A minimum of 18 years of experience is required. Educational Qualification: 15 years of full-time education Summary: Accenture Center for Advanced AI (CAAI) focuses on building scalable enterprise AI and Gen AI-based solutions for various industries such as Financial Services, Utilities, Energy, Manufacturing, Retail, Communications, Health & Public Services, and Products. The Applied AI Research and Innovation Group, within CAAI, is focusing on foundational and applied research in the various fields of AI – Gen AI, model building/training/finetuning, Agentic AI, etc. – to develop offerings that can be utilized for complex and challenging client delivery projects. AI Research and Innovation Lead – Agentic AI (Experienced PhDs from Premium Institutes) - We are seeking a visionary AI Research and Innovation Lead to spearhead our efforts in Agentic AI: developing autonomous, goal-driven AI systems capable of reasoning, planning, and interacting in complex environments. This role is ideal for a thought leader passionate about pushing the boundaries of intelligent agents. Roles & Responsibilities: -Lead R&D in agentic architectures and autonomous agents, including planning, memory, tool use, self-reflection, human-agent collaboration, and model/agent drift detection. -Design and prototype autonomous agents for real-world applications (e.g., enterprise automation, digital assistants, healthcare autonomous agents). -Collaborate with solutioning, delivery, and engineering teams to integrate agentic capabilities into scalable systems across various industries; with the Responsible AI team to ensure its policies are baked into solutions; with the consulting team to provide technical depth in AI; and with academia and industry partners to push research boundaries. -Publish and present novel and innovative research findings in top AI venues. -Develop a broader AI talent pool and foster a culture of continuous learning across the practice and the organization. Mentor a team of researchers and engineers. -Support client engagements by providing consulting and solutioning for complex client problems, bringing in niche AI skills and utilizing advanced internal and external frameworks and accelerators. Professional & Technical Skills: -A Ph.D. in Computer Science, Electrical Engineering, Electronics, Electronics and Communication, Data Science/AI, Mathematics, or Statistics from premium institutes – IITs, IISc, IIIT and ISI -Overall experience of 10-15 years, with 5+ years of relevant post-PhD experience. -Deep expertise in reinforcement learning, autonomous agents, and multi-agent systems. -Strong understanding of LLMs and their integration into agentic workflows. -Experience with frameworks like LangChain, AutoGPT, or similar. -Proven track record of innovation and publications. Additional Information: - The candidate should have a minimum of 18 years of experience in Computer Vision. - This position is based at our Bengaluru office. - 15 years of full-time education is required.
Posted 1 month ago