
275 Drift Jobs - Page 7

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

0 years

0 Lacs

Amta-I, West Bengal, India

On-site


Are you looking for a practice-oriented education that lets you develop your sales and operations skills within retail? At Silvan, as a sales trainee (salgselev) you receive a solid foundational education in sales and service and become part of a dynamic team where you can take your career to new heights!

We offer:
- The official sales trainee programme with 4 x 2-week school stays at Business College Syd spread over 2 years, with social activities during the stays.
- A workday focused on developing your skills in sales, operations, and customer service.
- 20% on top of the minimum wage for sales trainees under the retail collective agreement (Butiksoverenskomsten).
- 2 x 1-week learning rotations in other Silvan stores to gain insight into the different concepts.
- A final presentation of your trade exam (fagprøve) to management at the head office, followed by a celebration.

What we expect from you:
- You have passed an EUD (Retail), EUX, HHX, or another upper-secondary exam*.
- You can start on 1 August 2025.
- You have high ambitions within the retail industry.
- You are a team player who contributes to the community.
- You are passionate about creating something magical for our customers.

*If you have not taken commercial subjects, the programme requires that you complete 5 weeks of EUS before the first school stay.

About the sales trainee programme:
On 1 August 2025 you start in a store as close to your home as possible, unless you prefer otherwise. You will spend the next 2 years in that store, where you will be thoroughly trained in selling all our products as well as the daily operations of the store. During the two years you attend four two-week school stays at Business College Syd in Mommark. The teaching is designed to give you a deep understanding of how retail works and to prepare you for the challenges you will meet in your everyday work in the store. Within the first month, all our trainees are invited to a joint onboarding day at the head office, so you have met each other before the first school stay. After the trade exam, all our trainees are invited back to the head office, where the exam results are presented to management, followed by a celebration. Read more about the programme here!

Your future:
Our goal is to train skilled sales talents who are passionate about a career at Silvan. If you show commitment and ambition, there are no limits to what you can achieve with us after completing the programme. We always strive to retain our newly qualified trainees and continue their development.

Interested? We screen applications on an ongoing basis, so don't hesitate to send your CV and a short application! Feel free to mention in your application if there are several stores you are interested in. Applications not submitted via the link will not be processed, due to GDPR.

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Hyderābād

On-site

Hyderabad, Telangana, India
Category: Engineering | Hire Type: Employee | Job ID: 9309 | Date posted: 02/24/2025

We Are:
At Synopsys, we drive the innovations that shape the way we live and connect. Our technology is central to the Era of Pervasive Intelligence, from self-driving cars to learning machines. We lead in chip design, verification, and IP integration, empowering the creation of high-performance silicon chips and software content. Join us to transform the future through continuous technological innovation.

You Are:
As a Senior Staff AI Engineer focusing on AI Optimization & MLOps, you are a trailblazer in the AI landscape. You possess deep expertise in AI model development and optimization, with a keen interest in reinforcement learning and MLOps. Your ability to design, fine-tune, and deploy scalable, efficient, and continuously improving AI models sets you apart. You thrive in dynamic environments, staying at the forefront of AI technologies and methodologies, ensuring that AI solutions are not only cutting-edge but also production-ready. Your collaborative spirit and excellent communication skills enable you to work seamlessly with cross-functional teams, enhancing AI-powered IT automation solutions. With a strong background in AI frameworks and cloud-based AI services, you are committed to driving innovation and excellence in AI deployments.

What You’ll Be Doing:
- Design, fine-tune, and optimize LLMs, retrieval-augmented generation (RAG), and reinforcement learning models for IT automation.
- Improve model accuracy, latency, and efficiency, ensuring optimal performance for IT service workflows.
- Experiment with cutting-edge AI techniques, including multi-agent architectures, prompt tuning, and continual learning.
- Implement MLOps best practices, ensuring scalable, automated, and reliable model deployment.
- Develop AI monitoring, logging, and observability pipelines to track model performance in production.
- Optimize GPU/TPU utilization and cloud-based AI model serving for efficiency and cost-effectiveness.
- Develop tools to measure model drift, inference latency, and operational efficiency.
- Implement automated retraining pipelines to ensure AI models remain effective over time.
- Work closely with cloud teams to optimize AI model execution across hybrid cloud environments.
- Stay ahead of emerging AI technologies, evaluating new frameworks, techniques, and research for real-world application.
- Collaborate to refine AI system architectures and capabilities, while also ensuring models are effectively embedded into IT automation workflows.

The Impact You Will Have:
- Enhance the efficiency and reliability of AI-powered IT automation solutions.
- Drive continuous improvement and innovation in AI model development and deployment.
- Ensure scalable and cost-effective AI model serving in cloud and hybrid environments.
- Improve real-time AI processing with minimal downtime and high performance.
- Optimize AI systems for performance, security, and cost in IT automation applications.
- Contribute to the advancement of Synopsys' AI capabilities and technologies.

What You’ll Need:
- 8+ years of experience in AI/ML engineering, with a focus on model optimization and deployment.
- Strong expertise in AI frameworks (LangGraph, OpenAI, Hugging Face, TensorFlow/PyTorch).
- Experience implementing MLOps pipelines, CI/CD for AI models, and cloud-based AI deployment.
- Deep understanding of AI performance tuning, inference optimization, and cost-efficient deployment.
- Strong programming skills in Python, AI model APIs, and cloud-based AI services.
- Familiarity with IT automation and self-healing systems is a plus.

Who You Are:
- Innovative and forward-thinking, constantly seeking to improve and optimize AI models.
- Collaborative and communicative, working effectively with cross-functional teams.
- Detail-oriented and meticulous, ensuring high standards in AI model performance and deployment.
- Adaptable and resilient, thriving in dynamic and fast-paced environments.
- Passionate about AI and its applications in IT automation and beyond.

The Team You’ll Be A Part Of:
You will join a dynamic team of AI engineers and IT professionals dedicated to advancing AI-powered IT automation. Our team focuses on optimizing model deployment, scaling AI workloads using Kubernetes, and enhancing AI observability and security. Together, we aim to make IT automation faster, more reliable, and cost-efficient, driving continuous technological innovation and excellence.

Rewards and Benefits:
We offer a comprehensive range of health, wellness, and financial benefits to cater to your needs. Our total rewards include both monetary and non-monetary offerings. Your recruiter will provide more details about the salary range and benefits during the hiring process. At Synopsys, we want talented people of every background to feel valued and supported to do their best work. Synopsys considers all applicants for employment without regard to race, color, religion, national origin, gender, sexual orientation, age, military veteran status, or disability.
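The "develop tools to measure model drift, inference latency, and operational efficiency" responsibility above can be illustrated with a minimal sketch. This latency tracker is an assumed design, not anything from the posting; it records per-request latencies and reports a nearest-rank percentile, which is how tail latency (e.g., p95) is commonly summarized:

```python
# Illustrative sketch of an inference-latency tracker for an observability
# pipeline. The class name and API are hypothetical.
class LatencyTracker:
    """Records per-request latencies and reports nearest-rank percentiles."""

    def __init__(self):
        self.samples_ms = []

    def record(self, latency_ms):
        self.samples_ms.append(latency_ms)

    def percentile(self, p):
        # Nearest-rank percentile over all recorded samples.
        if not self.samples_ms:
            raise ValueError("no samples recorded")
        ordered = sorted(self.samples_ms)
        rank = max(0, int(round(p / 100 * len(ordered))) - 1)
        return ordered[rank]


tracker = LatencyTracker()
for ms in [12, 15, 11, 240, 14, 13, 16, 12, 18, 14]:
    tracker.record(ms)

print(tracker.percentile(95))  # the single slow outlier dominates the tail
```

A production version would typically export these percentiles as metrics rather than computing them in-process, but the nearest-rank idea is the same.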

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote


Client: UK-based client
Availability: 8 hours per day
Shift: 2 PM IST to 11 PM IST
Experience: 10+ years
Mode: WFH (freelancing)

If you're interested, kindly share your CV to thara.dhanaraj@excelenciaconsulting.com or call 7358452333.

Key Responsibilities:
- Design, build, and maintain ML infrastructure on GCP using tools such as Vertex AI, GKE, Dataflow, BigQuery, and Cloud Functions.
- Develop and automate ML pipelines for model training, validation, deployment, and monitoring using tools like Kubeflow Pipelines, TFX, or Vertex AI Pipelines.
- Work with Data Scientists to productionize ML models and support experimentation workflows.
- Implement model monitoring and alerting for drift, performance degradation, and data quality issues.
- Manage and scale containerized ML workloads using Kubernetes (GKE) and Docker.
- Set up CI/CD workflows for ML using tools like Cloud Build, Bitbucket, Jenkins, or similar.
- Ensure proper security, versioning, and compliance across the ML lifecycle.
- Maintain documentation, artifacts, and reusable templates for reproducibility and auditability.

A GCP MLE certification is a plus.
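The "model monitoring and alerting for drift" responsibility above can be sketched with the Population Stability Index (PSI), one common drift metric: it compares a feature's production distribution against the training-time baseline, bucket by bucket. The histograms and the 0.2 alert threshold below are illustrative conventions, not values from the posting:

```python
# Hedged sketch of drift monitoring via the Population Stability Index.
import math


def psi(expected_counts, actual_counts):
    """PSI over pre-bucketed counts; higher means more drift."""
    e_total = sum(expected_counts)
    a_total = sum(actual_counts)
    score = 0.0
    for e, a in zip(expected_counts, actual_counts):
        # A small floor avoids log(0) for empty buckets.
        e_pct = max(e / e_total, 1e-6)
        a_pct = max(a / a_total, 1e-6)
        score += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return score


baseline = [100, 200, 400, 200, 100]   # training-time histogram
no_drift = [105, 195, 400, 195, 105]   # production looks similar
drifted  = [400, 300, 200, 70, 30]     # mass shifted toward low buckets

print(psi(baseline, no_drift) < 0.2)   # True: stable, no alert
print(psi(baseline, drifted) > 0.2)    # True: alert-worthy drift
```

In a real pipeline this check would run on a schedule against fresh serving data, with the alert wired into the team's paging or ticketing system.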

Posted 2 weeks ago

Apply

14.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About Zscaler
Serving thousands of enterprise customers around the world, including 40% of Fortune 500 companies, Zscaler (NASDAQ: ZS) was founded in 2007 with a mission to make the cloud a safe place to do business and a more enjoyable experience for enterprise users. As the operator of the world’s largest security cloud, Zscaler accelerates digital transformation so enterprises can be more agile, efficient, resilient, and secure. The pioneering, AI-powered Zscaler Zero Trust Exchange™ platform, which is found in our SASE and SSE offerings, protects thousands of enterprise customers from cyberattacks and data loss by securely connecting users, devices, and applications in any location. Named a Best Workplace in Technology by Fortune and others, Zscaler fosters an inclusive and supportive culture that is home to some of the brightest minds in the industry. If you thrive in an environment that is fast-paced and collaborative, and you are passionate about building and innovating for the greater good, come make your next move with Zscaler.

Our Engineering team built the world's largest cloud security platform from the ground up, and we keep building. With more than 100 patents and big plans for enhancing services and increasing our global footprint, the team has made us and our multitenant architecture today's cloud security leader, with more than 15 million users in 185 countries. Bring your vision and passion to our team of cloud architects, software engineers, security experts, and more who are enabling organizations worldwide to harness speed and agility with a cloud-first strategy.

Responsibilities
We're looking for an Architect SRE to be part of our SRE Platform and Tooling team. Reporting to the Director, Software Engineering, you'll be responsible for:
- Developing scalable, secure, and resilient SRE platform and tooling solutions to enhance reliability and performance across cloud, on-prem, and private cloud environments
- Deploying observability tools (OpenTelemetry, Kloudfuse, OpenSearch, Grafana, ServiceNow) to improve system visibility and reduce MTTD/MTTR
- Leading automation efforts in self-healing, CI/CD, configuration, drift, and infrastructure to boost efficiency

What We're Looking For (Minimum Qualifications)
- 14+ years in software development across Cloud-SRE, DevOps, and System Engineering, specializing in Infrastructure, Observability, Automation, and CI/CD
- Expertise in AIOps, AI/ML for operational efficiency and scalability, and building large-scale distributed systems
- Proficient in observability tools and skilled in Kubernetes, container orchestration, and microservices architectures
- Proficient in programming and scripting with Java, Python, Go, or similar languages
- Skilled in OpenStack for private cloud, Kafka, RabbitMQ, event-driven architectures, and the ServiceNow Platform, including CMDB and ITSM solutions

What Will Make You Stand Out (Preferred Qualifications)
- Experience in regulated markets (FedRAMP, SOC2, ISO27001), cyber security, and compliance-driven environments
- Familiarity with AI/ML-driven operational intelligence and AIOps platforms
- Contributions to open-source projects related to SRE and platform engineering

At Zscaler, we are committed to building a team that reflects the communities we serve and the customers we work with. We foster an inclusive environment that values all backgrounds and perspectives, emphasizing collaboration and belonging. Join us in our mission to make doing business seamless and secure.

Benefits
Our Benefits program is one of the most important ways we support our employees. Zscaler proudly offers comprehensive and inclusive benefits to meet the diverse needs of our employees and their families throughout their life stages, including:
- Various health plans
- Time off plans for vacation and sick time
- Parental leave options
- Retirement options
- Education reimbursement
- In-office perks, and more!

By applying for this role, you adhere to applicable laws, regulations, and Zscaler policies, including those related to security and privacy standards and guidelines. Zscaler is committed to providing equal employment opportunities to all individuals. We strive to create a workplace where employees are treated with respect and have the chance to succeed. All qualified applicants will be considered for employment without regard to race, color, religion, sex (including pregnancy or related medical conditions), age, national origin, sexual orientation, gender identity or expression, genetic information, disability status, protected veteran status, or any other characteristic protected by federal, state, or local laws. See more information by clicking on the Know Your Rights: Workplace Discrimination is Illegal link.

Pay Transparency
Zscaler complies with all applicable federal, state, and local pay transparency rules. Zscaler is committed to providing reasonable support (called accommodations or adjustments) in our recruiting processes for candidates who are differently abled, have long-term conditions, mental health conditions or sincerely held religious beliefs, or who are neurodivergent or require pregnancy-related support.
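The MTTD/MTTR metrics that the observability work above aims to reduce are simple averages over incident timelines. A minimal sketch, using hypothetical incident records with epoch-second timestamps (the field names are assumptions for the example, not anything Zscaler-specific):

```python
# Illustrative computation of mean time to detect (MTTD) and mean time to
# resolve (MTTR) from a list of incident records.
def mean_minutes(incidents, start_key, end_key):
    """Average interval in minutes between two timestamps across incidents."""
    deltas = [(i[end_key] - i[start_key]) / 60 for i in incidents]
    return sum(deltas) / len(deltas)


incidents = [
    {"occurred": 0, "detected": 300, "resolved": 3900},   # 5 min / 60 min
    {"occurred": 0, "detected": 900, "resolved": 2700},   # 15 min / 30 min
]

mttd = mean_minutes(incidents, "occurred", "detected")
mttr = mean_minutes(incidents, "detected", "resolved")
print(mttd, mttr)  # 10.0 45.0
```

Better observability tooling shrinks the "occurred" to "detected" gap; better automation and self-healing shrink "detected" to "resolved".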

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Title: Head - Python Engineering

Job Summary:
We are looking for a skilled Python, AI/ML Developer with 8 to 12 years of experience to design, develop, and maintain high-quality back-end systems and applications. The ideal candidate will have expertise in Python and related frameworks, with a focus on building scalable, secure, and efficient software solutions. This role requires a strong problem-solving mindset, collaboration with cross-functional teams, and a commitment to delivering innovative solutions that meet business objectives.

Responsibilities

Application and Back-End Development:
- Design, implement, and maintain back-end systems and APIs using Python frameworks such as Django, Flask, or FastAPI, focusing on scalability, security, and efficiency.
- Build and integrate scalable RESTful APIs, ensuring seamless interaction between front-end systems and back-end services.
- Write modular, reusable, and testable code following Python’s PEP 8 coding standards and industry best practices.
- Develop and optimize robust database schemas for relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB), ensuring efficient data storage and retrieval.
- Leverage cloud platforms like AWS, Azure, or Google Cloud for deploying scalable back-end solutions.
- Implement caching mechanisms using tools like Redis or Memcached to optimize performance and reduce latency.

AI/ML Development:
- Build, train, and deploy machine learning (ML) models for real-world applications, such as predictive analytics, anomaly detection, natural language processing (NLP), recommendation systems, and computer vision.
- Work with popular machine learning and AI libraries/frameworks, including TensorFlow, PyTorch, Keras, and scikit-learn, to design custom models tailored to business needs.
- Process, clean, and analyze large datasets using Python tools such as Pandas, NumPy, and PySpark to enable efficient data preparation and feature engineering.
- Develop and maintain pipelines for data preprocessing, model training, validation, and deployment using tools like MLflow, Apache Airflow, or Kubeflow.
- Deploy AI/ML models into production environments and expose them as RESTful or GraphQL APIs for integration with other services.
- Optimize machine learning models to reduce computational costs and ensure smooth operation in production systems.
- Collaborate with data scientists and analysts to validate models, assess their performance, and ensure their alignment with business objectives.
- Implement model monitoring and lifecycle management to maintain accuracy over time, addressing data drift and retraining models as necessary.
- Experiment with cutting-edge AI techniques such as deep learning, reinforcement learning, and generative models to identify innovative solutions for complex challenges.
- Ensure ethical AI practices, including transparency, bias mitigation, and fairness in deployed models.

Performance Optimization and Debugging:
- Identify and resolve performance bottlenecks in applications and APIs to enhance efficiency.
- Use profiling tools to debug and optimize code for memory and speed improvements.
- Implement caching mechanisms to reduce latency and improve application responsiveness.

Testing, Deployment, and Maintenance:
- Write and maintain unit tests, integration tests, and end-to-end tests using Pytest, Unittest, or Nose.
- Collaborate on setting up CI/CD pipelines to automate testing, building, and deployment processes.
- Deploy and manage applications in production environments with a focus on security, monitoring, and reliability.
- Monitor and troubleshoot live systems, ensuring uptime and responsiveness.

Collaboration and Teamwork:
- Work closely with front-end developers, designers, and product managers to implement new features and resolve issues.
- Participate in Agile ceremonies, including sprint planning, stand-ups, and retrospectives, to ensure smooth project delivery.
- Provide mentorship and technical guidance to junior developers, promoting best practices and continuous improvement.

Required Skills and Qualifications

Technical Expertise:
- Strong proficiency in Python and its core libraries, with hands-on experience in frameworks such as Django, Flask, or FastAPI.
- Solid understanding of RESTful API development, integration, and optimization.
- Experience working with relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB).
- Familiarity with containerization tools like Docker and orchestration platforms like Kubernetes.
- Expertise in using Git for version control and collaborating in distributed teams.
- Knowledge of CI/CD pipelines and tools like Jenkins, GitHub Actions, or CircleCI.
- Strong understanding of software development principles, including OOP, design patterns, and MVC architecture.

Preferred Skills:
- Experience with asynchronous programming using libraries like asyncio, Celery, or RabbitMQ.
- Knowledge of data visualization tools (e.g., Matplotlib, Seaborn, Plotly) for generating insights.
- Exposure to machine learning frameworks (e.g., TensorFlow, PyTorch, scikit-learn) is a plus.
- Familiarity with big data frameworks like Apache Spark or Hadoop.
- Experience with serverless architecture using AWS Lambda, Azure Functions, or Google Cloud Run.

Soft Skills:
- Strong problem-solving abilities with a keen eye for detail and quality.
- Excellent communication skills to effectively collaborate with cross-functional teams.
- Adaptability to changing project requirements and emerging technologies.
- Self-motivated with a passion for continuous learning and innovation.

Education:
- Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field.
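The caching responsibility above (Redis or Memcached in production) can be illustrated in-process with `functools.lru_cache` from the standard library; `expensive_query` and its return shape are hypothetical stand-ins for a slow database round trip:

```python
# Illustrative caching sketch: repeated lookups for the same key skip the
# expensive backend call. In production this role would use Redis/Memcached;
# lru_cache demonstrates the same memoization idea in one process.
from functools import lru_cache

CALLS = {"count": 0}


@lru_cache(maxsize=256)
def expensive_query(user_id):
    CALLS["count"] += 1          # pretend this is a slow database hit
    return {"user_id": user_id, "tier": "gold" if user_id % 2 else "silver"}


for uid in [1, 2, 1, 1, 2]:      # repeated lookups are served from the cache
    expensive_query(uid)

print(CALLS["count"])  # 2: only the first lookup per user touched the backend
```

Note that `lru_cache` requires hashable arguments and, unlike Redis, does not share entries across processes; it is the smallest faithful model of the pattern, not a substitute.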

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site


Job Title: Azure DevOps Engineer
Location: Pune
Experience: 5-7 years

Job Description
- 5+ years of Platform Engineering, DevOps, or Cloud Infrastructure experience
- Platform Thinking: Strong understanding of platform engineering principles, developer experience, and self-service capabilities
- Azure Expertise: Advanced knowledge of Azure services including compute, networking, storage, and managed services
- Infrastructure as Code: Proficient in Terraform, ARM templates, or Azure Bicep with hands-on experience in large-scale deployments

DevOps and Automation
- CI/CD Pipelines: Expert-level experience with Azure DevOps, GitHub Actions, or Jenkins
- Automation Scripting: Strong programming skills in Python, PowerShell, or Bash for automation and tooling
- Git Workflows: Advanced understanding of Git branching strategies, pull requests, and code review processes

Cloud Architecture and Security
- Cloud Architecture: Deep understanding of cloud design patterns, microservices, and distributed systems
- Security Best Practices: Implementation of security scanning, compliance automation, and zero-trust principles
- Networking: Advanced Azure networking concepts including VNets, NSGs, Application Gateways, and hybrid connectivity
- Identity Management: Experience with Azure Active Directory, RBAC, and identity governance

Monitoring and Observability
- Azure Monitor: Advanced experience with Azure Monitor, Log Analytics, and Application Insights
- Metrics and Alerting: Implementation of comprehensive monitoring strategies and incident response
- Logging Solutions: Experience with centralized logging and log analysis platforms
- Performance Optimization: Proactive performance monitoring and optimization techniques

Roles and Responsibilities

Platform Development and Management
- Design and build self-service platform capabilities that enable development teams to deploy and manage applications independently
- Create and maintain platform abstractions that simplify complex infrastructure for development teams
- Develop internal developer platforms (IDP) with standardized templates, workflows, and guardrails
- Implement platform-as-a-service (PaaS) solutions using Azure native services
- Establish platform standards, best practices, and governance frameworks

Infrastructure as Code (IaC)
- Design and implement Infrastructure as Code solutions using Terraform, ARM templates, and Azure Bicep
- Create reusable infrastructure modules and templates for consistent environment provisioning
- Implement GitOps workflows for infrastructure deployment and management
- Maintain infrastructure state management and drift detection mechanisms
- Establish infrastructure testing and validation frameworks

DevOps and CI/CD
- Build and maintain enterprise-grade CI/CD pipelines using Azure DevOps, GitHub Actions, or similar tools
- Implement automated testing strategies including infrastructure testing, security scanning, and compliance checks
- Create deployment strategies including blue-green, canary, and rolling deployments
- Establish branching strategies and release management processes
- Implement secrets management and secure deployment practices

Platform Operations and Reliability
- Implement monitoring, logging, and observability solutions for platform services
- Establish SLAs and SLOs for platform services and developer experience metrics
- Create self-healing and auto-scaling capabilities for platform components
- Implement disaster recovery and business continuity strategies
- Maintain platform security posture and compliance requirements

Preferred Qualifications
- Bachelor’s degree in computer science or a related field (or equivalent work experience)
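The "drift detection mechanisms" bullet above boils down to comparing the desired state declared in code against the actual state reported by the cloud API (Terraform's `plan` does this at scale). A minimal sketch with invented resource names and fields:

```python
# Illustrative infrastructure drift detection: diff desired vs. actual state.
# Resource names, sizes, and fields are hypothetical examples.
def detect_drift(desired, actual):
    """Return {resource: {field: (desired, actual)}} for mismatched fields."""
    drift = {}
    for name, spec in desired.items():
        live = actual.get(name, {})
        changed = {k: (v, live.get(k)) for k, v in spec.items() if live.get(k) != v}
        if changed:
            drift[name] = changed
    return drift


desired = {"vm-web-01": {"size": "Standard_B2s", "tags": "prod"}}
actual = {"vm-web-01": {"size": "Standard_B4ms", "tags": "prod"}}  # resized by hand

print(detect_drift(desired, actual))
# {'vm-web-01': {'size': ('Standard_B2s', 'Standard_B4ms')}}
```

A GitOps workflow would run a check like this on a schedule and either alert or automatically re-apply the declared configuration when drift is found.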

Posted 2 weeks ago

Apply

1.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


Every second, the internet gets messier. Content floods in from humans and machines alike—some helpful, some harmful, and most of it unstructured. Forums, blogs, knowledge bases, event pages, community threads: these are the lifeblood of digital platforms, but they also carry risk. Left unchecked, they can drift into chaos, compromise brand integrity, or expose users to misinformation and abuse. The scale is too big for humans alone, and AI isn’t good enough to do it alone—yet. That’s where we come in.

Our team is rebuilding content integrity from the ground up by combining human judgment with generative AI. We don’t treat AI like a sidekick or a threat. Every moderator on our team works side-by-side with GenAI tools to classify, tag, escalate, and refine content decisions at speed. The edge cases you annotate and the feedback you give train smarter systems, reduce false positives, and make AI moderation meaningfully better with every cycle.

This isn’t a job where you manually slog through a never-ending moderation queue. It’s not an outsourced content cop role. You’ll spend your days interacting directly with AI to make decisions, flag patterns, streamline workflows, and make sure the right content sees the light of day. If you’re the kind of person who thrives on structured work, enjoys hunting down ambiguity, and finds satisfaction in operational clarity, this job will feel like a control panel for the future of content quality.

You’ll be joining a team obsessed with platform integrity and operational scale. Your job is to keep the machine running smoothly: managing queues, moderating edge cases, annotating training data, and making feedback loops tighter and faster. If you’ve used tools like ChatGPT to get real work done—not just writing poems or brainstorming ideas, but actually processing or classifying information—this is your next level.

What You Will Be Doing
- Review and moderate user- and AI-generated content using GenAI tools to enforce platform policies and maintain a safe, high-quality environment
- Coordinate content workflows across tools and teams, ensuring timely processing, clear tracking, and smooth handoffs
- Tag edge cases, annotate training data, and provide structured feedback to improve the accuracy and performance of AI moderation systems

What You Won’t Be Doing
- A boring content moderation job focused on manually reviewing blog post after blog post
- An entry-level admin role with low agency or impact, just checking boxes in a queue

Basic Requirements
- At least 1 year of professional work experience
- Hands-on experience using GenAI tools (e.g., ChatGPT, Claude, Gemini) in a professional, academic, or personal productivity context
- Strong English writing skills

About IgniteTech
If you want to work hard at a company where you can grow and be a part of a dynamic team, join IgniteTech! Through our portfolio of leading enterprise software solutions, we ignite business performance for thousands of customers globally. We’re doing it in an entirely remote workplace that is focused on building teams of top talent and operating in a model that provides challenging opportunities and personal flexibility.

A career with IgniteTech is challenging and fast-paced. We are always looking for energetic and enthusiastic employees to join our world-class team. We offer opportunities for personal contribution and promote career development. IgniteTech is an Affirmative Action, Equal Opportunity Employer that values the strength that diversity brings to the workplace.

There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you!

Working with us
This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $15 USD/hour, which equates to $30,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic.

Crossover Job Code: LJ-5593-IN-Hyderaba-AIContentRevie.002

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site


MACHINE-LEARNING ENGINEER

ABOUT US
Datacultr is a global Digital Operating System for Risk Management and Debt Recovery; we drive collection efficiencies and reduce delinquencies and non-performing loans (NPLs). Datacultr is a digital-only provider of consumer engagement, recovery, and collection solutions, helping consumer lending, retail, telecom, and fintech organizations expand and grow their business in the under-penetrated New to Credit and Thin File segments. We are helping millions of new-to-credit consumers across emerging markets access formal credit and begin their journey towards financial health. We have clients across India, South Asia, South East Asia, Africa, and LATAM. Datacultr is headquartered in Dubai, with offices in Abu Dhabi, Singapore, Ho Chi Minh City, Nairobi, and Mexico City; our Development Center is located in Gurugram, India.

ORGANIZATION’S GROWTH PLAN
Datacultr’s vision is to enable convenient financing opportunities for consumers, entrepreneurs, and small merchants, helping them combat the socio-economic problems this segment faces due to restricted access to financing. We are on a mission to enable 35 million unbanked and under-served people to access financial services by the end of 2026.

Position Overview
We’re looking for an experienced Machine Learning Engineer to design, deploy, and scale production-grade ML systems. You’ll work on high-impact projects involving deep learning, NLP, and real-time data processing, owning everything from model development to deployment and monitoring while collaborating with cross-functional teams to deliver impactful, production-ready solutions.

Core Responsibilities

Representation & Embedding Layer
- Evaluate, fine-tune, and deploy multilingual embedding models (e.g., OpenAI text-embedding-3, Sentence-T5, Cohere, or in-house MiniLM) on AWS GPU or serverless endpoints.
- Implement device-level aggregation to produce stable vectors for downstream clustering.

Cohort Discovery Services
- Build scalable clustering workflows in Spark/Flink or Python on Airflow.
- Serve cluster IDs and metadata via a feature store / real-time API for consumption.

MLOps & Observability
- Own CI/CD for model training and deployment.
- Instrument latency, drift, bias, and cost dashboards; automate rollback policies.

Experimentation & Optimisation
- Run A/B and multivariate tests comparing embedding cohorts against legacy segmentation; analyse lift in repayment or engagement.
- Iterate on quantisation, distillation, and batching to hit strict cost-latency SLAs.

Collaboration & Knowledge-sharing
- Work hand-in-hand with Product & Data Strategy to translate cohort insights into actionable product features.

Key Requirements
- 5–8 years of hands-on ML engineering / NLP experience; at least 2 years deploying transformer-based models in production.
- Demonstrated ownership of pipelines processing ≥100 million events per month.
- Deep proficiency in Python, PyTorch/TensorFlow, the Hugging Face ecosystem, and SQL on cloud warehouses.
- Familiarity with vector databases and RAG architectures.
- Working knowledge of credit-risk or high-volume messaging platforms is a plus.
- Degree in CS, EE, Statistics, or a related field.

Tech Stack You’ll Drive
- Model & Serving: PyTorch, Hugging Face, Triton, BentoML
- Data & Orchestration: Airflow, Spark/Flink, Kafka
- Vector & Storage: Qdrant/Weaviate, S3/GCS, Parquet/Iceberg
- Cloud & Infra: AWS (EKS, SageMaker)
- Monitoring: Prometheus, Loki, Grafana

What We Offer
- Opportunity to shape the future of unsecured lending in emerging markets
- Competitive compensation package
- Professional development and growth opportunities
- Collaborative, innovation-focused work environment
- Comprehensive health and wellness benefits

Location & Work Model
- Immediate joining possible
- Work from office only
- Based in Gurugram, Sector 65

Kindly share your updated profile with us at careers@datacultr.com, and we will guide you further with this opportunity.
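The "device-level aggregation to produce stable vectors" responsibility above is commonly done by mean-pooling the embeddings of a device's individual events into one vector. A toy sketch with 4-dimensional hand-written vectors; real embeddings would come from a model such as those named in the posting:

```python
# Illustrative device-level aggregation: mean-pool per-event embeddings into
# a single stable device vector, then check it stays close to its inputs.
import math


def mean_pool(vectors):
    """Element-wise mean of equal-length vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]


def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


device_events = [          # toy embeddings of three events from one device
    [0.9, 0.1, 0.0, 0.2],
    [0.8, 0.2, 0.1, 0.1],
    [1.0, 0.0, 0.1, 0.3],
]

device_vector = mean_pool(device_events)
print(device_vector)  # roughly [0.9, 0.1, 0.067, 0.2]
```

The pooled vector smooths out per-event noise, which is what makes it "stable" enough to feed downstream clustering; cosine similarity between the pooled vector and any single event stays high when the events are consistent.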

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote


Job Title: AI/ML Developer (5 Years Experience) Location : Remote Job Type : Full-time Experience: 5 Years Job Summary: We are looking for an experienced AI/ML Developer with at least 5 years of hands-on experience in designing, developing, and deploying machine learning models and AI-driven solutions. The ideal candidate should have strong knowledge of machine learning algorithms, data preprocessing, model evaluation, and experience with production-level ML pipelines. Key Responsibilities Model Development : Design, develop, train, and optimize machine learning and deep learning models for classification, regression, clustering, recommendation, NLP, or computer vision tasks. Data Engineering : Work with data scientists and engineers to preprocess, clean, and transform structured and unstructured datasets. ML Pipelines : Build and maintain scalable ML pipelines using tools such as MLflow, Kubeflow, Airflow, or SageMaker. Deployment : Deploy ML models into production using REST APIs, containers (Docker), or cloud services (AWS/GCP/Azure). Monitoring and Maintenance : Monitor model performance and implement retraining pipelines or drift detection techniques. Collaboration : Work cross-functionally with data scientists, software engineers, and product managers to integrate AI capabilities into applications. Research and Innovation : Stay current with the latest advancements in AI/ML and recommend new techniques or tools where applicable. Required Skills & Qualifications Bachelor's or Master’s degree in Computer Science, Artificial Intelligence, Data Science, or a related field. Minimum 5 years of experience in AI/ML development. Proficiency in Python and ML libraries such as Scikit-learn, TensorFlow, PyTorch, XGBoost, or LightGBM. Strong understanding of statistics, data structures, and ML/DL algorithms. Experience with cloud platforms (AWS/GCP/Azure) and deploying ML models in production. Experience with CI/CD tools and containerization (Docker, Kubernetes). 
Familiarity with SQL and NoSQL databases. Excellent problem-solving and communication skills. Preferred Qualifications Experience with NLP frameworks (e.g., Hugging Face Transformers, spaCy, NLTK). Knowledge of MLOps best practices and tools. Experience with version control systems like Git. Familiarity with big data technologies (Spark, Hadoop). Contributions to open-source AI/ML projects or publications in relevant fields.
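The responsibilities above mention retraining pipelines and drift detection techniques. One widely used drift check is the Population Stability Index (PSI); a minimal sketch follows. The bucket edges, scores, and the 0.2 alert threshold are illustrative conventions, not this employer's actual configuration.

```python
# Sketch of drift detection via the Population Stability Index (PSI):
# compare a feature's training distribution against live traffic.
import math

def psi(expected, actual, edges):
    """PSI over pre-defined bucket edges; higher means more drift."""
    def share(values):
        counts = [0] * (len(edges) - 1)
        for v in values:
            for i in range(len(edges) - 1):
                if edges[i] <= v < edges[i + 1]:
                    counts[i] += 1
                    break
        total = max(len(values), 1)
        # Small floor avoids log(0) for empty buckets.
        return [max(c / total, 1e-4) for c in counts]

    e, a = share(expected), share(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train_scores = [0.1, 0.2, 0.25, 0.3, 0.4, 0.45, 0.5, 0.6]
live_scores = [0.6, 0.7, 0.75, 0.8, 0.85, 0.9, 0.9, 0.95]
edges = [0.0, 0.25, 0.5, 0.75, 1.0]
value = psi(train_scores, live_scores, edges)
drifted = value > 0.2  # common rule of thumb for a significant shift
```

A monitoring job would typically run a check like this on a schedule and trigger a retraining pipeline when the threshold is crossed.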

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


Position : Environmental Data Scientist Salary: up to ₹50,000 pm Location: Ahmedabad [Only preferring candidates from Gujarat] Experience: 2+ Years We are seeking a research-driven Environmental Data Scientist to lead the development of advanced algorithms that enhance the accuracy, reliability, and performance of air quality sensor data. This role goes beyond traditional data science — it focuses on solving real-world challenges in environmental sensing, such as sensor drift, cross-interference, and data anomalies. Key Responsibilities: Design and implement algorithms to improve the accuracy, stability, and interpretability of air quality sensor data (e.g., calibration, anomaly detection, cross-interference mitigation, and signal correction) Conduct in-depth research on sensor behavior and environmental impact to inform algorithm development Collaborate with software and embedded systems teams to integrate these algorithms into cloud or edge-based systems Analyze large, complex environmental datasets using Python, R, or similar tools Continuously validate algorithm performance using lab and field data; iterate for improvement Develop tools and dashboards to visualize sensor behavior and algorithm impact Assist in environmental research projects with statistical analysis and data interpretation Document algorithm design, testing procedures, and research findings for internal use and knowledge sharing Support team members with data-driven insights and code-level contributions as needed Assist other team members with writing efficient code and overcoming programming challenges Education/Experience Required Skills & Qualifications Bachelor’s or Master’s degree in one of the following fields: Environmental Engineering / Science, Chemical Engineering, Electronics / Instrumentation Engineering, Computer Science / Data Science, Physics / Atmospheric Science (with data or sensing background) 1-2 years of hands-on experience working with sensor data or IoT-based environmental 
monitoring systems Strong knowledge of algorithm development, signal processing, and statistical analysis Proficiency in Python (pandas, NumPy, scikit-learn, etc.) or R, with experience handling real-world sensor datasets Ability to design and deploy models in a cloud or embedded environment. Excellent problem-solving and communication skills. Passion for environmental sustainability and clean-tech. Preferred Qualifications: Familiarity with time-series anomaly detection, sensor fusion, signal noise reduction techniques or geospatial data processing. Exposure to air quality sensor technologies, environmental sensor datasets, or dispersion modeling. For a quick response, please fill out this form: https://docs.google.com/forms/d/e/1FAIpQLSeBy7r7b48Yrqz4Ap6-2g_O7BuhIjPhcj-5_3ClsRAkYrQtiA/viewform?usp=sharing&ouid=106739769571157586077
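The calibration work described above often starts with fitting a simple linear correction that maps raw low-cost-sensor readings onto co-located reference-monitor values. A minimal sketch, with made-up readings (a real pipeline would also model temperature/humidity cross-interference):

```python
# Sketch of a single-pollutant calibration step: ordinary least squares
# mapping raw sensor output onto reference-monitor concentrations.
def fit_linear(xs, ys):
    """Ordinary least squares for y ≈ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

raw = [10.0, 20.0, 30.0, 40.0]        # sensor output (arbitrary units)
reference = [25.0, 45.0, 65.0, 85.0]  # co-located reference monitor (µg/m³)
a, b = fit_linear(raw, reference)
calibrated = [a * x + b for x in raw]
```

Validating such a fit against held-out field data, and refitting as the sensor drifts, is exactly the iteration loop the posting describes.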

Posted 2 weeks ago

Apply

9.0 years

0 Lacs

Gurugram, Haryana, India

On-site


About Markovate. At Markovate, we don’t just follow trends, we drive them. We transform businesses through innovative AI and digital solutions that turn vision into reality. Our team harnesses breakthrough technologies to craft bespoke strategies that align seamlessly with our clients' ambitions. From AI consulting and Gen AI development to pioneering AI agents and agentic AI, we empower our partners to lead their industries with forward-thinking precision. Overview We are seeking a highly experienced and innovative Senior Data Engineer with a strong background in hybrid cloud data integration, pipeline orchestration, and AI-driven data modelling. This role is responsible for designing, building, and optimizing robust, scalable, and production-ready data pipelines across both AWS and Azure platforms, supporting modern data architectures such as CEDM and Data Vault. Requirements: 9+ years of experience in data engineering and data architecture. Excellent communication and interpersonal skills, with the ability to engage with teams. Strong problem-solving, decision-making, and conflict-resolution abilities. Proven ability to work independently and lead cross-functional teams. Ability to work in a fast-paced, dynamic environment and handle sensitive issues with discretion and professionalism. Ability to maintain confidentiality and handle sensitive information with attention to detail and discretion. The candidate must have strong work ethics and trustworthiness. Must be highly collaborative and team-oriented. Responsibilities: Design and develop hybrid ETL/ELT pipelines using AWS Glue and Azure Data Factory (ADF). Process files from AWS S3 and Azure Data Lake Gen2, including schema validation and data profiling. Implement event-based orchestration using AWS Step Functions and Apache Airflow (Astronomer). Develop and maintain bronze → silver → gold data layers using DBT or Coalesce. 
Create scalable ingestion workflows using Airbyte, AWS Transfer Family, and Rivery. Integrate with metadata and lineage tools like Unity Catalog and Open Metadata. Build reusable components for schema enforcement, EDA, and alerting (e.g., MS Teams). Work closely with QA teams to integrate test automation and ensure data quality. Collaborate with cross-functional teams including data scientists and business stakeholders to align solutions with AI/ML use cases. Document architectures, pipelines, and workflows for internal stakeholders. Experience with cloud platforms: AWS (Glue, Step Functions, Lambda, S3, CloudWatch, SNS, Transfer Family) and Azure (ADF, ADLS Gen2, Azure Functions, Event Grid). Skilled in transformation and ELT tools: Databricks (PySpark), DBT, Coalesce, and Python. Proficient in data ingestion using Airbyte, Rivery, SFTP/Excel files, and SQL Server extracts. Strong understanding of data modeling techniques including CEDM, Data Vault 2.0, and Dimensional Modelling. Hands-on experience with orchestration tools such as AWS Step Functions, Airflow (Astronomer), and ADF Triggers. Expertise in monitoring and logging with CloudWatch, AWS Glue Metrics, MS Teams Alerts, and Azure Data Explorer (ADX). Familiar with data governance and lineage tools: Unity Catalog, OpenMetadata, and schema drift detection. Proficient in version control and CI/CD using GitHub, Azure DevOps, CloudFormation, Terraform, and ARM templates. Experienced in data validation and exploratory data analysis with pandas profiling, AWS Glue Data Quality, and Great to have: Experience with cloud data platforms (e.g., AWS, Azure, GCP) and their data and AI services. Knowledge of ETL tools and frameworks (e.g., Apache NiFi, Talend, Informatica). Deep understanding of AI/Generative AI concepts and frameworks (e.g., TensorFlow, PyTorch, Hugging Face, OpenAI APIs). Experience with data modeling, data structures, and database design. 
Proficiency with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake). Hands-on experience with big data technologies (e.g., Hadoop, Spark, Kafka). Proficiency in SQL and at least one programming language (e.g., Python). What it's like to be at Markovate: At Markovate, we thrive on collaboration and embrace every innovative idea. We invest in continuous learning to keep our team ahead in the AI/ML landscape. Transparent communication is key; every voice at Markovate is valued. Our agile, data-driven approach transforms challenges into opportunities. We offer flexible work arrangements that empower creativity and balance. Recognition is part of our DNA; your achievements drive our success. Markovate is committed to sustainable practices and positive community impact. Our people-first culture means your growth and well-being are central to our mission. Location: hybrid model, 2 days onsite. (ref:hirist.tech)
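The pipeline responsibilities above include schema validation and schema-drift detection for incoming files. A minimal sketch of such a check, comparing the columns and types a pipeline expects against what a new file actually delivers, is below; the field names and types are illustrative only, not this employer's schemas.

```python
# Sketch of a schema-drift check: classify differences between an
# expected schema and the schema observed on an incoming file.
EXPECTED = {"loan_id": "string", "amount": "double", "opened_at": "timestamp"}

def schema_drift(expected, observed):
    """Return added, removed, and type-changed columns."""
    added = sorted(set(observed) - set(expected))
    removed = sorted(set(expected) - set(observed))
    changed = sorted(c for c in expected
                     if c in observed and expected[c] != observed[c])
    return {"added": added, "removed": removed, "changed": changed}

incoming = {"loan_id": "string", "amount": "string", "channel": "string"}
report = schema_drift(EXPECTED, incoming)
# A pipeline might fail fast on "removed"/"changed" and only warn on "added".
```

In practice the expected schema would come from a catalog such as Unity Catalog or OpenMetadata, and the observed one from profiling the landed file.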

Posted 2 weeks ago

Apply

5.0 years

5 - 9 Lacs

Hyderābād

On-site

JLL supports the Whole You, personally and professionally. Our people at JLL are shaping the future of real estate for a better world by combining world class services, advisory and technology to our clients. We are committed to hiring the best, most talented people in our industry; and we support them through professional growth, flexibility, and personalized benefits to manage life in and outside of work. Whether you’ve got deep experience in commercial real estate, skilled trades, and technology, or you’re looking to apply your relevant experience to a new industry, we empower you to shape a brighter way forward so you can thrive professionally and personally. The BMS Engineer is responsible for implementing and maintaining Building Management Systems that control and monitor various building functions such as HVAC, lighting, security, and energy management. This role requires a blend of technical expertise, problem-solving skills, and the ability to work with diverse stakeholders. Required Qualifications and skills: Diploma/Bachelor's degree in Electrical / Mechanical Engineering or related field 5+ years of experience in BMS Operations, Design implementation, and maintenance Proficiency in BMS software platforms (e.g. Schneider Electric, Siemens, Johnson Controls) Strong understanding of HVAC systems and building operations Knowledge of networking protocols (e.g. BACnet, Modbus, LonWorks) Familiarity with energy management principles and sustainability practices Excellent problem-solving and analytical skills Strong communication and interpersonal abilities Ability to work independently and as part of a team Preferred Qualifications: Professional engineering license (P.E.) or relevant industry certifications Experience with integration of IoT devices and cloud-based systems Knowledge of building codes and energy efficiency standards Project management experience Programming skills (e.g., Python, C++, Java) Roles and Responsibilities of BMS Engineer 1. 
Troubleshoot and resolve issues with BMS 2. Optimize building performance and energy efficiency through BMS tuning 3. Check LL BMS critical parameters & communicate with LL in case parameters go beyond operating threshold 4. Develop and maintain system documentation and operational procedures. Monitor BMS OEM PPM schedule & ensure diligent execution. Monitor SLAs & inform WTSMs in the event of breach. 5. Ensure real time monitoring of Hot / Cold Prism Tickets & resolve on priority. 6. Preparation of Daily / Weekly & Monthly reports comprising Uptime / Consumption with break-up / Temperature trends / Alarms & equipment MTBF 7. Ensure adherence to Incident escalation process & training to Ground staff. 8. Coordination with BMS OEM for ongoing operational issues (Graphics modification / sensor calibration / controller configuration / Hardware replacement) 9. Supporting annual power down by gracefully shutting down the system & bringing it up post completion of the activity. 10. Ensure health of FLS (Panels / Smoke Detectors) & conduct periodic checks for drift levels. 11. Provide technical support and training to the facility management team 12. Collaborate with other engineering disciplines, WPX Team and project stakeholders and make changes to the building environment if needed. If this job description resonates with you, we encourage you to apply even if you don’t meet all of the requirements. We’re interested in getting to know you and what you bring to the table! Personalized benefits that support personal well-being and growth: JLL recognizes the impact that the workplace can have on your wellness, so we offer a supportive culture and comprehensive benefits package that prioritizes mental, physical and emotional health. About JLL – We’re JLL—a leading professional services and investment management firm specializing in real estate. 
We have operations in over 80 countries and a workforce of over 102,000 individuals around the world who help real estate owners, occupiers and investors achieve their business ambitions. As a global Fortune 500 company, we also have an inherent responsibility to drive sustainability and corporate social responsibility. That’s why we’re committed to our purpose to shape the future of real estate for a better world. We’re using the most advanced technology to create rewarding opportunities, amazing spaces and sustainable real estate solutions for our clients, our people, and our communities. Our core values of teamwork, ethics and excellence are also fundamental to everything we do and we’re honored to be recognized with awards for our success by organizations both globally and locally. Creating a diverse and inclusive culture where we all feel welcomed, valued and empowered to achieve our full potential is important to who we are today and where we’re headed in the future. And we know that unique backgrounds, experiences and perspectives help us think bigger, spark innovation and succeed together.
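Responsibility 3 above describes checking critical BMS parameters and escalating when they go beyond operating thresholds. Logically that reduces to comparing live readings against operating bands; a minimal sketch follows, where every parameter name and limit is illustrative only (real bands come from the site's operating procedures).

```python
# Sketch of a BMS threshold check: flag readings outside their
# (low, high) operating band for escalation.
LIMITS = {
    "chw_supply_temp_c": (5.5, 8.5),   # chilled-water supply temperature
    "ups_room_temp_c": (18.0, 27.0),
    "humidity_pct": (30.0, 70.0),
}

def out_of_band(readings, limits):
    """Return the parameters whose reading falls outside its (lo, hi) band."""
    alerts = {}
    for name, value in readings.items():
        lo, hi = limits[name]
        if not (lo <= value <= hi):
            alerts[name] = {"value": value, "band": (lo, hi)}
    return alerts

readings = {"chw_supply_temp_c": 9.2, "ups_room_temp_c": 22.0,
            "humidity_pct": 41.0}
alerts = out_of_band(readings, LIMITS)  # only the CHW temperature breaches
```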

Posted 2 weeks ago

Apply

1.0 years

0 Lacs

Chennai

On-site

Company: Qualcomm India Private Limited Job Area: Engineering Group, Engineering Group > Software Test Engineering General Summary: We are seeking an AI System-Level Test Engineer to lead end-to-end testing of Retrieval-Augmented Generation (RAG) AI systems for Hybrid, Edge-AI Inference solutions. This role will focus on designing, developing, and executing comprehensive test strategies for evaluating the reliability, accuracy, usability and scalability of large-scale AI models integrated with external knowledge retrieval systems. The ideal candidate needs to have deep expertise in AI testing methodologies, experience with large language models (LLMs), expertise in building test solutions for AI Inference stacks, RAG, search/retrieval architecture, and a strong background in automation frameworks, performance validation, and building E2E automation architecture. Experience testing large-scale generative AI applications, familiarity with LangChain, LlamaIndex, or other RAG-specific frameworks, and knowledge of adversarial testing techniques for AI robustness are preferred qualifications. Key Responsibilities: Test Strategy & Planning Define end-to-end test strategies for RAG, retrieval, generation, response coherence, and knowledge correctness Develop test plans & automation frameworks to validate system performance across real-world scenarios. Hands-on experience in benchmarking and optimizing Deep Learning Models on AI Accelerators/GPUs Implement E2E solutions to integrate Inference systems with customer software workflows Identify and implement metrics to measure retrieval accuracy, LLM response quality Test Automation Build automated pipelines for regression, integration, and adversarial testing of RAG workflows. Validate search relevance, document ranking, and context injection into LLMs using rigorous test cases. Collaborate with ML engineers and data scientists to debug model failures and identify areas for improvement. 
Conduct scalability and latency tests for retrieval-heavy applications. Analyze failure patterns, drift detection, and robustness against hallucinations and misinformation. Collaboration Work closely with AI research, engineering teams & customer teams to align testing with business requirements. Generate test reports, dashboards, and insights to drive model improvements. Stay up to date with the latest AI testing frameworks, LLM evaluation benchmarks, and retrieval models. Required Qualifications: 1+ years of experience in AI/ML system testing, software quality engineering, or related fields. Bachelor’s or master’s degree in computer science engineering / data science / AI/ML Hands-on experience with test automation frameworks (e.g., PyTest, Robot Framework, JMeter). Proficiency in Python, SQL, API testing, vector databases (e.g., FAISS, Weaviate, Pinecone) and retrieval pipelines. Experience with ML model validation metrics (e.g., BLEU, ROUGE, MRR, NDCG). Expertise in CI/CD pipelines, cloud platforms (AWS/GCP/Azure), and containerization (Docker, Kubernetes). Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field. Applicants : Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries). 
Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies : Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.
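Among the retrieval metrics the posting lists (BLEU, ROUGE, MRR, NDCG), Mean Reciprocal Rank is the simplest to compute: for each query, score 1/rank of the first relevant document, then average across queries. A minimal sketch with made-up document IDs:

```python
# Sketch of Mean Reciprocal Rank (MRR), a standard retrieval-quality metric
# for validating search relevance and document ranking.
def mrr(ranked_results, relevant):
    """ranked_results: one ranked doc-id list per query.
    relevant: sets of relevant doc ids, aligned by index."""
    total = 0.0
    for docs, rel in zip(ranked_results, relevant):
        for rank, doc in enumerate(docs, start=1):
            if doc in rel:
                total += 1.0 / rank
                break  # only the first relevant hit counts
    return total / len(ranked_results)

queries = [["d3", "d1", "d7"], ["d2", "d9", "d4"], ["d5", "d6", "d8"]]
gold = [{"d1"}, {"d2"}, {"d8"}]
score = mrr(queries, gold)  # (1/2 + 1/1 + 1/3) / 3
```

A RAG test suite would run a metric like this over a labeled query set on every build and fail the pipeline when the score regresses.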

Posted 2 weeks ago

Apply

1.0 - 3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


Greetings from Synergy Resource Solutions, a leading Recruitment Consultancy. Our client is a Smart Air Quality Monitoring Solutions company offering data-driven environmental solutions for better decision making. Using our sensor-based hardware, we monitor various environmental parameters related to air quality, noise, odour, weather, radiation etc. Designation: - Environmental Data Scientist (Ahmedabad) Location: - Ahmedabad Experience : - 1 - 3 years Work timings: 10:00 am to 6:30 pm (5 days working) Job Description: We are seeking a research-driven Environmental Data Scientist to lead the development of advanced algorithms that enhance the accuracy, reliability, and performance of air quality sensor data. This role goes beyond traditional data science — it focuses on solving real-world challenges in environmental sensing, such as sensor drift, cross-interference, and data anomalies. Key Responsibilities: ● Design and implement algorithms to improve the accuracy, stability, and interpretability of air quality sensor data (e.g., calibration, anomaly detection, cross-interference mitigation, and signal correction) ● Conduct in-depth research on sensor behavior and environmental impact to inform algorithm development ● Collaborate with software and embedded systems teams to integrate these algorithms into cloud or edge-based systems ● Analyze large, complex environmental datasets using Python, R, or similar tools ● Continuously validate algorithm performance using lab and field data; iterate for improvement ● Develop tools and dashboards to visualize sensor behavior and algorithm impact ● Assist in environmental research projects with statistical analysis and data interpretation ● Document algorithm design, testing procedures, and research findings for internal use and knowledge sharing ● Support team members with data-driven insights and code-level contributions as needed ● Assist other team members with writing efficient code and overcoming programming challenges 
Required Skills & Qualifications ● Bachelor’s or Master’s degree in one of the following fields: Environmental Engineering / Science, Chemical Engineering, Electronics / Instrumentation Engineering, Computer Science / Data Science, Physics / Atmospheric Science (with data or sensing background) ● 1-2 years of hands-on experience working with sensor data or IoT-based environmental monitoring systems ● Strong knowledge of algorithm development, signal processing, and statistical analysis ● Proficiency in Python (pandas, NumPy, scikit-learn, etc.) or R, with experience handling real-world sensor datasets ● Ability to design and deploy models in a cloud or embedded environment. ● Excellent problem-solving and communication skills. ● Passion for environmental sustainability and clean-tech. Preferred Qualifications: ● Familiarity with time-series anomaly detection, sensor fusion, signal noise reduction techniques or geospatial data processing. ● Exposure to air quality sensor technologies, environmental sensor datasets, or dispersion modeling. Benefits: Competitive salary and benefits package Opportunities for professional growth and development A dynamic and collaborative work environment If your profile matches the requirement and you are interested in this job, please share your updated resume with details of your present salary, expected salary & notice period.

Posted 2 weeks ago

Apply

1.5 - 2.0 years

0 Lacs

Sahibzada Ajit Singh Nagar, Punjab, India

On-site


Job Summary: We are looking for a highly motivated and analytical Data Scientist / Machine Learning (ML) Engineer / AI Specialist with 1.5–2 years of experience in health data analysis, particularly with data sourced from wearable devices such as smartwatches and fitness trackers. The ideal candidate will be proficient in developing data models, analyzing complex datasets, and translating insights into actionable strategies that enhance health-related applications. Key Responsibilities: Develop and implement data models tailored to health data from wearable devices. Stay updated on industry trends and emerging technologies in health data analytics. Ensure data integrity and security throughout the analysis process. Identify patterns and correlations relevant to health metrics. Analyze large datasets to extract actionable insights using statistical methods and machine learning techniques. Develop, train, test, and deploy machine learning models for classification, regression, clustering, NLP, recommendation, or computer vision tasks. Collaborate with cross-functional teams including product, engineering, and domain experts to define problems and deliver solutions. Design and build scalable ML pipelines for model development and deployment. Conduct exploratory data analysis (EDA), data wrangling, feature engineering, and model validation. Monitor model performance in production and iterate based on feedback and data drift. Stay up to date with the latest research and trends in machine learning, deep learning, and AI. Document processes, code, and methodologies to ensure reproducibility and collaboration. Required Qualifications: Bachelor's or Master’s degree in Computer Science, Statistics, Mathematics, Engineering, or related field. 1.5–2 years of experience in data analysis, preferably within the health tech sector. Strong knowledge of Python or R and libraries such as NumPy, pandas, scikit-learn, TensorFlow, PyTorch, or XGBoost. 
Strong experience with data modeling, machine learning algorithms, and statistical analysis. Familiarity with health data privacy regulations (e.g., HIPAA) and data visualization tools (e.g., Tableau, Power BI). Proficiency in SQL and experience working with large-scale data systems (e.g., Spark, Hadoop, BigQuery, Snowflake). Ability to clearly communicate complex technical concepts to both technical and non-technical audiences. Experience with version control tools (e.g., Git) and ML pipeline tools (e.g., MLflow, Airflow, Kubeflow). Experience deploying models in cloud environments (AWS, GCP, Azure). Knowledge of NLP (e.g., Transformers, LLMs), computer vision, or reinforcement learning. Familiarity with MLOps, CI/CD for ML, and model monitoring tools. Experience - 1.5 - 2 years (Only Local Candidates) Location - Mohali Phase 8b
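Analyzing wearable-device data often begins with flagging outlier readings before modeling. A minimal sketch of a z-score anomaly flag over heart-rate samples is below; the values and the 2.5-standard-deviation threshold are illustrative, not a clinical rule.

```python
# Sketch of a z-score anomaly flag over wearable heart-rate samples,
# the kind of baseline sanity check done before feature engineering.
import statistics

def anomalies(samples, z_threshold=2.5):
    """Indices of samples more than z_threshold std-devs from the mean."""
    mean = statistics.fmean(samples)
    sd = statistics.pstdev(samples)
    return [i for i, v in enumerate(samples)
            if sd > 0 and abs(v - mean) / sd > z_threshold]

resting_hr = [62, 64, 61, 63, 65, 62, 118, 63, 64, 62]  # bpm, one spike
flagged = anomalies(resting_hr)
```

A production pipeline would compute the baseline over a rolling per-user window rather than a single batch, so the flag adapts as the wearer's resting rate changes.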

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


🚀 We're Hiring: PropTech Solutions Consultant 📍 Location: Hyderabad | 💼 Full-time | 🏠 Industry: Real Estate + Technology 💰 Salary: ₹6 – ₹10 LPA (Based on experience & expertise) 🔗 Apply Now | 💡 Empowering Real Estate through Innovation About Us: At MKT Praneet Homes (MPH Developers), we're revolutionizing the real estate industry by integrating technology, data, and innovation into every part of our process. We're not just selling properties — we're creating smart, tech-enabled experiences for buyers, sellers, and real estate professionals. As we grow, we’re looking for a PropTech Solutions Consultant who bridges the gap between cutting-edge tech and impactful real estate solutions. What You'll Do: As a PropTech Solutions Consultant, you’ll be the strategic link between IT and marketing teams, helping us design, implement, and scale technology solutions that drive growth, improve customer experience, and simplify operations. 🔹 Key Responsibilities: Analyze and optimize the end-to-end real estate customer journey using digital tools. Recommend, implement, and manage PropTech platforms such as: 🛠 CRM : Salesforce, Zoho CRM, HubSpot CRM 📲 Virtual Tours & Listing Tech : Matterport, MagicBricks Pro, Square Yards 📈 Analytics & Dashboards : Google Analytics, Power BI, Tableau 🔁 Marketing Automation : Mailchimp, ActiveCampaign, MoEngage 🧠 AI Chatbots & Lead Nurturing : Tars, Drift, Intercom 📍 Geo & Mapping Tools : Mappls, Google Maps API 🧰 Real Estate Portals & Syndication Tools : NoBrokerHood, 99acres Partner Tools, Housing.com Pro Collaborate with sales and marketing teams to align digital strategies with revenue goals. Train and support internal teams on tool adoption and performance tracking. Provide data-driven insights to optimize tech-enabled marketing and sales campaigns. Stay current with global PropTech innovations and evaluate tools for future use. 
What We’re Looking For: 🧠 Skills & Experience: 2–5 years of experience in a tech-enabled marketing, real estate technology, or business consulting role. Familiarity with real estate business processes, including lead generation, site visits, and post-sales engagement. Hands-on experience with at least 3–5 of the tools listed above. Strong communication skills to explain technical concepts to non-technical teams. Bonus: Experience with integration tools (Zapier, Make), CMS platforms (WordPress), or APIs. 💬 Soft Skills That Set You Apart: Tech-Savvy Communicator – You can simplify the complex and build bridges between tech and business. Problem Solver – You don’t just spot issues; you create smart, scalable solutions. Collaborative Mindset – You thrive in cross-functional teams and enjoy working with marketing, sales, and IT alike. Initiative-Driven – You take ownership and act proactively to drive digital transformation. Adaptable Learner – You’re curious, open to feedback, and always ready to learn new tools and trends. Detail-Oriented Thinker – You ensure smooth integrations, clean data, and flawless execution. Why Join Us? 🧩 Be a key player in our digital transformation journey. 🌱 Opportunity to work at the intersection of technology, marketing, and real estate. 💡 Work with cutting-edge PropTech and build solutions that make a real impact. 🎓 Continuous learning, leadership development, and innovation culture. 💰 Competitive salary, performance incentives, and growth roadmap. How to Apply: Send your resume and a short note on why you're excited about PropTech to sales@mphdevelopers.com.

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job description Job Title: MLOps Engineer Company: Aaizel International Technologies Pvt. Ltd. Location: Gurugram Experience Required: 6+ Years Employment Type: Full-Time About Aaizeltech Aaizeltech is a deep-tech company building AI/ML-powered platforms, scalable SaaS applications, and intelligent embedded systems. We are seeking a Senior MLOps Engineer to lead the architecture, deployment, automation, and scaling of infrastructure and ML systems across multiple product lines. Role Overview This role requires strong expertise and hands-on MLOps experience. You will architect and manage cloud infrastructure, CI/CD systems, Kubernetes clusters, and full ML pipelines—from data ingestion to deployment and drift monitoring. Key Responsibilities MLOps Responsibilities: Collaborate with data scientists to operationalize ML workflows. Build complete ML pipelines with Airflow, Kubeflow Pipelines, or Metaflow. Deploy models using KServe, Seldon Core, BentoML, TorchServe, or TF Serving. Package models into Docker containers using Flask, FastAPI, or Django for APIs. Automate dataset versioning & model tracking via DVC and MLflow. Set up model registries and ensure reproducibility and audit trails. Implement model monitoring for: (i) Data drift and schema validation (using tools like Evidently AI, Alibi Detect). (ii) Performance metrics (accuracy, precision, recall). (iii) Infrastructure metrics (latency, throughput, memory usage). Implement event-driven retraining workflows triggered by drift alerts or data freshness. Schedule GPU workloads on Kubernetes and manage resource utilization for ML jobs. Design and manage secure, scalable infrastructure using AWS, GCP, or Azure. Build and maintain CI/CD pipelines using Jenkins, GitLab CI, GitHub Actions, or AWS DevOps. Write and manage Infrastructure as Code using Terraform, Pulumi, or CloudFormation. Automate configuration management with Ansible, Chef, or SaltStack. 
Manage Docker containers and advanced Kubernetes resources (Helm, StatefulSets, CRDs, DaemonSets). Implement robust monitoring and alerting stacks: Prometheus, Grafana, CloudWatch, Datadog, ELK, or Loki. Must-Have Skills Advanced expertise in Linux administration, networking, and shell scripting. Strong knowledge of Docker, Kubernetes, and container security. Hands-on experience with IaC tools like Terraform and configuration management like Ansible. Proficient in cloud-native services: IAM, EC2, EKS/GKE/AKS, S3, VPCs, Load Balancing, Secrets Manager. Mastery of CI/CD tools (e.g., Jenkins, GitLab, GitHub Actions). Familiarity with SaaS architecture, distributed systems, and multi-env deployments. Proficiency in Python for scripting and ML-related deployments. Experience integrating monitoring, alerting, and incident management workflows. Strong understanding of DevSecOps, security scans (e.g., Trivy, SonarQube, Snyk) and secrets management tools (Vault, SOPS). Experience with GPU orchestration and hybrid on-prem + cloud environments. Nice-to-Have Skills Knowledge of GitOps workflows (e.g., ArgoCD, FluxCD). Experience with Vertex AI, SageMaker Pipelines, or Triton Inference Server. Familiarity with Knative, Cloud Run, or serverless ML deployments. Exposure to cost estimation, rightsizing, and usage-based autoscaling. Understanding of ISO 27001, SOC2, or GDPR-compliant ML deployments. Knowledge of RBAC for Kubernetes and ML pipelines. Who You'll Work With AI/ML Engineers, Backend Developers, Frontend Developers, QA Team Product Owners, Project Managers, and external Government or Enterprise Clients How to Apply If you are passionate about embedded systems and excited to work on next-generation technologies, we would love to hear from you. Please send your resume and a cover letter outlining your relevant experience to hr@aaizeltech.com or bhavik@aaizeltech.com or anju@aaizeltech.com (Contact No- 7302201247) Show more Show less
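The data-drift monitoring responsibility above (the kind of check Evidently AI or Alibi Detect automate) can be sketched as a hand-rolled Population Stability Index; the bin count, smoothing, and 0.25 threshold are common rules of thumb, not anything specified in the listing:

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.
    Rule of thumb: PSI < 0.1 means little drift, > 0.25 significant drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def bin_fracs(sample):
        counts = [0] * bins
        for x in sample:
            i = min(max(int((x - lo) / width), 0), bins - 1)
            counts[i] += 1
        # Smooth empty bins so the log term stays finite.
        return [(c or 0.5) / len(sample) for c in counts]

    e, a = bin_fracs(expected), bin_fracs(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(1000)]       # training-time feature values
drifted = [0.5 + i / 200 for i in range(1000)]  # shifted production values

print(psi(baseline, baseline))        # 0.0: identical distributions
print(psi(baseline, drifted) > 0.25)  # True: flag for retraining
```

In a monitoring job, a score above the threshold would raise the drift alert that feeds the retraining workflow mentioned in the responsibilities.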

Posted 2 weeks ago

Apply

5.0 years

15 Lacs

Calicut

On-site

Key Responsibilities
● Prompt Design: Craft and continuously improve prompts for OpenAI, Anthropic, and other foundation models using few-shot, chain-of-thought, context-tuning, and other techniques for text-analytics and reasoning use cases.
● Evals & Experiments: Develop an evals strategy and build automated eval suites (precision, recall, cost, downstream impact) that run in CI and in production. Build and maintain datasets.
● Prompt Library Management: Stand up a versioned prompt repo, integrate context-injection patterns, and automate rollout/rollback.
● Drift & Performance Monitoring: Detect and guard against context or model shifts.
● Context Injection: Use retrieval-augmented generation (RAG) and vector search to inject context and generate contextually accurate, grounded model responses. Use MCP to manage how context is assembled, updated, and passed to the LLM.

Qualifications
● 5+ years of AI software experience. Experience with pre-LLM AI tech counts.
● Proven success using LLMs for text analytics or reasoning (not chatbots, style transfer, or safety tuning alone).
● Mastery of prompt-engineering techniques (few-shot, CoT, context, etc.) and hands-on experience with OpenAI / Anthropic APIs.
● Experience with fine-tuning or adapting foundation models via RLHF, instruction tuning, or domain-specific datasets.
● Proven experience using LLMs via APIs or local deployment (including OpenAI, Claude, Llama).
● Experience building evaluation pipelines for production-scale implementations using toolkits like AWS Bedrock.
● Strong eval chops: comfortable building custom benchmarks or using tools like OpenAI Evals, Braintrust, etc.
● Solid Python, plus the engineering rigor to wire up automated eval pipelines, data viewers, and model-selection logic.
● Strong written and verbal communicator who can explain trade-offs to both engineers and product leaders.

Job Type: Full-time
Pay: Up to ₹1,500,000.00 per year
Benefits: Provident Fund
Schedule: Morning shift
Supplemental Pay: Performance bonus
Experience:
  • LLM: 4 years (Preferred)
  • LLMs for text analytics or reasoning: 3 years (Preferred)
  • AI: 2 years (Preferred)
Work Location: In person
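The automated eval suites described above can be sketched as a small CI gate: score a model against a labeled set and compute precision/recall for one target class. The labeled examples and `stub_model` are invented for illustration; a real suite would call the OpenAI or Anthropic API with the prompt under test:

```python
# Toy labeled set for a ticket-classification eval (illustrative data).
LABELED_SET = [
    ("Refund was never processed", "complaint"),
    ("Love the new dashboard!", "praise"),
    ("How do I reset my password?", "question"),
    ("This is the third outage this week", "complaint"),
]

def stub_model(prompt: str, text: str) -> str:
    # Stand-in for an LLM API call; keys on obvious surface cues only.
    if "?" in text:
        return "question"
    if any(w in text.lower() for w in ("never", "outage", "broken")):
        return "complaint"
    return "praise"

def evaluate(prompt: str, target: str = "complaint"):
    tp = fp = fn = 0
    for text, gold in LABELED_SET:
        pred = stub_model(prompt, text)
        if pred == target and gold == target:
            tp += 1
        elif pred == target:
            fp += 1
        elif gold == target:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

precision, recall = evaluate("Classify the ticket: {text}")
print(precision, recall)
```

In CI, a new prompt version would be blocked from rollout when precision or recall falls below an agreed floor.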

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Core Purpose @ Tyroo
To create a positive impact in the world by helping businesses scale anywhere, be more successful, compounding wealth and generating resources so the communities they serve experience dignity and respect.

Problem Statement
APAC is poised to contribute 30% of the global digital ad market, yet digital media businesses struggle to scale it beyond 5% of their global revenue due to a lack of effective in-market sales and ad-product localisation capabilities. At Tyroo, we are building the largest monetisation platform for digital media in APAC by 2027. Tyroo is the preferred APAC market-entry and expansion partner for global internet companies looking to grow in Asia. We currently partner with large internet companies (Commission Junction, Outbrain, Criteo, Zemanta, TCL, Phillips, Hisense, etc.) through exclusive monetisation or technology relationships. Are you ready to join a fast-growing, hyper-focused company?

Core Customers
- The digital media businesses we help monetize and make successful, i.e. our partners and publishers, are our core customers.
- Publisher NPS is our north-star metric, which we aim to enhance and improve every single day.
- Key markets for our growth are spread across Greater China, Korea, Japan, ANZ, South East Asia, the Middle East (KSA & UAE), and India, with active focused teams managed from our Singapore HQ.

Reporting to: CEO (with dotted line to COO)
Function: Cross-functional Strategy, Ops Cadence, Revenue Program Management
Level: 3–6 years experience

About The Role
As Strategic Ops Lead, you'll work directly with the CEO and leadership team to drive business rhythm, ensure execution of key strategic initiatives, and translate high-level goals into measurable outcomes across teams. You'll be the connective tissue between strategy and execution, bringing structure, clarity, and accountability to how Tyroo scales. This role is ideal for someone who has worked in strategy consulting, business operations, or founder's-office roles and is now looking to build and scale high-leverage commercial initiatives in a fast-moving adtech business.

Key Responsibilities
1. Strategic Cadence + Board Alignment
- Own the KPI dashboard tied to the board-approved AOP (input/output tracking)
- Run weekly/monthly business reviews with leadership and BD teams
- Prepare strategic content, analysis, and talking points for CEO/Board updates
2. Initiative Execution & PMO
- Drive cross-functional execution of strategic priorities (e.g. India GTM, Tyroo.TV launch, CJ setup)
- Track, escalate, and unblock key projects with clear owners + timelines
- Build systems to reduce execution drift and increase internal accountability
3. Commercial Programs + Rev Ops
- Work with revenue teams (CJ, Tyroo, TV) to identify performance gaps and interventions
- Partner with Finance/RevOps to turn reporting into insights and actions
- Coordinate sales & publisher BD priorities against AMJ and JAS targets
4. Strategic Support to CEO & COO
- Be a thought partner in refining GTM models, market-entry plans, and strategic bets
- Support modeling, narrative-building, and external communication
- Operate like a one-person SWAT team when needed for high-impact initiatives

Ideal Profile
- 3–6 years in strategy consulting, business operations, or founder's-office roles
- Strong analytical + communication skills; high comfort with dashboards & decks
- Structured thinker who loves ambiguity but knows how to bring clarity
- Proven ability to run cadences, cross-team alignment, and internal PMO
- High-trust profile: confidentiality, speed, and CEO-level proximity experience
- Bonus: experience in adtech, affiliate, or B2B SaaS environments

Why Would You Apply?
- High visibility: work directly with the founder/CEO + exec team
- Strategic exposure: own projects that move revenue, product, and market entry
- Career growth: step into a leadership-pipeline role in one of the fastest-growing adtech businesses in Asia

Interview Process
First Round - HR: culture fitment, skills evaluation
Project Round - Strategy note, problem-statement modelling
Final Round - Founders: interaction/discussion on the project round

Posted 2 weeks ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Job description
Job Title: MLOps Engineer
Company: Aaizel International Technologies Pvt. Ltd.
Location: On Site
Experience Required: 6+ Years
Employment Type: Full-Time

About Aaizeltech
Aaizeltech is a deep-tech company building AI/ML-powered platforms, scalable SaaS applications, and intelligent embedded systems. We are seeking a Senior MLOps Engineer to lead the architecture, deployment, automation, and scaling of infrastructure and ML systems across multiple product lines.

Role Overview
This role requires strong expertise and hands-on MLOps experience. You will architect and manage cloud infrastructure, CI/CD systems, Kubernetes clusters, and full ML pipelines, from data ingestion to deployment and drift monitoring.

Key Responsibilities
  • Collaborate with data scientists to operationalize ML workflows.
  • Build complete ML pipelines with Airflow, Kubeflow Pipelines, or Metaflow.
  • Deploy models using KServe, Seldon Core, BentoML, TorchServe, or TF Serving.
  • Package models into Docker containers, exposing APIs via Flask, FastAPI, or Django.
  • Automate dataset versioning and model tracking via DVC and MLflow.
  • Set up model registries and ensure reproducibility and audit trails.
  • Implement model monitoring for: (i) data drift and schema validation (using tools like Evidently AI, Alibi Detect); (ii) performance metrics (accuracy, precision, recall); (iii) infrastructure metrics (latency, throughput, memory usage).
  • Implement event-driven retraining workflows triggered by drift alerts or data freshness.
  • Schedule GPU workloads on Kubernetes and manage resource utilization for ML jobs.
  • Design and manage secure, scalable infrastructure using AWS, GCP, or Azure.
  • Build and maintain CI/CD pipelines using Jenkins, GitLab CI, GitHub Actions, or AWS DevOps.
  • Write and manage Infrastructure as Code using Terraform, Pulumi, or CloudFormation.
  • Automate configuration management with Ansible, Chef, or SaltStack.
  • Manage Docker containers and advanced Kubernetes resources (Helm, StatefulSets, CRDs, DaemonSets).
  • Implement robust monitoring and alerting stacks: Prometheus, Grafana, CloudWatch, Datadog, ELK, or Loki.

Must-Have Skills
  • Advanced expertise in Linux administration, networking, and shell scripting.
  • Strong knowledge of Docker, Kubernetes, and container security.
  • Hands-on experience with IaC tools like Terraform and configuration management tools like Ansible.
  • Proficiency in cloud-native services: IAM, EC2, EKS/GKE/AKS, S3, VPCs, Load Balancing, Secrets Manager.
  • Mastery of CI/CD tools (e.g., Jenkins, GitLab, GitHub Actions).
  • Familiarity with SaaS architecture, distributed systems, and multi-environment deployments.
  • Proficiency in Python for scripting and ML-related deployments.
  • Experience integrating monitoring, alerting, and incident management workflows.
  • Strong understanding of DevSecOps, security scanning (e.g., Trivy, SonarQube, Snyk), and secrets management tools (Vault, SOPS).
  • Experience with GPU orchestration and hybrid on-prem + cloud environments.

Nice-to-Have Skills
  • Knowledge of GitOps workflows (e.g., ArgoCD, FluxCD).
  • Experience with Vertex AI, SageMaker Pipelines, or Triton Inference Server.
  • Familiarity with Knative, Cloud Run, or serverless ML deployments.
  • Exposure to cost estimation, rightsizing, and usage-based autoscaling.
  • Understanding of ISO 27001, SOC 2, or GDPR-compliant ML deployments.
  • Knowledge of RBAC for Kubernetes and ML pipelines.

Who You'll Work With
AI/ML Engineers, Backend Developers, Frontend Developers, the QA team, Product Owners, Project Managers, and external Government or Enterprise clients.

How to Apply
If you are passionate about MLOps and excited to work on next-generation technologies, we would love to hear from you. Please send your resume and a cover letter outlining your relevant experience to hr@aaizeltech.com, bhavik@aaizeltech.com, or anju@aaizeltech.com (Contact No: 7302201247).
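The event-driven retraining workflow listed above can be sketched as a simple decision hook, assuming a drift score (e.g. a PSI from the monitoring job) and a data-freshness signal arrive from the monitoring stack; the thresholds and the `on_alert` name are illustrative, not from the listing:

```python
from dataclasses import dataclass

@dataclass
class ModelHealth:
    drift_score: float          # e.g. PSI emitted by the monitoring job
    hours_since_refresh: float  # time since the model last saw fresh data

def should_retrain(h: ModelHealth,
                   drift_threshold: float = 0.25,
                   max_staleness_hours: float = 24.0) -> bool:
    # Retrain on significant drift OR stale training data.
    return (h.drift_score > drift_threshold
            or h.hours_since_refresh > max_staleness_hours)

def on_alert(h: ModelHealth) -> str:
    if should_retrain(h):
        # In a real pipeline this would submit an Airflow DAG run
        # or a Kubeflow pipeline instead of returning a string.
        return "retrain-triggered"
    return "no-op"

print(on_alert(ModelHealth(drift_score=0.40, hours_since_refresh=2.0)))
print(on_alert(ModelHealth(drift_score=0.05, hours_since_refresh=3.0)))
```

Wiring this to a queue or webhook gives the "triggered by drift alerts or data freshness" behavior the listing describes.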

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

India

Remote

Linkedin logo

Job Post: AI/ML Engineer
Experience: 4+ years
Location: Remote

Key Responsibilities:
  • Design, build, and maintain ML infrastructure on GCP using tools such as Vertex AI, GKE, Dataflow, BigQuery, and Cloud Functions.
  • Develop and automate ML pipelines for model training, validation, deployment, and monitoring using tools like Kubeflow Pipelines, TFX, or Vertex AI Pipelines.
  • Work with data scientists to productionize ML models and support experimentation workflows.
  • Implement model monitoring and alerting for drift, performance degradation, and data quality issues.
  • Manage and scale containerized ML workloads using Kubernetes (GKE) and Docker.
  • Set up CI/CD workflows for ML using tools like Cloud Build, Bitbucket, Jenkins, or similar.
  • Ensure proper security, versioning, and compliance across the ML lifecycle.
  • Maintain documentation, artifacts, and reusable templates for reproducibility and auditability.

A GCP Machine Learning Engineer certification is a plus.
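The train/validate/deploy automation this role covers can be sketched as a toy pipeline with a quality gate; in practice each function would be a Vertex AI Pipelines or Kubeflow component, and the mean-predictor "model" and MAE threshold here are purely illustrative:

```python
def train(data):
    # Toy "model": predict the mean of the training values.
    mean = sum(data) / len(data)
    return {"predict": lambda _x: mean}

def validate(model, holdout, max_mae=1.0):
    # Quality gate: block deployment when holdout error is too high.
    mae = sum(abs(model["predict"](x) - x) for x in holdout) / len(holdout)
    return mae <= max_mae, mae

def deploy(model, registry):
    # Stand-in for pushing to a model registry / serving endpoint.
    registry.append(model)
    return "deployed"

registry = []
model = train([2.0, 2.5, 3.0])
ok, mae = validate(model, [2.4, 2.6])
status = deploy(model, registry) if ok else "blocked"
print(status, round(mae, 2))
```

Orchestrators add the scheduling, caching, and lineage around exactly this shape of train → validate → deploy graph.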

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Linkedin logo

Job Title: AI/ML Developer (5 Years' Experience)
Location: Remote
Job Type: Full-time
Experience: 5 years

Job Summary
We are looking for an experienced AI/ML Developer with at least 5 years of hands-on experience in designing, developing, and deploying machine learning models and AI-driven solutions. The ideal candidate should have strong knowledge of machine learning algorithms, data preprocessing, and model evaluation, plus experience with production-level ML pipelines.

Key Responsibilities
  • Model Development: Design, develop, train, and optimize machine learning and deep learning models for classification, regression, clustering, recommendation, NLP, or computer vision tasks.
  • Data Engineering: Work with data scientists and engineers to preprocess, clean, and transform structured and unstructured datasets.
  • ML Pipelines: Build and maintain scalable ML pipelines using tools such as MLflow, Kubeflow, Airflow, or SageMaker.
  • Deployment: Deploy ML models into production using REST APIs, containers (Docker), or cloud services (AWS/GCP/Azure).
  • Monitoring and Maintenance: Monitor model performance and implement retraining pipelines or drift-detection techniques.
  • Collaboration: Work cross-functionally with data scientists, software engineers, and product managers to integrate AI capabilities into applications.
  • Research and Innovation: Stay current with the latest advancements in AI/ML and recommend new techniques or tools where applicable.

Required Skills & Qualifications
  • Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
  • Minimum 5 years of experience in AI/ML development.
  • Proficiency in Python and ML libraries such as Scikit-learn, TensorFlow, PyTorch, XGBoost, or LightGBM.
  • Strong understanding of statistics, data structures, and ML/DL algorithms.
  • Experience with cloud platforms (AWS/GCP/Azure) and deploying ML models in production.
  • Experience with CI/CD tools and containerization (Docker, Kubernetes).
  • Familiarity with SQL and NoSQL databases.
  • Excellent problem-solving and communication skills.

Preferred Qualifications
  • Experience with NLP frameworks (e.g., Hugging Face Transformers, spaCy, NLTK).
  • Knowledge of MLOps best practices and tools.
  • Experience with version control systems like Git.
  • Familiarity with big data technologies (Spark, Hadoop).
  • Contributions to open-source AI/ML projects or publications in relevant fields.
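The "monitor model performance" responsibility above can be sketched as a rolling-accuracy monitor that flags degradation against a baseline; the window size and tolerance are illustrative assumptions:

```python
from collections import deque

class AccuracyMonitor:
    """Flags degradation when rolling accuracy falls below baseline - tolerance."""

    def __init__(self, baseline_acc: float, window: int = 100,
                 tolerance: float = 0.05):
        self.baseline = baseline_acc
        self.tolerance = tolerance
        self.hits = deque(maxlen=window)  # 1 = correct prediction, 0 = wrong

    def record(self, correct: bool) -> None:
        self.hits.append(1 if correct else 0)

    def degraded(self) -> bool:
        if not self.hits:
            return False
        acc = sum(self.hits) / len(self.hits)
        return acc < self.baseline - self.tolerance

mon = AccuracyMonitor(baseline_acc=0.90, window=10)
for _ in range(10):
    mon.record(True)
print(mon.degraded())   # False: rolling accuracy is 1.0
for _ in range(5):
    mon.record(False)
print(mon.degraded())   # True: rolling accuracy dropped to 0.5
```

A `degraded()` signal would typically feed the same retraining pipeline that drift detection triggers.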

Posted 2 weeks ago

Apply

2.0 - 4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Job Description
Navtech is looking for an AI/ML Engineer to join our growing data science and machine learning team. In this role, you will be responsible for building, deploying, and maintaining machine learning models and pipelines that power intelligent products and data-driven decisions.

Working as an AI/ML Engineer at Navtech, you will:
  • Design, develop, and deploy machine learning models for classification, regression, clustering, recommendations, or NLP tasks.
  • Clean, preprocess, and analyze large datasets to extract meaningful insights and features.
  • Work closely with data engineers to develop scalable and reliable data pipelines.
  • Experiment with different algorithms and techniques to improve model performance.
  • Monitor and maintain production ML models, including retraining and model-drift detection.
  • Collaborate with software engineers to integrate ML models into applications and services.
  • Document processes, experiments, and decisions for reproducibility and transparency.
  • Stay current with the latest research and trends in machine learning and AI.

Who Are We Looking for, Exactly?
  • 2-4 years of hands-on experience in building and deploying ML models in real-world applications.
  • Strong knowledge of Python and ML libraries such as Scikit-learn, TensorFlow, PyTorch, XGBoost, or similar.
  • Experience with data preprocessing, feature engineering, and model evaluation techniques.
  • Solid understanding of ML concepts such as supervised and unsupervised learning, overfitting, regularization, etc.
  • Experience working with Jupyter, pandas, NumPy, and visualization libraries like Matplotlib or Seaborn.
  • Familiarity with version control (Git) and basic software engineering practices.
  • Consistently strong verbal and written communication skills, as well as strong analytical and problem-solving abilities.
  • A Master's or Bachelor's (BS) degree in Computer Science, Software Engineering, IT, Technology Management, or a related field, with education throughout in English medium.

We'll REALLY Love You If You:
  • Have knowledge of cloud platforms (AWS, Azure, GCP) and ML services (SageMaker, Vertex AI, etc.).
  • Have knowledge of GenAI prompting and hosting of LLMs.
  • Have experience with NLP libraries (spaCy, Hugging Face Transformers, NLTK).
  • Have familiarity with MLOps tools and practices (MLflow, DVC, Kubeflow, etc.).
  • Have exposure to deep learning and neural network architectures.
  • Have knowledge of REST APIs and how to serve ML models (e.g., Flask, FastAPI, Docker).

Why Navtech?
  • Performance review and appraisal twice a year.
  • Competitive pay package with additional bonus & benefits.
  • Work with US, UK & Europe based industry-renowned clients for exponential technical growth.
  • Medical insurance cover for self & immediate family.
  • Work with a culturally diverse team.

About Us
Navtech is a premier IT software and services provider. Navtech's mission is to increase public cloud adoption and build cloud-first solutions that become trendsetting platforms of the future. We have been recognized as the Best Cloud Service Provider at GoodFirms for ensuring good results with quality services. Here, we strive to innovate and push technology and service boundaries to provide best-in-class technology solutions to our clients at scale. We deliver to our clients globally from our state-of-the-art design and development centers in the US & Hyderabad. We're a fast-growing company with clients in the United States, UK, and Europe. We are also a certified AWS partner. You will join a team of talented developers, quality engineers, and product managers whose mission is to impact more than 100 million people across the world with technological services by the year 2030. (ref:hirist.tech)
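As a tiny example of the data-preprocessing and feature-engineering work mentioned above: a z-score scaler fit on training data only and reused at serving time, which avoids train/serve skew. The numbers are toy values:

```python
def fit_scaler(column):
    """Compute mean and (population) standard deviation on training data."""
    n = len(column)
    mean = sum(column) / n
    var = sum((x - mean) ** 2 for x in column) / n
    std = var ** 0.5 or 1.0  # guard against zero-variance columns
    return mean, std

def transform(column, mean, std):
    """Apply the *training-time* statistics to any batch, old or new."""
    return [(x - mean) / std for x in column]

train_col = [10.0, 12.0, 14.0]
mean, std = fit_scaler(train_col)

scaled_train = transform(train_col, mean, std)  # centered around 0
scaled_new = transform([16.0], mean, std)       # new data, same statistics
print([round(v, 2) for v in scaled_train], round(scaled_new[0], 2))
```

The key point is that `fit_scaler` never sees serving data; scikit-learn's `StandardScaler.fit`/`transform` split encodes the same discipline.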

Posted 2 weeks ago

Apply

9.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Job Description
We are seeking a highly experienced and innovative Senior Data Engineer with a strong background in hybrid cloud data integration, pipeline orchestration, and AI-driven data modeling. This role is responsible for designing, building, and optimizing robust, scalable, production-ready data pipelines across both AWS and Azure platforms, supporting modern data architectures such as CEDM and Data Vault 2.0.

Responsibilities
  • Design and develop hybrid ETL/ELT pipelines using AWS Glue and Azure Data Factory (ADF).
  • Process files from AWS S3 and Azure Data Lake Gen2, including schema validation and data profiling.
  • Implement event-based orchestration using AWS Step Functions and Apache Airflow (Astronomer).
  • Develop and maintain bronze → silver → gold data layers using DBT or Coalesce.
  • Create scalable ingestion workflows using Airbyte, AWS Transfer Family, and Rivery.
  • Integrate with metadata and lineage tools like Unity Catalog and OpenMetadata.
  • Build reusable components for schema enforcement, EDA, and alerting (e.g., MS Teams).
  • Work closely with QA teams to integrate test automation and ensure data quality.
  • Collaborate with cross-functional teams, including data scientists and business stakeholders, to align solutions with AI/ML use cases.
  • Document architectures, pipelines, and workflows for internal stakeholders.

Requirements
Essential Skills (Job):
  • Experience with cloud platforms: AWS (Glue, Step Functions, Lambda, S3, CloudWatch, SNS, Transfer Family) and Azure (ADF, ADLS Gen2, Azure Functions, Event Grid).
  • Skilled in transformation and ELT tools: Databricks (PySpark), DBT, Coalesce, and Python.
  • Proficient in data ingestion using Airbyte, Rivery, SFTP/Excel files, and SQL Server extracts.
  • Strong understanding of data modeling techniques, including CEDM, Data Vault 2.0, and dimensional modeling.
  • Hands-on experience with orchestration tools such as AWS Step Functions, Airflow (Astronomer), and ADF triggers.
  • Expertise in monitoring and logging with CloudWatch, AWS Glue metrics, MS Teams alerts, and Azure Data Explorer (ADX).
  • Familiar with data governance and lineage tools: Unity Catalog, OpenMetadata, and schema drift detection.
  • Proficient in version control and CI/CD using GitHub, Azure DevOps, CloudFormation, Terraform, and ARM templates.
  • Experienced in data validation and exploratory data analysis with pandas profiling, AWS Glue Data Quality, and Great Expectations.

Essential Skills (Personal):
  • Excellent communication and interpersonal skills, with the ability to engage with teams.
  • Strong problem-solving, decision-making, and conflict-resolution abilities.
  • Proven ability to work independently and lead cross-functional teams.
  • Ability to work in a fast-paced, dynamic environment and handle sensitive issues with discretion and professionalism.
  • Ability to maintain confidentiality and handle sensitive information with attention to detail and discretion.
  • Strong work ethic and trustworthiness.
  • Highly collaborative and team-oriented, with a commitment to excellence.

Preferred Skills (Job):
  • Proficiency in SQL and at least one programming language (e.g., Python, Scala).
  • Experience with cloud data platforms (e.g., AWS, Azure, GCP) and their data and AI services.
  • Knowledge of ETL tools and frameworks (e.g., Apache NiFi, Talend, Informatica).
  • Deep understanding of AI and generative-AI concepts and frameworks (e.g., TensorFlow, PyTorch, Hugging Face, OpenAI APIs).
  • Experience with data modeling, data structures, and database design.
  • Proficiency with data warehousing solutions (e.g., Redshift, BigQuery, Snowflake).
  • Hands-on experience with big data technologies (e.g., Hadoop, Spark, Kafka).

Preferred Skills (Personal):
  • Demonstrates proactive thinking.
  • Strong interpersonal relations, expert business acumen, and mentoring skills.
  • Ability to work under stringent deadlines and demanding client conditions.
  • Ability to work under pressure to achieve multiple daily deadlines for client deliverables, with a mature approach.

Other Relevant Information
  • Bachelor's in Engineering with specialization in Computer Science, Artificial Intelligence, Information Technology, or a related field.
  • 9+ years of experience in data engineering and data architecture.

LeewayHertz is an equal opportunity employer and does not discriminate based on race, color, religion, sex, age, disability, national origin, sexual orientation, gender identity, or any other protected status. We encourage a diverse range of applicants.
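The schema validation and schema-drift detection mentioned above (the kind of gate Great Expectations or AWS Glue Data Quality provide) can be sketched as a contract check on an incoming batch; the `EXPECTED_SCHEMA` and sample rows are hypothetical:

```python
# Hypothetical bronze-layer contract: column name -> expected Python type.
EXPECTED_SCHEMA = {"order_id": int, "amount": float, "country": str}

def schema_violations(rows):
    """Return (row_index, message) pairs for every contract violation."""
    issues = []
    for i, row in enumerate(rows):
        missing = EXPECTED_SCHEMA.keys() - row.keys()
        extra = row.keys() - EXPECTED_SCHEMA.keys()
        if missing:
            issues.append((i, f"missing columns: {sorted(missing)}"))
        if extra:
            issues.append((i, f"unexpected columns: {sorted(extra)}"))
        for col, typ in EXPECTED_SCHEMA.items():
            if col in row and not isinstance(row[col], typ):
                issues.append((i, f"{col}: expected {typ.__name__}"))
    return issues

good = {"order_id": 1, "amount": 9.99, "country": "IN"}
bad = {"order_id": "2", "amount": 5.0}  # wrong type + missing column

print(schema_violations([good]))      # clean batch: no issues
print(len(schema_violations([bad])))  # 2 violations
```

In a pipeline, a non-empty result would quarantine the batch and raise the MS Teams alert described in the responsibilities; persistent "unexpected columns" findings are exactly what schema-drift detection surfaces.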

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

The Applications Development Senior Programmer Analyst is an intermediate-level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities:
  • Mathematics & Statistics: Advanced knowledge of probability, statistics, and linear algebra. Expertise in statistical modelling, hypothesis testing, and experimental design.
  • Machine Learning and AI: 4+ years of hands-on experience with GenAI applications using the RAG approach, vector databases, and LLMs.
  • Hands-on experience with LLMs (Google Gemini, OpenAI, Llama, etc.), LangChain, LlamaIndex for context-augmented generative AI, Hugging Face Transformers, knowledge graphs, and vector databases.
  • Advanced knowledge of RAG techniques is required, including expertise in hybrid search methods, multi-vector retrieval, Hypothetical Document Embeddings (HyDE), self-querying, query expansion, re-ranking, and relevance filtering.
  • Strong proficiency in Python and deep learning frameworks such as TensorFlow, PyTorch, scikit-learn, SciPy, and pandas, plus high-level APIs like Keras, is essential.
  • Advanced NLP skills, including Named Entity Recognition (NER), dependency parsing, text classification, and topic modeling.
  • In-depth experience with supervised, unsupervised, and reinforcement learning algorithms.
  • Proficiency with machine learning libraries and frameworks (e.g., scikit-learn, TensorFlow, PyTorch).
  • Knowledge of deep learning and natural language processing (NLP).
  • Hands-on experience with feature engineering and exploratory data analysis.
  • Familiarity and experience with explainable AI, model monitoring, and data/model drift.
  • Proficiency in programming languages such as Python.
  • Experience with relational (SQL) and vector databases.
  • Skilled in data wrangling, cleaning, and preprocessing large datasets.
  • Experience with natural language processing (NLP) and natural language generation (NLG).

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
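The retrieval step of the RAG approach described above can be sketched end to end with toy bag-of-words vectors and cosine similarity; a production system would use learned embeddings and a vector database, and the `DOCS` content here is invented sample text:

```python
import math
from collections import Counter

# Invented knowledge-base snippets standing in for real indexed documents.
DOCS = [
    "The settlement cycle for equities is T+1 in the US.",
    "Credit card disputes must be filed within 60 days.",
    "Wire transfers over the limit require manual review.",
]

def vectorize(text):
    # Toy bag-of-words "embedding"; real RAG uses a learned encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    qv = vectorize(query)
    ranked = sorted(DOCS, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

context = retrieve("When is the settlement cycle?")[0]
prompt = f"Answer using only this context:\n{context}\n\nQ: When is the settlement cycle?"
print(context)
```

The retrieved snippet is injected into the prompt so the model answers from grounded context rather than parametric memory; hybrid search, re-ranking, and relevance filtering from the listing all refine this same retrieve-then-inject loop.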

Posted 2 weeks ago

Apply

Exploring Drift Jobs in India

The drift job market in India is rapidly growing, with an increasing demand for professionals skilled in this area. Drift professionals are sought after by companies looking to enhance their customer service and engagement through conversational marketing.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi
  4. Hyderabad
  5. Pune

Average Salary Range

The average salary range for drift professionals in India varies based on experience levels. Entry-level professionals can expect to earn around INR 4-6 lakhs per annum, while experienced professionals with several years of experience can earn upwards of INR 10 lakhs per annum.

Career Path

A typical career path in the drift domain may progress from roles such as Junior Drift Specialist or Drift Consultant to Senior Drift Specialist, Drift Manager, and eventually reaching the position of Drift Director or Head of Drift Operations.

Related Skills

In addition to expertise in drift, professionals in this field are often expected to have skills in customer service, marketing automation, chatbot development, and data analytics.

Interview Questions

  • What is conversational marketing? (basic)
  • How would you handle a customer complaint through a drift chatbot? (medium)
  • Can you explain a scenario where you successfully implemented drift for a client? (medium)
  • What are some common challenges faced in drift implementation and how do you overcome them? (advanced)
  • How do you measure the success of a drift campaign? (medium)
  • Explain the importance of personalization in drift marketing. (medium)
  • How do you ensure compliance with data privacy regulations when using drift? (advanced)
  • What strategies would you implement to increase customer engagement through drift? (medium)
  • Can you provide examples of drift integrations with other marketing tools? (advanced)
  • How do you stay updated on the latest trends and developments in drift technology? (basic)
  • Describe a situation where you had to troubleshoot a technical issue in a drift chatbot. (medium)
  • How do you handle leads generated through drift to ensure conversion? (medium)
  • What are some best practices for setting up drift playbooks? (medium)
  • How do you customize drift for different target audiences? (medium)
  • Explain the difference between drift and traditional marketing methods. (basic)
  • Can you give an example of a successful drift campaign you were involved in? (medium)
  • How do you ensure a seamless transition between drift and human agents in customer interactions? (medium)
  • What metrics do you track to measure the effectiveness of a drift chatbot? (medium)
  • How do you handle negative feedback received through drift interactions? (medium)
  • What are the key components of a successful drift strategy? (medium)
  • How do you handle a high volume of customer inquiries through drift? (medium)
  • Explain the role of AI in drift marketing. (medium)
  • How do you ensure that drift chatbots are providing accurate information to customers? (medium)
  • Describe a situation where you had to customize drift to meet specific client requirements. (advanced)

Closing Remark

As you prepare for a career in drift jobs in India, remember to showcase your expertise, experience, and passion for conversational marketing. Stay updated on industry trends and technologies to stand out in the competitive job market. Best of luck in your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies