2.0 - 5.0 years
4 - 8 Lacs
Hyderābād
On-site
Must-Have Skills & Traits

Core Engineering
- Advanced Python skills with a strong grasp of clean, modular, and maintainable code practices
- Experience building production-ready backend services using frameworks like FastAPI, Flask, or Django
- Strong understanding of software architecture, including RESTful API design, modularity, testing, and versioning
- Experience working with databases (SQL/NoSQL), caching layers, and background job queues

AI/ML & GenAI Expertise
- Hands-on experience with machine learning workflows: data preprocessing, model training, evaluation, and deployment
- Practical experience with LLMs and GenAI tools such as OpenAI APIs, Hugging Face, LangChain, or Transformers
- Understanding of how to integrate LLMs into applications through prompt engineering, retrieval-augmented generation (RAG), and vector search
- Comfortable working with unstructured data (text, images) in real-world product environments
- Bonus: experience with model fine-tuning, evaluation metrics, or vector databases like FAISS, Pinecone, or Weaviate

Ownership & Execution
- Demonstrated ability to take full ownership of features or modules from architecture to delivery
- Able to work independently in ambiguous situations and drive solutions with minimal guidance
- Experience collaborating cross-functionally with designers, PMs, and other engineers to deliver user-focused solutions
- Strong debugging, systems-thinking, and decision-making skills with an eye toward scalability and performance

Nice-to-Have Skills
- Experience in startup or fast-paced product environments
- 2-5 years of relevant experience
- Familiarity with asynchronous programming patterns in Python
- Exposure to event-driven architecture and tools such as Kafka, RabbitMQ, or AWS EventBridge
- Data science exposure: exploratory data analysis (EDA), statistical modeling, or experimentation
- Built or contributed to agentic systems, ML/AI pipelines, or intelligent automation tools
- Understanding of MLOps: model deployment, monitoring, drift detection, or retraining pipelines
- Frontend familiarity (React, Tailwind) for prototyping or contributing to full-stack features
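The retrieval-augmented generation (RAG) and vector-search skills listed above center on one operation: embedding documents and a query into vectors and retrieving the nearest matches to ground an LLM prompt. A minimal sketch of that retrieval step, using a toy bag-of-words "embedding" and cosine similarity as stand-ins for a real embedding model and vector database:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would call an embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query; the top-k results
    # would be included in the LLM prompt as grounding context.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "invoices are generated on the first of each month",
    "password resets require a verified email address",
    "refunds are processed within five business days",
]
context = retrieve("how long do refunds take", docs, k=1)
print(context)
```

Production systems swap the toy pieces for a learned embedding model and an index such as FAISS, Pinecone, or Weaviate, but the retrieve-then-prompt flow is the same.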
Posted 1 week ago
0 years
0 Lacs
Gurgaon
On-site
Who We Are
Boston Consulting Group partners with leaders in business and society to tackle their most important challenges and capture their greatest opportunities. BCG was the pioneer in business strategy when it was founded in 1963. Today, we help clients with total transformation - inspiring complex change, enabling organizations to grow, building competitive advantage, and driving bottom-line impact. To succeed, organizations must blend digital and human capabilities. Our diverse, global teams bring deep industry and functional expertise and a range of perspectives to spark change. BCG delivers solutions through leading-edge management consulting along with technology and design, corporate and digital ventures, and business purpose. We work in a uniquely collaborative model across the firm and throughout all levels of the client organization, generating results that allow our clients to thrive.

What You'll Do
As a Quality Engineer on the Marketing Datahub Squad, you'll join a team of passionate professionals dedicated to building and supporting BCG's next-generation data analytics foundation. Your work will enable personalized customer journeys and empower data-driven decisions by ensuring our analytics platform is stable, scalable, and reliable. The incumbent for this role will help improve and champion data quality and integrity throughout the data lake and other external systems. The candidate must be detail-oriented, open-minded, and interested in continuous learning, while being curious and unafraid to ask questions. They must be willing to innovate and initiate change, discover fresh solutions, and present innovative ideas while driving toward increased test automation. They must also work well in a global team environment and collaborate with peers and stakeholders.

- Champion data quality across our end-to-end pipeline: from various ingestion sources into Snowflake, through various transformations, to downstream analytics and reporting
- Perform integration and regression testing to ensure all system components work together successfully
- Design, execute, and automate test plans for various ETL solutions to ensure each batch and streaming job delivers accurate, timely data
- Develop and monitor checks via dbt tests and other tools that surface schema drift, record-count mismatches, null anomalies, and other integrity issues
- Track and manage defects in JIRA; work collaboratively with the Product Owner, Analysts, and Data Engineers to prioritize and resolve critical data bugs
- Maintain test documentation, including test strategies, test cases, and runbooks, ensuring clarity for both technical and business stakeholders
- Continuously improve our CI/CD pipelines (GitHub Actions) by integrating data quality gates and enhancing deployment reliability

What You'll Bring
- Agile SDLC & testing life cycle: proven track record of testing in agile environments with distributed teams
- Broad testing expertise: hands-on experience in functional, system, integration, and regression testing, applied specifically to data/ETL pipelines
- Data platform tools: practical experience with Snowflake, dbt, and Fivetran for building, transforming, and managing analytic datasets
- Cloud technologies: familiarity with AWS services (Lambda, Glue jobs, and other AWS data stack components) and Azure, including provisioning test environments and validating cloud-native data processes
- SQL mastery: ability to author and optimize complex queries to validate transformations, detect discrepancies, and generate automated checks
- Pipeline validation: testing data lake flows (ingest/extract), backend API services for data push/pull, and any data access or visualization layers
- Defect management: using JIRA for logging, triaging, and reporting on data defects, and Confluence for maintaining test docs and KPIs
- Source control & CI/CD: hands-on with Git for branching and code reviews; experience integrating tests into Jenkins or GitHub Actions
- Test planning & strategy: help define the scope, estimates, and development of test plans, test strategies, and test scripts through the iterations to ensure a quality product
- Quality metrics & KPIs: tracking and presenting KPIs for testing efforts, such as test coverage, gaps, hotfixes, and defect leakage
- Automation: experience writing end-to-end and/or functional integration automated tests using relevant test automation frameworks

Additional Info
You're good at:
- Data-focused testing: crafting and running complex SQL-driven validations, cross-environment comparisons, and sample-based checks in complex pipelines
- Automation mindset: identifying and implementing test automation solutions for regression, monitoring, and efficiency purposes
- Collaboration: partnering effectively with Data Engineers, Analytics, BI, and Product teams to translate requirements into testable scenarios that ensure a quality product
- Teamwork: being open, positive in a group dynamic, and a pleasure to work with; able to work collaboratively in virtual teams; a self-starter who is highly proactive
- Agile delivery: adapting to fast-moving sprints and contributing to sprint planning, retrospectives, and backlog grooming
- Proactivity: spotting gaps in coverage, proposing new test frameworks or tools, and driving adoption across the squad

Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity/expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer. Click here for more information on E-Verify.
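The data-quality checks this role describes (record-count mismatches, null anomalies) ultimately reduce to assertions over source and target datasets, whether expressed as dbt tests or ad-hoc scripts. A minimal sketch in plain Python, with hypothetical in-memory rows standing in for Snowflake query results:

```python
def check_counts(source_rows, target_rows):
    # Record-count reconciliation: every row extracted from source
    # should land in the target; a mismatch signals a dropped batch.
    return len(source_rows) == len(target_rows)

def check_not_null(rows, column):
    # Null-anomaly check: return the offending rows so a defect
    # report (e.g., a JIRA ticket) can include concrete examples.
    return [r for r in rows if r.get(column) is None]

source = [{"id": 1, "email": "a@x.com"}, {"id": 2, "email": None}]
target = [{"id": 1, "email": "a@x.com"}, {"id": 2, "email": None}]

counts_ok = check_counts(source, target)
bad = check_not_null(target, "email")
print(counts_ok, len(bad))
```

In a dbt project the same intent is expressed declaratively (e.g., a `not_null` test on the column), with the warehouse running the query; the Python form is just the underlying logic made explicit.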
Posted 1 week ago
8.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Role Overview
We are seeking a QA Manager with strong experience in test automation and a passion for AI. You will lead a team of QA engineers and work closely with cross-functional stakeholders to build robust testing frameworks, introduce intelligent automation, and ensure end-to-end product quality. You'll also play a key role in shaping how AI can be used to improve QA efficiency and software reliability.

Key Responsibilities
- Own the QA strategy, test planning, and execution for web, mobile, and API-based applications
- Lead, mentor, and grow a team of QA engineers and SDETs
- Design and implement automation frameworks using modern tools (e.g., Selenium, Cypress, Playwright, Appium)
- Evaluate and integrate AI-driven QA tools (e.g., Testim, Mabl, Functionize, Diffblue, ChatGPT-based test case generation)
- Drive continuous integration and delivery (CI/CD) of automated tests across environments
- Establish test data strategies using synthetic data generation and AI-based test data tools
- Collaborate with product managers, developers, and DevOps teams to define acceptance criteria and promote shift-left testing
- Monitor quality metrics and use analytics to improve test coverage, defect detection, and release velocity
- Stay abreast of emerging QA trends, especially in AI/ML validation, generative AI testing, and model interpretability QA

Required Skills & Qualifications
- 8+ years of experience in Quality Assurance, with at least 3 years in a managerial or leadership role
- Proven track record in building and scaling test automation for complex systems
- Experience with at least one programming language (Python, Java, or JavaScript preferred)
- Hands-on experience with AI-powered QA tools or building AI/ML pipelines with embedded QA
- Solid understanding of AI/ML concepts such as model training, inference, data drift, and validation
- Strong knowledge of testing practices: unit, integration, functional, performance, regression, security
- Experience with CI/CD tools (Jenkins, GitHub Actions, GitLab, etc.)
- Familiarity with cloud platforms (AWS, Azure, or GCP) and containerized environments (Docker, Kubernetes)
- Excellent leadership, communication, and stakeholder management skills

Nice to Have
- Exposure to MLOps or AI model lifecycle QA
- Experience in regulatory or enterprise-level compliance QA (e.g., SOC 2, GDPR)
- Contributions to open-source QA projects or AI QA communities
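Of the quality metrics mentioned above, defect leakage is the simplest to pin down: the share of all discovered defects that escaped QA into production. A quick illustration in Python (the counts are made up for the example):

```python
def defect_leakage(found_in_qa: int, found_in_production: int) -> float:
    # Leakage = defects that escaped to production as a fraction
    # of all defects found for the release.
    total = found_in_qa + found_in_production
    return found_in_production / total if total else 0.0

# Hypothetical release: 45 defects caught in QA, 5 escaped to production.
rate = defect_leakage(45, 5)
print(f"{rate:.0%}")
```

Tracking this per release, alongside coverage and hotfix counts, is what turns "monitor quality metrics" into a concrete dashboard.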
Posted 1 week ago
15.0 years
0 Lacs
Gurgaon, Haryana, India
Remote
Join the Future of Supply Chain Intelligence - Powered by Agentic AI

At Resilinc, we're not just solving supply chain problems - we're pioneering the intelligent, autonomous systems that will define its future. Our cutting-edge Agentic AI enables global enterprises to predict disruptions, assess impact instantly, and take real-time action - before operations are even touched. Recognized as a Leader in the 2025 Gartner® Magic Quadrant™ for Supply Chain Risk Management, we are trusted by marquee clients across life sciences, aerospace, high tech, and automotive to protect what matters most - from factory floors to patient care.

Our advantage isn't just technology - it's the largest supplier-validated data lake in the industry, built over 15 years and constantly enriched by our global intelligence network. It's how we deliver multi-tier visibility, real-time risk assessment, and adaptive compliance at scale.

But the real power behind Resilinc? Our people. We're a fully remote, mission-driven global team, united by one goal: ensuring vital products reach the people who need them - when and where they need them. Whether it's helping ensure cancer treatments arrive on time or flagging geopolitical risks before they disrupt critical supply lines, you'll see your impact every day. If you're passionate about building technology that matters, driven by purpose, and ready to be an agent of change shaping the next era of self-healing supply chains, we'd love to meet you.

Resilinc | Innovation with Purpose. Intelligence with Impact.

About The Role
At Resilinc, we build intelligent systems that safeguard the global supply chain. As a pioneer in supply chain risk management, we're pushing the boundaries of resilience with AI-powered platforms. We are building a team of forward-thinking Agent Hackers (AI SDETs) to join our mission. What's an Agent Hacker? It's not just a title - it's a mindset. You're the kind of engineer who goes beyond traditional QA, probing the limits of autonomous agents, reverse-engineering their behavior, and designing smart, self-evolving test frameworks. In this role, you'll be at the forefront of testing cutting-edge technologies, including Large Language Models (LLMs), AI agents, and Generative AI systems. You'll play a critical role in validating the performance, reliability, fairness, and transparency of AI-powered applications, ensuring they meet high standards for both quality and responsible use. If you think like a tester, code like a developer, and break systems like a hacker - Resilinc is your proving ground.

What You Will Do
- Develop and implement QA strategies for AI-powered applications, focusing on accuracy, bias, fairness, robustness, and performance
- Design and execute automated and manual test cases to validate AI agents, LLM models, APIs, and data pipelines, with a good understanding of data integrity and data models
- Assess AI models using quality metrics such as precision/recall and hallucination detection
- Test AI models for bias, fairness, explainability (XAI), drift, and adversarial robustness
- Validate prompt engineering, fine-tuning techniques, and model-generated responses for accuracy and ethical AI considerations
- Contribute to service and tool development
- Conduct scalability, latency, and performance testing for AI-driven applications
- Collaborate with data engineers to validate data pipelines, feature engineering processes, and model outputs
- Design, develop, and maintain automation scripts using Selenium and Playwright for API and web testing
- Work closely with cross-functional teams to integrate automation best practices into the development lifecycle
- Identify, document, and track bugs while conducting detailed regression testing to ensure product quality

What You Will Bring
- Proven expertise in testing AI models, LLMs, and Generative AI applications, with hands-on experience in AI evaluation metrics and testing tools like Arize, MAIHEM, and LangTest
- Strong proficiency in Python for writing test scripts and automating model validation, along with a deep understanding of AI bias detection, adversarial testing, model explainability (XAI), and AI robustness
- Strong SQL expertise for validating data integrity and backend processes, particularly in PostgreSQL and MySQL
- Strong analytical and problem-solving skills with keen attention to detail, along with excellent communication and documentation abilities to convey complex testing processes and results

Why You Will Love It Here
- Next-level QA: go beyond traditional testing to challenge AI agents, LLMs, and GenAI systems with intelligent, self-evolving test strategies
- Agentic AI frontier: be at the forefront of validating autonomous, ethical AI in high-impact applications trusted by global enterprises
- Full-stack test engineering: combine Python, SQL, and tools like LangTest, Arize, Selenium, and Playwright to test everything from APIs to AI fairness
- Purpose-driven mission: join a remote-first team that protects critical supply chains, ensuring vital products reach people when they need them most

What's in it for you?
At Resilinc, we're fully remote, with plenty of opportunities to connect in person. We provide a culture where ownership, purpose, technical growth, and a voice in shaping impactful technology are at our core. Oh, and the perks? Full-stack benefits to keep you thriving. Hit up your talent acquisition contact for a location-specific FAQ. Curious to know more about us? Dive in at www.resilinc.ai

If you are a person with a disability needing assistance with the application process, please contact HR@resilinc.com.
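Assessing a model with quality metrics such as precision and recall, as this role requires, comes down to counting prediction outcomes against ground truth. A minimal sketch in Python (the labels are illustrative, not real evaluation data):

```python
def precision_recall(y_true, y_pred):
    # Precision: of everything flagged positive, how much was right?
    # Recall: of everything actually positive, how much was found?
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# E.g., a hypothetical disruption-risk classifier scored against ground truth:
y_true = [1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
p, r = precision_recall(y_true, y_pred)
print(p, r)
```

Libraries such as scikit-learn provide these metrics directly; writing them out once makes clear what a regression in either number actually means when an automated model-validation check fails.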
Posted 1 week ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Working as an AI/ML Engineer at Navtech, you will:
- Design, develop, and deploy machine learning models for classification, regression, clustering, recommendations, or NLP tasks
- Clean, preprocess, and analyze large datasets to extract meaningful insights and features
- Work closely with data engineers to develop scalable and reliable data pipelines
- Experiment with different algorithms and techniques to improve model performance
- Monitor and maintain production ML models, including retraining and model drift detection
- Collaborate with software engineers to integrate ML models into applications and services
- Document processes, experiments, and decisions for reproducibility and transparency
- Stay current with the latest research and trends in machine learning and AI

Who are we looking for, exactly?
- 2-4 years of hands-on experience in building and deploying ML models in real-world applications
- Strong knowledge of Python and ML libraries such as Scikit-learn, TensorFlow, PyTorch, XGBoost, or similar
- Experience with data preprocessing, feature engineering, and model evaluation techniques
- Solid understanding of ML concepts such as supervised and unsupervised learning, overfitting, regularization, etc.
- Experience working with Jupyter, pandas, NumPy, and visualization libraries like Matplotlib or Seaborn
- Familiarity with version control (Git) and basic software engineering practices
- Strong verbal and written communication skills, as well as strong analytical and problem-solving abilities
- A master's or bachelor's (BS) degree in Computer Science, Software Engineering, IT, Technology Management, or a related field, with education in English medium throughout

We'll REALLY love you if you:
- Have knowledge of cloud platforms (AWS, Azure, GCP) and ML services (SageMaker, Vertex AI, etc.)
- Have knowledge of GenAI prompting and hosting of LLMs
- Have experience with NLP libraries (spaCy, Hugging Face Transformers, NLTK)
- Have familiarity with MLOps tools and practices (MLflow, DVC, Kubeflow, etc.)
- Have exposure to deep learning and neural network architectures
- Have knowledge of REST APIs and how to serve ML models (e.g., Flask, FastAPI, Docker)

Why Navtech?
- Performance review and appraisal twice a year
- Competitive pay package with additional bonus and benefits
- Work with US, UK, and Europe based industry-renowned clients for exponential technical growth
- Medical insurance cover for self and immediate family
- Work with a culturally diverse team from different geographies

About Us
Navtech is a premier IT software and services provider. Navtech's mission is to increase public cloud adoption and build cloud-first solutions that become trendsetting platforms of the future. We have been recognized as the Best Cloud Service Provider at GoodFirms for ensuring good results with quality services. Here, we strive to innovate and push technology and service boundaries to provide best-in-class technology solutions to our clients at scale. We deliver to our clients globally from our state-of-the-art design and development centers in the US and Hyderabad. We're a fast-growing company with clients in the United States, UK, and Europe. We are also a certified AWS partner. You will join a team of talented developers, quality engineers, and product managers whose mission is to impact over 100 million people across the world with technological services by the year 2030.

Navtech is looking for an AI/ML Engineer to join our growing data science and machine learning team. In this role, you will be responsible for building, deploying, and maintaining machine learning models and pipelines that power intelligent products and data-driven decisions.
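Model drift detection, one of the responsibilities listed above, can start as simply as comparing the distribution of a production feature against its training-time baseline. A toy sketch using a mean-shift check (the z-score threshold and the numbers are arbitrary assumptions, not a standard method):

```python
import statistics

def drifted(baseline, live, z_threshold=3.0):
    # Flag drift when the live mean falls more than z_threshold
    # standard errors away from the training-time mean.
    mu = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    se = sd / (len(live) ** 0.5)
    z = abs(statistics.mean(live) - mu) / se
    return z > z_threshold

baseline = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]   # feature values at training time
steady   = [10.1, 9.9, 10.3, 10.0]              # recent production values, no shift
shifted  = [14.8, 15.2, 15.1, 14.9]             # recent production values, shifted
print(drifted(baseline, steady), drifted(baseline, shifted))
```

Real drift monitors use richer distributional tests (e.g., KS tests or population stability index) and trigger retraining pipelines, but the comparison-against-baseline structure is the same.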
Posted 1 week ago
100.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Xerox Holdings Corporation
For more than 100 years, Xerox has continually redefined the workplace experience. Harnessing our leadership position in office and production print technology, we've expanded into software and services to sustainably power the hybrid workplace of today and tomorrow. Today, Xerox is continuing its legacy of innovation to deliver client-centric and digitally-driven technology solutions and meet the needs of today's global, distributed workforce. From the office to industrial environments, our differentiated business and technology offerings and financial services are essential workplace technology solutions that drive success for our clients. At Xerox, we make work, work. Learn more about us at www.xerox.com.

Purpose:
- Collaborate with development and operations teams to design, develop, and implement solutions for continuous integration, delivery, and deployment of ML models rapidly and with confidence
- Use managed online endpoints to deploy models across powerful CPU and GPU machines without managing the underlying infrastructure
- Package models quickly and ensure high quality at every step using model profiling and validation tools
- Optimize model training and deployment pipelines, build for CI/CD to facilitate retraining, and easily fit machine learning into existing release processes
- Use advanced data-drift analysis to improve model performance over time
- Build flexible and more secure end-to-end machine learning workflows using MLflow and Azure Machine Learning
- Seamlessly scale existing workloads from local execution to the intelligent cloud and edge
- Store MLflow experiments, run metrics, parameters, and model artifacts in the centralized workspace; track model version history and lineage for auditability
- Set compute quotas on resources and apply policies to ensure adherence to security, privacy, and compliance standards
- Use advanced capabilities to meet governance and control objectives and to promote model transparency and fairness
- Facilitate cross-workspace collaboration and MLOps with registries: host machine learning assets in a central location, making them available to all workspaces in your organization
- Promote, share, and discover models, environments, components, and datasets across teams; reuse pipelines and deploy models created by teams in other workspaces while keeping lineage and traceability intact

General:
Builds knowledge of the organization, processes, and customers. Requires knowledge and experience in own discipline while still acquiring higher-level knowledge and skills. Receives a moderate level of guidance and direction, with moderate decision-making authority guided by policies, procedures, and business operations protocol.

Technical Skills:
- Strong on ML pipelines and a modern tech stack
- Proven experience with MLOps using Azure, MLflow, etc.
- Experience with scripting and coding using Python
- Working experience with container technologies (Docker, Kubernetes)
- Familiarity with standard concepts and technologies used in CI/CD build and deployment pipelines
- Experience with relational databases (e.g., MS SQL Server) and NoSQL databases (e.g., MongoDB)
- Python and strong math skills (e.g., statistics)
- Problem-solving aptitude and excellent communication and presentation skills
- Automating and streamlining infrastructure, build, test, and deployment processes
- Monitoring and troubleshooting production issues and providing support to development and operations teams
- Managing and maintaining tools and infrastructure for continuous integration and delivery
- Managing and maintaining source control systems and branching strategies
- Strong knowledge of Linux/Unix administration
- Experience with configuration management tools like Ansible, Puppet, or Chef
- Strong understanding of networking, security, and storage
- Understanding and practice of Agile methodologies
- Proficiency and experience working as part of the Software Development Lifecycle (SDLC) using code management and release tools (MS DevOps, GitHub, Team Foundation Server)
- Above-average verbal, written, and presentation skills
Posted 1 week ago
100.0 years
0 Lacs
Kochi, Kerala, India
On-site
About Xerox Holdings Corporation
For more than 100 years, Xerox has continually redefined the workplace experience. Harnessing our leadership position in office and production print technology, we've expanded into software and services to sustainably power the hybrid workplace of today and tomorrow. Today, Xerox is continuing its legacy of innovation to deliver client-centric and digitally-driven technology solutions and meet the needs of today's global, distributed workforce. From the office to industrial environments, our differentiated business and technology offerings and financial services are essential workplace technology solutions that drive success for our clients. At Xerox, we make work, work. Learn more about us at www.xerox.com.

Designation: MLOps Engineer
Location: Kochi, India
Experience: 5-8 years
Qualification: B.Tech / MCA / BCA
Timings: 10 AM to 7 PM (IST)
Work Mode: Hybrid

Purpose:
- Collaborate with development and operations teams to design, develop, and implement solutions for continuous integration, delivery, and deployment of ML models rapidly and with confidence
- Use managed online endpoints to deploy models across powerful CPU and GPU machines without managing the underlying infrastructure
- Package models quickly and ensure high quality at every step using model profiling and validation tools
- Optimize model training and deployment pipelines, build for CI/CD to facilitate retraining, and easily fit machine learning into existing release processes
- Use advanced data-drift analysis to improve model performance over time
- Build flexible and more secure end-to-end machine learning workflows using MLflow and Azure Machine Learning
- Seamlessly scale existing workloads from local execution to the intelligent cloud and edge
- Store MLflow experiments, run metrics, parameters, and model artifacts in the centralized workspace; track model version history and lineage for auditability
- Set compute quotas on resources and apply policies to ensure adherence to security, privacy, and compliance standards
- Use advanced capabilities to meet governance and control objectives and to promote model transparency and fairness
- Facilitate cross-workspace collaboration and MLOps with registries: host machine learning assets in a central location, making them available to all workspaces in your organization
- Promote, share, and discover models, environments, components, and datasets across teams; reuse pipelines and deploy models created by teams in other workspaces while keeping lineage and traceability intact

General:
Builds knowledge of the organization, processes, and customers. Requires knowledge and experience in own discipline while still acquiring higher-level knowledge and skills. Receives a moderate level of guidance and direction, with moderate decision-making authority guided by policies, procedures, and business operations protocol.

Technical Skills:
- Strong on ML pipelines and a modern tech stack
- Proven experience with MLOps using Azure, MLflow, etc.
- Experience with scripting and coding using Python and shell scripts
- Working experience with container technologies (Docker, Kubernetes)
- Familiarity with standard concepts and technologies used in CI/CD build and deployment pipelines
- Experience in SQL and Python, plus strong math skills (e.g., statistics)
- Problem-solving aptitude and excellent communication and presentation skills
- Automating and streamlining infrastructure, build, test, and deployment processes
- Monitoring and troubleshooting production issues and providing support to development and operations teams
- Managing and maintaining tools and infrastructure for continuous integration and delivery
- Managing and maintaining source control systems and branching strategies
- Strong skills in scripting languages like Python, Bash, or PowerShell
- Strong knowledge of Linux/Unix administration
- Experience with configuration management tools like Ansible, Puppet, or Chef
- Strong understanding of networking, security, and storage
- Understanding and practice of Agile methodologies
- Proficiency and experience working as part of the Software Development Lifecycle (SDLC) using code management and release tools (MS DevOps, GitHub, Team Foundation Server)

Required:
- Proficiency and experience working with relational databases and SQL scripting (MS SQL Server)
- Above-average verbal, written, and presentation skills
Posted 1 week ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Role: Lead – Marketing Operations, Mar-Tech and Marketing Analytics Location: Gurugram (In-office, 5 days a week) Working Hours: 12:00 PM – 12:00 AM IST (aligned with EST overlap) Overview Leena AI is redefining how enterprises automate and resolve HR and IT queries through Agentic AI. We're seeking a data-driven, systems-savvy leader to run our Marketing Operations, Mar-Tech Stack and Marketing Analytics functions. This role is instrumental in enabling predictable pipeline generation and optimizing every lever of our GTM engine – from lead generation through lead capture, lead scoring, and lead routing, to lead conversion and insights. Our marketing and sales both run on Hubspot. The ideal candidate is a self starter who brings a rare blend of analytical rigor, systems thinking, and process excellence , and will serve as the operational backbone of a fast-scaling marketing organization. Marketing Operations (MOps) Mission: Build a high-precision GTM engine that scales with speed and accuracy. Responsibilities: Own end-to-end campaign operations: Campaign set up, A/B testing, lead capture (digital), lead upload (events), lead scoring, deduplication, routing, UTM governance, and detailed campaign performance tracking and ongoing optimization Partner with SDR, Sales Ops and RevOps to ensure accurate attribution, pipeline tracking, two-way feedback flows, and lifecycle stage transitions. Build and enforce SLAs across inbound workflows – MQL > SQL > Opportunity > Pipeline. Define and optimize lead scoring and grading models Develop standardized playbooks and QA processes for product launches, product rollouts, and global field initiatives. Set up and maintain campaign taxonomy and hierarchy, lead source taxonomy, program naming conventions and campaign hygiene in HubSpot. Mar-Tech Stack & Automation Mission: Deploy the most efficient, interoperable marketing technology stack in B2B SaaS. 
Responsibilities: Follow B2B SaaS best practices and layout a Mar-Tech architecture for the company for the coming couple of years. Update the architecture as Mar-Tech technologies and tools keep evolving Build and manage a Mar-Tech roadmap in alignment with growth and sales priorities. Lead rapid, cross-functional efforts to define business needs. Then own selection criteria and scoring, fast selection processes,, integration, and optimization of core platforms: HubSpot, Clearbit, ZoomInfo, Drift, 6sense, Segment, etc. Design and manage scalable workflows for campaign automation, nurture, retargeting, and enrichment. Serve as the technical lead for data syncs, API workflows, and tool interoperability across GTM systems. Conduct regular stack audits for performance, redundancy, and compliance. Lead the process to sunset/downscale technologies that are no longer needed/viable Drive experimentation through A/B tools, landing page builders, and personalization platforms. Marketing Analytics & Insights Mission: Be the single source of truth for go-to-market (GTM)performance and funnel diagnostics. Responsibilities: Connect with the day-to-day realities of our rapidly growing business to define analytics that would inform better business decisions, and get buy-in and ongoing use Define and track KPIs across acquisition, engagement, conversion, and velocity by segment and geo. Build dashboards and reports for channel performance, CAC, MQL-to-Close, funnel conversion, and ROI. Partner with Finance and RevOps for budget pacing, forecast accuracy, and marketing spend efficiency. Provide analytics support to product marketing, growth, events, and partnerships to enable insight-led decisions. Run lead scoring and attribution modeling and scenario analysis to guide investment across campaigns and markets. Lead monthly and quarterly business reviews, surfacing insights and recommending pivots. 
Qualifications
6–10 years of experience in marketing operations and analytics roles in a B2B SaaS company.
Proven track record of supporting $10M–$100M ARR growth through operational excellence.
Deep hands-on experience with HubSpot across marketing automation, workflows, segmentation, and reporting.
Strong understanding of GTM funnels, pipeline metrics, attribution models, and lifecycle marketing.
Excellent cross-functional collaborator with Sales, SDR, Product Marketing, and Growth teams.
An initiative-taker and "thinker and doer" who is highly structured, detail-oriented, and a hands-on problem solver and executor.
Bonus: you're a certified HubSpot whiz or power user with mastery of automation and CRM workflows.

🎯 Success = GTM Growth Enablement
This role is central to Leena AI's next stage of growth. Your success will be measured by:
Operational efficiency, stability, and reliability
Acceleration in MQL > Opportunity conversion rates
Improvements in pipeline velocity
Optimized CAC and campaign ROI
Scalable systems and data-driven decision making across the GTM engine
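The UTM governance work listed under the Marketing Operations responsibilities in this posting can be sketched as a small audit script. The `utm_*` parameter names are the standard convention; the allowed sources and mediums below are hypothetical, not a real taxonomy.

```python
# UTM governance sketch: validate a tracked URL against a naming taxonomy.
# Allowed sources/mediums are illustrative assumptions.
from urllib.parse import urlparse, parse_qs

ALLOWED = {
    "utm_source": {"google", "linkedin", "newsletter"},
    "utm_medium": {"cpc", "social", "email"},
}
REQUIRED = ("utm_source", "utm_medium", "utm_campaign")

def audit_utm(url: str) -> list[str]:
    """Return a list of governance violations for one tracked URL."""
    params = {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}
    issues = [f"missing {k}" for k in REQUIRED if k not in params]
    for key, allowed in ALLOWED.items():
        if key in params and params[key] not in allowed:
            issues.append(f"unexpected {key}={params[key]}")
    return issues

url = "https://example.com/?utm_source=google&utm_medium=cpc&utm_campaign=q3_launch"
print(audit_utm(url))  # []
```

Running an audit like this over all campaign links is one way to keep the taxonomy and campaign hygiene the role owns enforceable rather than aspirational.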
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About this opportunity:
This position plays a crucial role in the development of Python-based solutions, their deployment within a Kubernetes-based environment, and ensuring smooth data flow for our machine learning and data science initiatives. The ideal candidate will possess a strong foundation in Python programming, hands-on experience with ElasticSearch, Logstash, and Kibana (ELK), a solid grasp of fundamental Spark concepts, and familiarity with visualization tools such as Grafana and Kibana. Furthermore, a background in MLOps and expertise in both machine learning model development and deployment will be highly advantageous.

What you will do:
Python Development: Write clean, efficient, and maintainable Python code to support data engineering tasks, including data collection, transformation, and integration with machine learning models.
Data Pipeline Development: Design, develop, and maintain robust data pipelines that efficiently gather, process, and transform data from various sources into a format suitable for machine learning and data science tasks, using the ELK stack, Python, and other leading technologies.
Spark Knowledge: Apply basic Spark concepts for distributed data processing when necessary, optimizing data workflows for performance and scalability.
ELK Integration: Utilize ElasticSearch, Logstash, and Kibana (ELK) for data management, data indexing, and real-time data visualization. Knowledge of OpenSearch and its related stack would be beneficial.
Grafana and Kibana: Create and manage dashboards and visualizations using Grafana and Kibana to provide real-time insights into data and system performance.
Kubernetes Deployment: Deploy data engineering solutions and machine learning models to a Kubernetes-based environment, ensuring security, scalability, reliability, and high availability.
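The ELK indexing work described above usually funnels transformed records into ElasticSearch's `_bulk` endpoint, which takes newline-delimited JSON of alternating action and document lines. The index name and document shape below are hypothetical; only the payload format is the standard one.

```python
# Sketch: build an NDJSON payload for ElasticSearch's _bulk indexing API.
# Index name and document fields are illustrative assumptions.
import json

def to_bulk_payload(index: str, docs: list[dict]) -> str:
    """Pair each document with its 'index' action line, NDJSON-style."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index, "_id": doc["id"]}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"   # _bulk requires a trailing newline

docs = [{"id": 1, "service": "auth", "latency_ms": 42},
        {"id": 2, "service": "auth", "latency_ms": 57}]
payload = to_bulk_payload("service-metrics", docs)
print(payload.count("\n"))  # 4 lines: two action lines + two documents
```

A real pipeline would POST this payload via the official client or HTTP; the sketch only shows the transformation step a data engineer owns.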
What you will bring:
Machine Learning Model Development: Collaborate with data scientists to develop and implement machine learning models, ensuring they meet performance and accuracy requirements.
Model Deployment and Monitoring: Deploy machine learning models and implement monitoring solutions to track model performance, drift, and health.
Data Quality and Governance: Implement data quality checks and data governance practices to ensure data accuracy, consistency, and compliance with data privacy regulations.
MLOps (Added Advantage): Contribute to the implementation of MLOps practices, including model deployment, monitoring, and automation of machine learning workflows.
Documentation: Maintain clear and comprehensive documentation for data engineering processes, ELK configurations, machine learning models, visualizations, and deployments.

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible, and to build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?
Click Here to find all you need to know about what our typical hiring process looks like.

Encouraging a diverse and inclusive organization is core to our values at Ericsson, which is why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer.

Primary country and city: India (IN) || Bangalore
Req ID: 766745
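The drift monitoring mentioned above is often implemented with a statistic such as the Population Stability Index (PSI), comparing a model's live score distribution against its training baseline. The sketch below is generic; the bin count and the 0.2 alert threshold are common conventions used here as assumptions, not Ericsson's pipeline.

```python
# Population Stability Index sketch for detecting score/feature drift.
# Bin count and the 0.2 alert threshold are illustrative conventions.
import math

def psi(expected: list[float], actual: list[float], bins: int = 4) -> float:
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def frac(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1   # which bin v falls in
        return [max(c / len(values), 1e-6) for c in counts]  # avoid log(0)

    e, a = frac(expected), frac(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
current = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
print(psi(baseline, current) < 0.2)  # True: identical distributions, no drift
```

A monitoring job would compute this per feature or per score window and page when PSI crosses the alert threshold.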
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company: Qualcomm India Private Limited
Job Area: Information Technology Group, Information Technology Group > IT Data Engineer

General Summary
The developer will play an integral role in the PTEIT Machine Learning Data Engineering team: design, develop and support data pipelines in a hybrid cloud environment to enable advanced analytics, and design, develop and support CI/CD of data pipelines and services.
5+ years of experience with Python or equivalent programming using OOP, data structures and algorithms
Develop new services in AWS using serverless and container-based services.
3+ years of hands-on experience with the AWS suite of services (EC2, IAM, S3, CDK, Glue, Athena, Lambda, Redshift, Snowflake, RDS)
3+ years of expertise in scheduling data flows using Apache Airflow
3+ years of strong data modelling (functional, logical and physical) and data architecture experience in a Data Lake and/or Data Warehouse
3+ years of experience with SQL databases
3+ years of experience with CI/CD and DevOps using Jenkins
3+ years of experience with event-driven architecture, especially Change Data Capture
3+ years of experience in Apache Spark, SQL, Redshift or BigQuery or Snowflake, and Databricks
Deep understanding of building efficient data pipelines with data observability, data quality, schema drift handling, alerting and monitoring.
Good understanding of data catalogs, data governance, compliance, security, and data sharing
Experience in building reusable services across data processing systems.
Ability to work and contribute beyond defined responsibilities
Excellent communication and interpersonal skills with deep problem-solving skills.

Minimum Qualifications
3+ years of IT-related work experience with a Bachelor's degree in Computer Engineering, Computer Science, Information Systems or a related field, OR 5+ years of IT-related work experience without a Bachelor's degree.
2+ years of any combination of academic or work experience with programming (e.g., Java, Python).
1+ year of any combination of academic or work experience with SQL or NoSQL databases.
1+ year of any combination of academic or work experience with data structures and algorithms.
5 years of industry experience, with a minimum of 3 years of Data Engineering development experience at highly reputed organizations
Proficiency in Python and AWS
Excellent problem-solving skills
Deep understanding of data structures and algorithms
Proven experience in building cloud-native software, preferably with the AWS suite of services
Proven experience in designing and developing data models using an RDBMS (Oracle, MySQL, etc.)

Desirable
Exposure to or experience with other cloud platforms (Azure and GCP)
Experience working on the internals of large-scale distributed systems and databases such as Hadoop and Spark
Working experience with Data Lakehouse platforms (Onehouse, Databricks Lakehouse)
Working experience with Data Lakehouse file formats (Delta Lake, Iceberg, Hudi)
Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries).
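The Change Data Capture experience this posting asks for comes down to applying a stream of insert/update/delete events to a target store idempotently, so replays don't corrupt state. A minimal sketch with an in-memory SQLite table follows; the table schema and event shape are illustrative assumptions.

```python
# Change Data Capture sketch: apply insert/update/delete events idempotently
# to a target table. Schema and event shape are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

def apply_cdc(conn, events):
    for ev in events:
        if ev["op"] in ("insert", "update"):
            # Upsert keeps replayed events idempotent.
            conn.execute(
                "INSERT INTO customers (id, name) VALUES (?, ?) "
                "ON CONFLICT(id) DO UPDATE SET name = excluded.name",
                (ev["id"], ev["name"]),
            )
        elif ev["op"] == "delete":
            conn.execute("DELETE FROM customers WHERE id = ?", (ev["id"],))
    conn.commit()

events = [
    {"op": "insert", "id": 1, "name": "Asha"},
    {"op": "update", "id": 1, "name": "Asha K."},
    {"op": "insert", "id": 2, "name": "Ravi"},
    {"op": "delete", "id": 2},
]
apply_cdc(conn, events)
print(conn.execute("SELECT id, name FROM customers").fetchall())  # [(1, 'Asha K.')]
```

Production CDC adds ordering guarantees, schema evolution and dead-letter handling on top, but the upsert-or-delete core is the same.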
Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.

To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.

3074716
Posted 1 week ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
JOB DESCRIPTION

Role: DevOps / Site Reliability Engineer
Location: Pune
Experience: 7+ Years
Duration: 6 Months (Possible Extension)
Shift Timing: 11:30 AM – 9:30 PM IST

*About the Role*
We are looking for a highly skilled and experienced DevOps / Site Reliability Engineer to join on a contract basis. The ideal candidate will be hands-on with Kubernetes (preferably GKE), Infrastructure as Code (Terraform/Helm), and cloud-based deployment pipelines. This role demands deep system understanding, proactive monitoring, and infrastructure optimization skills.

*Key Responsibilities*
Design and implement resilient deployment strategies (Blue-Green, Canary, GitOps).
Configure and maintain observability tools (logs, metrics, traces, alerts).
Optimize backend service performance through code and infra reviews (Node.js, Django, Go, Java).
Tune and troubleshoot GKE workloads, HPA configs, ingress setups, and node pools.
Build and manage Terraform modules for infrastructure (VPC, CloudSQL, Pub/Sub, Secrets).
Lead or participate in incident response and root cause analysis using logs, traces, and dashboards.
Reduce configuration drift and standardize secrets, tagging, and infra consistency across environments.
Collaborate with engineering teams to enhance CI/CD pipelines and rollout practices.

*Required Skills & Experience*
5–10 years in DevOps, SRE, Platform, or Backend Infrastructure roles.
Strong coding/scripting skills and the ability to review production-grade backend code.
Hands-on experience with Kubernetes in production, preferably on GKE.
Proficiency in Terraform, Helm, GitHub Actions, and GitOps tools (ArgoCD or Flux).
Deep knowledge of cloud architecture (IAM, VPCs, Workload Identity, CloudSQL, Secret Management).
Systems thinking — understands failure domains, cascading issues, timeout limits, and recovery strategies.
Strong communication and documentation skills — capable of driving improvements through PRs and design reviews.
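The canary deployment strategy listed above ultimately reduces to a promote/rollback decision: compare the canary's error rate against the stable baseline over an analysis window. The tolerance and traffic numbers in this sketch are illustrative assumptions, not a real rollout policy.

```python
# Canary analysis sketch: promote only if the canary's error rate stays
# within a tolerance of the baseline's. Thresholds are illustrative.

def error_rate(requests: int, errors: int) -> float:
    return errors / requests if requests else 0.0

def canary_verdict(baseline: tuple[int, int], canary: tuple[int, int],
                   tolerance: float = 0.01) -> str:
    """Return 'promote' or 'rollback' for one analysis window."""
    base_rate = error_rate(*baseline)
    canary_rate = error_rate(*canary)
    return "promote" if canary_rate <= base_rate + tolerance else "rollback"

# 10k baseline requests with 50 errors vs 500 canary requests with 4 or 20 errors
print(canary_verdict((10_000, 50), (500, 4)))   # promote  (0.008 <= 0.015)
print(canary_verdict((10_000, 50), (500, 20)))  # rollback (0.04 > 0.015)
```

Tools like Argo Rollouts automate this loop against live metrics (e.g. from Datadog), but keeping the verdict logic this explicit makes the rollout policy reviewable in a PR.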
*Tech Stack & Tools*
Cloud & Orchestration: GKE, Kubernetes
IaC & CI/CD: Terraform, Helm, GitHub Actions, ArgoCD/Flux
Monitoring & Alerting: Datadog, PagerDuty
Databases & Networking: CloudSQL, Cloudflare
Security & Access Control: Secret Management, IAM

If interested, share your resume at aditya.dhumal@leanitcorp.com
Posted 1 week ago
15.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Join the Future of Supply Chain Intelligence — Powered by Agentic AI

At Resilinc, we’re not just solving supply chain problems — we’re pioneering the intelligent, autonomous systems that will define its future. Our cutting-edge Agentic AI enables global enterprises to predict disruptions, assess impact instantly, and take real-time action — before operations are even touched. Recognized as a Leader in the 2025 Gartner® Magic Quadrant™ for Supply Chain Risk Management, we are trusted by marquee clients across life sciences, aerospace, high tech, and automotive to protect what matters most — from factory floors to patient care.

Our advantage isn’t just technology — it’s the largest supplier-validated data lake in the industry, built over 15 years and constantly enriched by our global intelligence network. It’s how we deliver multi-tier visibility, real-time risk assessment, and adaptive compliance at scale.

But the real power behind Resilinc? Our people. We’re a fully remote, mission-driven global team, united by one goal: ensuring vital products reach the people who need them — when and where they need them. Whether it’s helping ensure cancer treatments arrive on time or flagging geopolitical risks before they disrupt critical supply lines, you’ll see your impact every day.

If you're passionate about building technology that matters, driven by purpose, and ready to be an agent of change shaping the next era of self-healing supply chains, we’d love to meet you.

Resilinc | Innovation with Purpose. Intelligence with Impact.

About The Role
At Resilinc, we build intelligent systems that safeguard the global supply chain. As a pioneer in supply chain risk management, we’re pushing the boundaries of resilience with AI-powered platforms. We are building a team of forward-thinking Agent Hackers (AI SDETs) to join our mission. What’s an Agent Hacker? It’s not just a title — it’s a mindset.
You’re the kind of engineer who goes beyond traditional QA, probing the limits of autonomous agents, reverse-engineering their behavior, and designing smart, self-evolving test frameworks. In this role, you’ll be at the forefront of testing cutting-edge technologies, including Large Language Models (LLMs), AI agents, and Generative AI systems. You’ll play a critical role in validating the performance, reliability, fairness, and transparency of AI-powered applications—ensuring they meet high standards for both quality and responsible use. If you think like a tester, code like a developer, and break systems like a hacker — Resilinc is your proving ground.

What You Will Do
Develop and implement QA strategies for AI-powered applications, focusing on accuracy, bias, fairness, robustness, and performance.
Design and execute automated and manual test cases to validate AI agents, LLM models, APIs, and data pipelines, applying a solid understanding of data integrity and data models.
Assess AI models using quality metrics such as precision/recall and hallucination detection.
Test AI models for bias, fairness, explainability (XAI), drift, and adversarial robustness.
Validate prompt engineering, fine-tuning techniques, and model-generated responses for accuracy and ethical AI considerations.
Develop supporting services and internal tools.
Conduct scalability, latency, and performance testing for AI-driven applications.
Collaborate with data engineers to validate data pipelines, feature engineering processes, and model outputs.
Design, develop, and maintain automation scripts using Selenium and Playwright for API and web testing.
Work closely with cross-functional teams to integrate automation best practices into the development lifecycle.
Identify, document, and track bugs while conducting detailed regression testing to ensure product quality.
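The precision/recall evaluation mentioned above can be written in a few lines. The label arrays here are toy data for illustration, not output from a real model.

```python
# Precision/recall sketch for evaluating a binary classifier's predictions.
# Label arrays are toy data.

def precision_recall(y_true: list[int], y_pred: list[int]) -> tuple[float, float]:
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))       # true positives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred)) # false alarms
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred)) # misses
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 1, 0, 1, 0]
print(precision_recall(y_true, y_pred))  # (0.75, 0.75)
```

For LLM outputs the same metric applies once each response is judged correct/incorrect, e.g. by a labeled evaluation set or an automated grader.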
What You Will Bring
Proven expertise in testing AI models, LLMs, and Generative AI applications, with hands-on experience in AI evaluation metrics and testing tools like Arize, MAIHEM, and LangTest.
Strong proficiency in Python for writing test scripts and automating model validation, along with a deep understanding of AI bias detection, adversarial testing, model explainability (XAI), and AI robustness.
Strong SQL expertise for validating data integrity and backend processes, particularly in PostgreSQL and MySQL.
Strong analytical and problem-solving skills with keen attention to detail, along with excellent communication and documentation abilities to convey complex testing processes and results.

Why You Will Love It Here
🧠 Next-Level QA – Go beyond traditional testing to challenge AI agents, LLMs, and GenAI systems with intelligent, self-evolving test strategies
🤖 Agentic AI Frontier – Be at the forefront of validating autonomous, ethical AI in high-impact applications trusted by global enterprises
🛠️ Full-Stack Test Engineering – Combine Python, SQL, and tools like LangTest, Arize, Selenium & Playwright to test everything from APIs to AI fairness
🌍 Purpose-Driven Mission – Join a remote-first team that protects critical supply chains — ensuring vital products reach people when they need them most

What's in it for you?
At Resilinc, we’re fully remote, with plenty of opportunities to connect in person. We provide a culture where ownership, purpose, technical growth and a voice in shaping impactful technology are at our core. Oh, and the perks? Full-stack benefits to keep you thriving. Hit up your talent acquisition contact for a location-specific FAQ. Curious to know more about us? Dive in at www.resilinc.ai

If you are a person with a disability needing assistance with the application process please contact HR@resilinc.com.
Posted 1 week ago
15.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Join the Future of Supply Chain Intelligence — Powered by Agentic AI

At Resilinc, we’re not just solving supply chain problems — we’re pioneering the intelligent, autonomous systems that will define its future. Our cutting-edge Agentic AI enables global enterprises to predict disruptions, assess impact instantly, and take real-time action — before operations are even touched. Recognized as a Leader in the 2025 Gartner® Magic Quadrant™ for Supply Chain Risk Management, we are trusted by marquee clients across life sciences, aerospace, high tech, and automotive to protect what matters most — from factory floors to patient care.

Our advantage isn’t just technology — it’s the largest supplier-validated data lake in the industry, built over 15 years and constantly enriched by our global intelligence network. It’s how we deliver multi-tier visibility, real-time risk assessment, and adaptive compliance at scale.

But the real power behind Resilinc? Our people. We’re a fully remote, mission-driven global team, united by one goal: ensuring vital products reach the people who need them — when and where they need them. Whether it’s helping ensure cancer treatments arrive on time or flagging geopolitical risks before they disrupt critical supply lines, you’ll see your impact every day.

If you're passionate about building technology that matters, driven by purpose, and ready to be an agent of change shaping the next era of self-healing supply chains, we’d love to meet you.

Resilinc | Innovation with Purpose. Intelligence with Impact.

About The Role
At Resilinc, we build intelligent systems that safeguard the global supply chain. As a pioneer in supply chain risk management, we’re pushing the boundaries of resilience with AI-powered platforms. We are building a team of forward-thinking Agent Hackers (AI SDETs) to join our mission. What’s an Agent Hacker? It’s not just a title — it’s a mindset.
You’re the kind of engineer who goes beyond traditional QA, probing the limits of autonomous agents, reverse-engineering their behavior, and designing smart, self-evolving test frameworks. In this role, you’ll be at the forefront of testing cutting-edge technologies, including Large Language Models (LLMs), AI agents, and Generative AI systems. You’ll play a critical role in validating the performance, reliability, fairness, and transparency of AI-powered applications—ensuring they meet high standards for both quality and responsible use. If you think like a tester, code like a developer, and break systems like a hacker — Resilinc is your proving ground.

What You Will Do
Develop and implement QA strategies for AI-powered applications, focusing on accuracy, bias, fairness, robustness, and performance.
Design and execute automated and manual test cases to validate AI agents, LLM models, APIs, and data pipelines, applying a solid understanding of data integrity and data models.
Assess AI models using quality metrics such as precision/recall and hallucination detection.
Test AI models for bias, fairness, explainability (XAI), drift, and adversarial robustness.
Validate prompt engineering, fine-tuning techniques, and model-generated responses for accuracy and ethical AI considerations.
Develop supporting services and internal tools.
Conduct scalability, latency, and performance testing for AI-driven applications.
Collaborate with data engineers to validate data pipelines, feature engineering processes, and model outputs.
Design, develop, and maintain automation scripts using Selenium and Playwright for API and web testing.
Work closely with cross-functional teams to integrate automation best practices into the development lifecycle.
Identify, document, and track bugs while conducting detailed regression testing to ensure product quality.
What You Will Bring
Proven expertise in testing AI models, LLMs, and Generative AI applications, with hands-on experience in AI evaluation metrics and testing tools like Arize, MAIHEM, and LangTest.
Strong proficiency in Python for writing test scripts and automating model validation, along with a deep understanding of AI bias detection, adversarial testing, model explainability (XAI), and AI robustness.
Strong SQL expertise for validating data integrity and backend processes, particularly in PostgreSQL and MySQL.
Strong analytical and problem-solving skills with keen attention to detail, along with excellent communication and documentation abilities to convey complex testing processes and results.

Why You Will Love It Here
🧠 Next-Level QA – Go beyond traditional testing to challenge AI agents, LLMs, and GenAI systems with intelligent, self-evolving test strategies
🤖 Agentic AI Frontier – Be at the forefront of validating autonomous, ethical AI in high-impact applications trusted by global enterprises
🛠️ Full-Stack Test Engineering – Combine Python, SQL, and tools like LangTest, Arize, Selenium & Playwright to test everything from APIs to AI fairness
🌍 Purpose-Driven Mission – Join a remote-first team that protects critical supply chains — ensuring vital products reach people when they need them most

What's in it for you?
At Resilinc, we’re fully remote, with plenty of opportunities to connect in person. We provide a culture where ownership, purpose, technical growth and a voice in shaping impactful technology are at our core. Oh, and the perks? Full-stack benefits to keep you thriving. Hit up your talent acquisition contact for a location-specific FAQ. Curious to know more about us? Dive in at www.resilinc.ai

If you are a person with a disability needing assistance with the application process please contact HR@resilinc.com.
Posted 1 week ago
100.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Xerox Holdings Corporation
For more than 100 years, Xerox has continually redefined the workplace experience. Harnessing our leadership position in office and production print technology, we’ve expanded into software and services to sustainably power the hybrid workplace of today and tomorrow. Today, Xerox is continuing its legacy of innovation to deliver client-centric and digitally-driven technology solutions and meet the needs of today’s global, distributed workforce. From the office to industrial environments, our differentiated business and technology offerings and financial services are essential workplace technology solutions that drive success for our clients. At Xerox, we make work, work. Learn more about us at www.xerox.com.

Purpose:
Collaborate with development and operations teams to design, develop, and implement continuous integration, delivery, and deployment solutions so that ML models ship rapidly and with confidence.
Use managed online endpoints to deploy models across powerful CPU and GPU machines without managing the underlying infrastructure.
Package models quickly and ensure high quality at every step using model profiling and validation tools.
Optimize model training and deployment pipelines, build for CI/CD to facilitate retraining, and easily fit machine learning into your existing release processes.
Use advanced data-drift analysis to improve model performance over time.
Build flexible and more secure end-to-end machine learning workflows using MLflow and Azure Machine Learning.
Seamlessly scale your existing workloads from local execution to the intelligent cloud and edge.
Store your MLflow experiments, run metrics, parameters, and model artifacts in the centralized workspace; track model version history and lineage for auditability.
Set compute quotas on resources and apply policies to ensure adherence to security, privacy, and compliance standards.
Use the advanced capabilities to meet governance and control objectives and to promote model transparency and fairness. Facilitate cross-workspace collaboration and MLOps with registries: host machine learning assets in a central location, making them available to all workspaces in your organization; promote, share, and discover models, environments, components, and datasets across teams; and reuse pipelines and deploy models created by teams in other workspaces while keeping lineage and traceability intact.

General:
Builds knowledge of the organization, processes and customers.
Requires knowledge and experience in own discipline; still acquiring higher-level knowledge and skills.
Receives a moderate level of guidance and direction.
Moderate decision-making authority guided by policies, procedures, and business operations protocol.

Technical Skills:
Strong on ML pipelines and a modern tech stack.
Proven MLOps experience with Azure, MLflow, etc.
Experience with scripting and coding using Python.
Working experience with container technologies (Docker, Kubernetes).
Familiarity with standard concepts and technologies used in CI/CD build and deployment pipelines.
Experience with relational databases (e.g. MS SQL Server) and NoSQL databases (e.g. MongoDB).
Python and strong math skills (e.g. statistics).
Problem-solving aptitude and excellent communication and presentation skills.
Automating and streamlining infrastructure, build, test, and deployment processes.
Monitoring and troubleshooting production issues and providing support to development and operations teams.
Managing and maintaining tools and infrastructure for continuous integration and delivery.
Managing and maintaining source control systems and branching strategies.
Strong knowledge of Linux/Unix administration.
Experience with configuration management tools like Ansible, Puppet, or Chef.
Strong understanding of networking, security, and storage.
Understanding and practice of Agile methodologies.
Proficiency and experience working as part of the Software Development Lifecycle (SDLC) using code management and release tools (MS DevOps, GitHub, Team Foundation Server).
Above-average verbal, written and presentation skills.
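The centralized experiment tracking this posting describes (MLflow runs with metrics, parameters, and model artifacts) reduces to a simple idea that can be sketched with the standard library. The `RunStore` class below is a hypothetical stand-in for illustration, not the MLflow API.

```python
# Sketch of experiment tracking reduced to its core: each run records
# params, metrics, and an artifact path in a central store, so the best
# model version can be found and audited later. Not the MLflow API.
import time
import uuid

class RunStore:
    def __init__(self):
        self.runs = {}

    def log_run(self, params: dict, metrics: dict, artifact: str) -> str:
        run_id = uuid.uuid4().hex
        self.runs[run_id] = {"params": params, "metrics": metrics,
                             "artifact": artifact, "ts": time.time()}
        return run_id

    def best_run(self, metric: str) -> dict:
        """Pick the run that maximizes the given metric."""
        return max(self.runs.values(), key=lambda r: r["metrics"][metric])

store = RunStore()
store.log_run({"lr": 0.1}, {"accuracy": 0.81}, "models/v1")
store.log_run({"lr": 0.01}, {"accuracy": 0.87}, "models/v2")
print(store.best_run("accuracy")["artifact"])  # models/v2
```

MLflow and Azure ML add persistence, lineage, and access control around this same record-and-query pattern.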
Posted 1 week ago
1.0 - 4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Overview
Cvent is a leading meetings, events and hospitality technology provider with more than 4,800 employees and nearly 22,000 customers worldwide. Founded in 1999, the company delivers a comprehensive event marketing and management platform for event professionals and offers software solutions to hotels, special event venues and destinations to help them grow their group/MICE and corporate travel business.

The DNA of Cvent is our people, and our culture has an emphasis on fostering intrapreneurship, a system that encourages Cventers to think and act like individual entrepreneurs and empowers them to act, embrace risk, and make decisions as if they had founded the company themselves. We foster an environment that promotes agility, which means we don’t have the luxury to wait for perfection. At Cvent, we value the diverse perspectives that each individual brings. Whether working with a team of colleagues or with clients, we ensure that we foster a culture that celebrates differences and builds shared connections.

About The Role
As a key member of our Marketing Technology and Automation team, you will play a crucial role in leveraging technology to automate and elevate our global marketing programs. This position requires experience with marketing technology, as you will be an administrator of our marketing automation platform, Marketo. You will work closely with various teams to implement initiatives, support marketing system administration, ensure governance, and analyze performance.

Your responsibilities will include developing and executing programs in Marketo to drive demand generation and enhance prospect and customer engagement. You will also support lead nurturing, scoring, dynamic segmentation, and database optimization efforts. Additionally, you will manage integrations with Marketo, Salesforce, and other marketing technologies, while proactively researching and implementing the latest best practices and strategies.
Join us in this exciting opportunity to make a significant impact on our marketing automation efforts, drive demand generation, and contribute to the growth and success of Cvent.

In This Role, You Will
Develop and execute programs in Marketo to drive demand generation and increase prospect and customer engagement.
Support essential initiatives like lead nurturing, scoring, dynamic segmentation, and database optimization.
Maintain and support integrations to Marketo, Salesforce, and other marketing technologies.
Manage marketing automation efforts and processes, proactively researching and implementing the latest best practices, strategies, and industry standards.
Design and execute data management programs to bring better alignment between systems.
Build and analyze reporting to show technical and automation effectiveness and trends.

Here's What You Need
1-4 years of experience using a marketing automation tool (Marketo preferred; HubSpot, Salesforce Marketing Cloud, or Eloqua also welcomed).
Understanding of marketing automation and demand generation concepts and the ability to implement them using a marketing automation platform.
Attention to detail, deadlines, and the ability to prioritize and execute multiple tasks.
Excellent communication, problem-solving, teamwork, and future-thinking skills.
Ability to dig in to understand user requirements and expectations and deliver on them.
Fair understanding of a CRM system (preferably Salesforce) and its setup.
Experience with integrated marketing tools like Marketo, Salesforce, Cvent, 6sense, Reachdesk, Drift, Bizible, Vidyard, and more.
Experience working in a fast-paced, collaborative environment.
Demonstrated ability working with a globally dispersed team.
Basic knowledge of HTML.
Posted 1 week ago
1.0 - 2.0 years
0 Lacs
Hyderābād
On-site
Join our applied-ML team to help turn data into product features—recommendation engines, predictive scores, and intelligent dashboards that ship to real users. You’ll prototype quickly, validate with metrics, and productionise models alongside senior ML engineers.

Day-to-Day Responsibilities
Clean, explore, and validate datasets (Pandas, NumPy, SQL)
Build and evaluate ML/DL models (scikit-learn, TensorFlow / PyTorch)
Develop reproducible pipelines using notebooks → scripts → Airflow / Kubeflow
Participate in feature engineering, hyper-parameter tuning, and model-selection experiments
Package and expose models as REST/gRPC endpoints; monitor drift & accuracy in prod
Share insights with stakeholders through visualisations and concise reports

Must-Have Skills
1–2 years building ML models in Python
Solid understanding of supervised-learning workflows (train/validate/test, cross-validation, metrics)
Practical experience with at least one deep-learning framework (TensorFlow or PyTorch)
Strong data-wrangling skills (Pandas, SQL) and basic statistics (A/B testing, hypothesis testing)
Version-control discipline (Git) and comfort with Jupyter-based experimentation

Good-to-Have
Familiarity with MLOps tooling (MLflow, Weights & Biases, SageMaker)
Exposure to cloud data platforms (BigQuery, Snowflake, Redshift)
Knowledge of NLP or CV libraries (spaCy, Hugging Face Transformers, OpenCV)
Experience containerising ML services with Docker and orchestrating with Kubernetes
Basic understanding of data-privacy and responsible-AI principles

Job Types: Full-time, Permanent
Pay: From ₹19,100.00 per month
Benefits: Health insurance, Life insurance, Paid sick time, Paid time off, Provident Fund
Schedule: Fixed shift, Monday to Friday
Experience: Junior Machine-Learning Engineer: 1 year (Preferred)
Work Location: In person
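The cross-validation workflow listed under Must-Have Skills above can be sketched as plain index bookkeeping: split the data into k folds, and let each fold serve once as the validation set. This is a toy stand-in for scikit-learn's `KFold`, shown to make the mechanics explicit.

```python
# Simple k-fold cross-validation index generation (toy stand-in for
# scikit-learn's KFold; no shuffling, last fold absorbs the remainder).

def k_fold_indices(n_samples: int, k: int) -> list[tuple[list[int], list[int]]]:
    """Return (train_idx, val_idx) pairs; every sample is validated once."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    folds = []
    for i in range(k):
        start = i * fold_size
        stop = n_samples if i == k - 1 else start + fold_size
        val = indices[start:stop]
        train = indices[:start] + indices[stop:]
        folds.append((train, val))
    return folds

folds = k_fold_indices(10, 5)
print(len(folds), folds[0][1])  # 5 [0, 1]
```

In a real experiment each (train, val) pair feeds one fit/score cycle, and the k validation scores are averaged to compare models or hyper-parameters.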
Posted 1 week ago
3.0 years
0 - 0 Lacs
Coimbatore
Remote
ML Engineer | 3+ years | Remote | Work Timing: Standard IST
Job Description: We are looking for a skilled Machine Learning Engineer with hands-on experience deploying models on Google Cloud Platform (GCP) using Vertex AI. This role involves enabling real-time and batch model inferencing based on specific business requirements, with a strong focus on production-grade ML deployments.
Key Responsibilities:
Deploy machine learning models on GCP using Vertex AI.
Design and implement real-time and batch inference pipelines.
Monitor model performance, detect drift, and manage the model lifecycle.
Ensure adherence to model governance best practices and support MLOps workflows.
Collaborate with cross-functional teams to support Credit Risk, Marketing, and Customer Service use cases, especially within the retail banking domain.
Develop scalable and maintainable code in Python and SQL.
Work with diverse datasets, perform feature engineering, and build, train, and fine-tune advanced predictive models.
Contribute to model deployment in the lending space.
Required Skills & Experience:
Strong expertise in Python and SQL.
Proficient with ML libraries and frameworks such as scikit-learn, pandas, NumPy, spaCy, and CatBoost.
In-depth knowledge of GCP Vertex AI and ML pipeline orchestration.
Experience with MLOps and model governance.
Exposure to use cases in retail banking: Credit Risk, Marketing, and Customer Service.
Experience working with structured and unstructured data.
Nice to Have:
Prior experience deploying models in the lending domain.
Understanding of regulatory considerations in financial services.
Job Type: Contractual / Temporary
Contract length: 6 months
Pay: ₹70,000.00 - ₹80,000.00 per month
Benefits: Work from home
Schedule: Monday to Friday, Morning shift, UK shift, US shift
Application Question(s): Are you ready to move onsite (Bangalore/Pune)?
Education: Bachelor's (Preferred)
Experience: ML Engineer: 3 years (Required)
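One concrete form the drift-detection responsibility above often takes is a Population Stability Index (PSI) check comparing live model scores against a training-time baseline. A minimal stdlib sketch, where the bin count and the 1e-4 empty-bin floor are illustrative choices (Vertex AI's built-in model monitoring would normally compute this for you):

```python
import math
from typing import Sequence

def psi(expected: Sequence[float], actual: Sequence[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline sample and a live sample.
    0 means identical distributions; values above ~0.2 usually flag drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def bin_fracs(xs: Sequence[float]) -> list:
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # small floor avoids log(0) on empty bins
        return [max(c / len(xs), 1e-4) for c in counts]

    e, a = bin_fracs(expected), bin_fracs(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A batch pipeline would run this on each scoring run's outputs and page the on-call engineer (or trigger retraining) when the index crosses a threshold.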
Posted 1 week ago
0 years
0 Lacs
India
Remote
Job Listing Detail Summary
Gainwell is seeking LLM Ops Engineers and ML Ops Engineers to join our growing AI/ML team. This role is responsible for developing, deploying, and maintaining scalable infrastructure and pipelines for Machine Learning (ML) models and Large Language Models (LLMs). You will play a critical role in ensuring smooth model lifecycle management, performance monitoring, version control, and compliance while collaborating closely with Data Scientists and DevOps.
Your role in our mission
Core LLM Ops Responsibilities:
Develop and manage scalable deployment strategies specifically tailored for LLMs (GPT, Llama, Claude, etc.).
Optimize LLM inference performance, including model parallelization, quantization, pruning, and fine-tuning pipelines.
Integrate prompt management, version control, and retrieval-augmented generation (RAG) pipelines.
Manage vector databases, embedding stores, and document stores used in conjunction with LLMs.
Monitor hallucination rates, token usage, and overall cost optimization for LLM APIs or on-prem deployments.
Continuously monitor models for performance and ensure an alert system is in place.
Ensure compliance with ethical AI practices, privacy regulations, and responsible AI guidelines in LLM workflows.
Core ML Ops Responsibilities:
Design, build, and maintain robust CI/CD pipelines for ML model training, validation, deployment, and monitoring.
Implement version control, model registry, and reproducibility strategies for ML models.
Automate data ingestion, feature engineering, and model retraining workflows.
Monitor model performance and drift, and ensure proper alerting systems are in place.
Implement security, compliance, and governance protocols for model deployment.
Collaborate with Data Scientists to streamline model development and experimentation.
What we're looking for
Bachelor's/Master's degree in Computer Science, Engineering, or related fields.
Strong experience with ML Ops tools (Kubeflow, MLflow, TFX, SageMaker, etc.).
Experience with LLM-specific tools and frameworks (LangChain, LangGraph, LlamaIndex, Hugging Face, OpenAI APIs, and vector DBs such as Pinecone, FAISS, Weaviate, and Chroma DB).
Solid experience deploying models in cloud (AWS, Azure, GCP) and on-prem environments.
Proficient in containerization (Docker, Kubernetes) and CI/CD practices.
Familiarity with monitoring tools like Prometheus, Grafana, and ML observability platforms.
Strong coding skills in Python and Bash, and familiarity with infrastructure-as-code tools (Terraform, Helm, etc.).
Knowledge of healthcare AI applications and regulatory compliance (HIPAA, CMS) is a plus.
Strong skills in evaluation frameworks such as Giskard and DeepEval.
What you should expect in this role
Fully Remote Opportunity – Work from anywhere in India
Minimal Travel Required – Occasional travel opportunities (0-10%).
Opportunity to Work on Cutting-Edge AI Solutions in a mission-driven healthcare technology environment.
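The model-registry and versioning responsibilities above can be illustrated with a toy in-memory registry. The register → promote → archive flow mirrors what MLflow's Model Registry provides; all names and the single-production-version policy here are illustrative assumptions, not Gainwell's design:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class ModelVersion:
    version: int
    uri: str             # where the serialized artifact lives
    stage: str = "None"  # None -> Staging -> Production -> Archived

@dataclass
class ModelRegistry:
    """Toy registry: auto-incrementing versions, one live version per model."""
    models: Dict[str, List[ModelVersion]] = field(default_factory=dict)

    def register(self, name: str, uri: str) -> ModelVersion:
        versions = self.models.setdefault(name, [])
        mv = ModelVersion(version=len(versions) + 1, uri=uri)
        versions.append(mv)
        return mv

    def promote(self, name: str, version: int, stage: str) -> None:
        if stage == "Production":  # archive the previous live version first
            for mv in self.models[name]:
                if mv.stage == "Production":
                    mv.stage = "Archived"
        self.models[name][version - 1].stage = stage

    def production(self, name: str) -> Optional[ModelVersion]:
        return next((mv for mv in self.models[name] if mv.stage == "Production"), None)
```

A CI/CD pipeline would call `register` after a successful training run and `promote` only after validation gates pass.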
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Maersk, the world's largest shipping company, is transforming into an industrial digital giant that enables global trade with its land, sea, and port assets. We are the digital and software development organization that builds products in the areas of predictive science, optimization, and IoT. This position offers the opportunity to build your engineering career in a data- and analytics-intensive environment, delivering work that has a direct and significant impact on the success of our company. Global Data Analytics delivers internal apps to grow revenue and optimize costs across Maersk's business units. We practice agile development in teams empowered to deliver products end-to-end, for which data and analytics are crucial assets. This is an extremely exciting time to join a fast-paced, growing, and dynamic team that solves some of the toughest problems in the industry and builds the future of trade and logistics. We are an open-minded, friendly, and supportive group who strive for excellence together. A.P. Moller - Maersk maintains a strong focus on career development, and strong team members regularly have broad possibilities to expand their skill set and impact in an environment characterized by change and continuous progress.
The team - who are we:
We are an ambitious team with the shared passion to use data, machine learning (ML), and engineering excellence to make a difference for our customers. We are a team, not a collection of individuals. We value our diverse backgrounds, our different personalities, and our strengths & weaknesses. We value trust and passionate debates. We challenge each other and hold each other accountable. We uphold a caring feedback culture to help each other grow, professionally and personally.
We are now seeking a new team member who is excited about using experiments at scale and ML-driven personalisation to create a seamless experience for our users, helping them find the products and content they didn’t even know they were looking for, and driving engagement and business value.
Our new member - who are you
You are driven by curiosity and are passionate about partnering with a diverse range of business and tech colleagues to deeply understand their customers, uncover new opportunities, advise and support them in the design, execution, and analysis of experiments, and develop ML solutions for ML-driven personalisation (e.g., supervised or unsupervised) that drive substantial customer and business impact. You will use your expertise in experiment design, data science, causal inference, and machine learning to stimulate data-driven innovation. This is an incredibly exciting role with high impact. You are, like us, a team player who cares about your team members, about growing professionally and personally, about helping your teammates grow, and about having fun together.
Basic Qualifications:
Bachelor’s or master’s degree in Computer Science, Software Engineering, Data Science, or a related field
3–5 years of professional experience designing, building, and maintaining scalable data pipelines, in both on-premises and cloud (Azure preferred) environments.
Strong expertise in working with large datasets from Salesforce, port operations, cargo tracking, and other enterprise systems.
Proficient in writing scalable, high-quality SQL queries and Python code, with object-oriented programming experience and a solid grasp of data structures and algorithms.
Experience with software engineering best practices, including version control (Git), CI/CD pipelines, code reviews, and writing unit/integration tests.
Familiarity with containerization and orchestration tools (Docker, Kubernetes) for data workflows and microservices.
Hands-on experience with distributed data systems (e.g., Spark, Kafka, Delta Lake, Hadoop).
Experience in data modelling and with workflow orchestration tools like Airflow.
Ability to support ML engineers and data scientists by building production-grade data pipelines.
Demonstrated experience collaborating with product managers, domain experts, and stakeholders to translate business needs into robust data infrastructure.
Strong analytical and problem-solving skills, with the ability to work in a fast-paced, global, and cross-functional environment.
Preferred Qualifications:
Experience deploying data solutions in enterprise-grade environments, especially in the shipping, logistics, or supply chain domain.
Familiarity with Databricks, Azure Data Factory, Azure Synapse, or similar cloud-native data tools.
Knowledge of MLOps practices, including model versioning, monitoring, and data drift detection.
Experience building or maintaining RESTful APIs for internal ML/data services using FastAPI, Flask, or similar frameworks.
Working knowledge of ML concepts, such as supervised learning, model evaluation, and retraining workflows.
Understanding of data governance, security, and compliance practices.
Passion for clean code, automation, and continuously improving data engineering systems to support machine learning and analytics at scale.
Maersk is committed to a diverse and inclusive workplace, and we embrace different styles of thinking. Maersk is an equal opportunities employer and welcomes applicants without regard to race, colour, gender, sex, age, religion, creed, national origin, ancestry, citizenship, marital status, sexual orientation, physical or mental disability, medical condition, pregnancy or parental leave, veteran status, gender identity, genetic information, or any other characteristic protected by applicable law. We will consider qualified applicants with criminal histories in a manner consistent with all legal requirements.
We are happy to support your need for any adjustments during the application and hiring process. If you need special assistance or an accommodation to use our website, apply for a position, or perform a job, please contact us by emailing accommodationrequests@maersk.com.
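The workflow-orchestration qualification in this posting (Airflow and similar tools) reduces to running pipeline tasks in dependency order. A stdlib sketch using `graphlib` (Python 3.9+); the task names are invented for illustration, not Maersk's actual pipeline:

```python
from graphlib import TopologicalSorter
from typing import Dict, List, Set

# Each task maps to its upstream dependencies, as an Airflow DAG would declare.
dag: Dict[str, Set[str]] = {
    "ingest": set(),
    "validate": {"ingest"},
    "transform": {"validate"},
    "load_warehouse": {"transform"},
    "train_features": {"transform"},
}

def run(dag: Dict[str, Set[str]]) -> List[str]:
    """Execute tasks in dependency order; returns the execution log."""
    order = list(TopologicalSorter(dag).static_order())
    log = []
    for task in order:
        log.append(task)  # a real runner would invoke the task's callable here
    return log
```

Airflow adds scheduling, retries, and distributed execution on top of exactly this ordering guarantee.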
Posted 1 week ago
4.0 years
0 Lacs
India
On-site
We are hiring a high-agency ML/AI Engineer to architect and deliver cutting-edge AI solutions for our enterprise clients. This isn't just another ML engineering role - you'll be the technical owner driving complex AI projects end-to-end, from ideation through production deployment and ongoing monitoring and improvement.
You'll spend your time:
50% building robust, scalable AI systems that solve real business problems
25% researching and prototyping innovative solutions using the latest AI advances
25% collaborating with clients and stakeholders to translate business needs into technical solutions
About thinkbridge
thinkbridge is how growth-stage companies can finally turn into tech disruptors. They get a new way there – with world-class technology strategy, development, maintenance, and data science all in one place. But solving technology problems like these involves a lot more than code. That’s why we encourage think’ers to spend 80% of their time thinking through solutions and 20% coding them. With an average client tenure of 4+ years, you won’t be hopping from project to project here – unless you want to. So, you really can get to know your clients and understand their challenges on a deeper level. At thinkbridge, you can expand your knowledge during work hours specifically reserved for learning, or even transition to a completely different role in the organization. It’s all about challenging yourself while you challenge small thinking.
thinkbridge is a place where you can:
Think bigger – because you have the time, opportunity, and support it takes to dig deeper and tackle larger issues.
Move faster – because you’ll be working with experienced, helpful teams who can guide you through challenges, quickly resolve issues, and show you new ways to get things done.
Go further – because you have the opportunity to grow professionally, add new skills, and take on new responsibilities in an organization that takes a long-term view of every relationship.
thinkbridge… there’s a new way there.
Why This Role Is Different
True Ownership: You'll be the technical architect making critical design decisions, not just implementing someone else's vision
Production Focus: We need someone who's deployed models/systems AND kept them running - monitoring drift, handling failures, improving performance
Diverse Projects: From GenAI applications (65%) to classical ML solutions (35%), across Retail, HRTech, Fintech, and Healthcare domains
Technical Architecture: Design systems and guide implementation decisions without the overhead of formal people management
What is expected of you?
As part of the job, you will be required to:
Architect end-to-end ML/AI solutions that actually work in production
Build and maintain production-grade systems with proper monitoring, alerting, and continuous improvement
Make strategic technical decisions on approach, tools, and implementation
Translate complex AI concepts into business value for clients
Set technical direction for project teams through architecture and best practices
Stay current with AI research and identify practical applications for client problems
If your beliefs resonate with these, you are looking at the right place!
Accountability – Finish what you started
Communication – Context-aware, proactive, and clean communication
Outcome – High throughput
Quality – High-quality work and consistency
Ownership – Go beyond requirements
Must-have technical skills
Strong Python proficiency with production ML experience
Hands-on experience deploying AND maintaining ML systems in production
Experience with both GenAI (LLMs, RAG systems) and classical ML techniques
Understanding of ML monitoring, drift detection, and model lifecycle management
Cloud deployment experience (Azure knowledge helpful; AWS experience highly valued)
Containerization and basic MLOps practices
Good-to-have technical skills
Experience fine-tuning open-source models to match or beat proprietary models
Advanced MLOps (CI/CD for ML, A/B testing, feature stores)
Published work (papers, blogs, open-source contributions)
Experience with streaming/real-time ML systems
What We're Really Looking For
Beyond technical skills, we need someone who:
Takes initiative and drives projects without waiting for instructions
Has actually felt the pain of their own technical decisions in production
Can explain "why this approach" to both engineers and business stakeholders
Thinks critically about when to use (and when NOT to use) GenAI
Has opinions about ML best practices based on real experience
Our Flagship Policies and Benefits:
Work from anywhere!
Flexible work hours
All leaves taken are paid leaves
Family insurance
Quarterly Collaboration Week
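The RAG systems this posting lists as a must-have pair a retrieval step with LLM generation. The retrieval half can be sketched in stdlib Python using a bag-of-words stand-in for real embeddings; a production stack would call an embedding model and a vector database instead, so everything here is an illustrative simplification:

```python
import math
from typing import Dict, List

def embed(text: str) -> Dict[str, float]:
    """Stand-in embedding: term frequencies (a real RAG system would
    call an embedding model here)."""
    counts: Dict[str, float] = {}
    for tok in text.lower().split():
        counts[tok] = counts.get(tok, 0.0) + 1.0
    return counts

def cosine(a: Dict[str, float], b: Dict[str, float]) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: List[str], k: int = 2) -> List[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]
```

In a full RAG pipeline the retrieved chunks would be injected into the LLM prompt as grounding context before generation.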
Posted 1 week ago
7.0 years
0 Lacs
Mumbai, Maharashtra
On-site
- 10+ years of professional or military experience, including a Bachelor's degree.
- 7+ years managing complex, large-scale projects with internal or external customers.
- Ability to deliver a ML/DL project from beginning to end: understanding the business need, aggregating data, exploring data, building and validating predictive models, and deploying completed models to deliver business impact to the organization.
- Skilled in using Deep Learning frameworks (MXNet, Caffe2, TensorFlow, Theano, CNTK, Keras) and ML tools (SparkML, Amazon Machine Learning) to build models for internal customers.
AWS Sales, Marketing, and Global Services (SMGS) is responsible for driving revenue, adoption, and growth from the largest and fastest-growing small- and mid-market accounts to enterprise-level customers, including the public sector. Excited by using massive amounts of data to develop Machine Learning (ML) and Deep Learning (DL) models? Want to help the largest global enterprises derive business value through the adoption of Artificial Intelligence (AI)? Eager to learn from many different enterprises’ use cases of AWS ML and DL? Thrilled to be a key part of Amazon, which has been investing in Machine Learning for decades, pioneering and shaping the world’s AI technology?
At AWS ProServe India LLP (“ProServe India”), we are helping large enterprises build ML and DL models on the AWS Cloud. We are applying predictive technology to large volumes of data and against a wide spectrum of problems. Our Professional Services organization works together with our internal customers to address business needs of AWS customers using AI. AWS Professional Services is a unique consulting team in ProServe India. We pride ourselves on being customer obsessed and highly focused on the AI enablement of our customers. If you have experience with AI, including building ML or DL models, we’d like to have you join our team.
You will get to work with an innovative company, with great teammates, and have a lot of fun helping our customers. If you do not live in a market where we have an open Data Scientist position, please feel free to apply. Our Data Scientists can live in any location where we have a Professional Services office. A successful candidate will be a person who enjoys diving deep into data, doing analysis, discovering root causes, and designing long-term solutions. It will be a person who likes to have fun, loves to learn, and wants to innovate in the world of AI.
Major responsibilities include:
• Understand the internal customer’s business need and guide them to a solution using our AWS AI Services, AWS AI Platforms, AWS AI Frameworks, and AWS AI EC2 Instances.
• Assist internal customers by delivering a ML/DL project from beginning to end, including understanding the business need, aggregating data, exploring data, building and validating predictive models, and deploying completed models to deliver business impact to the organization.
• Use Deep Learning frameworks like MXNet, Caffe2, TensorFlow, Theano, CNTK, and Keras to help our internal customers build DL models.
• Use SparkML and Amazon Machine Learning (AML) to help our internal customers build ML models.
• Work with our Professional Services Big Data consultants to analyze, extract, normalize, and label relevant data.
• Work with our Professional Services DevOps consultants to help our internal customers operationalize models after they are built.
• Assist internal customers with identifying model drift and retraining models.
• Research and implement novel ML and DL approaches, including using FPGAs.
This role is open for Mumbai/Pune/Bangalore/Chennai/Hyderabad/Delhi.
About the team
Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply.
If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.
Why AWS?
Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
Inclusive Team Culture
Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.
Mentorship & Career Growth
We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud.
10+ years of experience implementing IT platforms in a technical and analytical role.
Experience in consulting on, designing, and implementing serverless distributed solutions.
Experienced with databases (SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) and managing complex, large-scale customer-facing projects.
Experienced as a technical specialist in design and architecture, with expertise in cloud-based solutions (AWS or equivalent), systems, networks, and operating systems.
Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 2 weeks ago
7.0 years
0 Lacs
Bengaluru, Karnataka
On-site
- 7+ years of professional or military experience, including a Bachelor's degree.
- 7+ years managing complex, large-scale projects with internal or external customers.
- Ability to deliver a ML/DL project from beginning to end: understanding the business need, aggregating data, exploring data, building and validating predictive models, and deploying completed models to deliver business impact to the organization.
- Skilled in using Deep Learning frameworks (MXNet, Caffe2, TensorFlow, Theano, CNTK, Keras) and ML tools (SparkML, Amazon Machine Learning) to build models for internal customers.
AWS Sales, Marketing, and Global Services (SMGS) is responsible for driving revenue, adoption, and growth from the largest and fastest-growing small- and mid-market accounts to enterprise-level customers, including the public sector. Excited by using massive amounts of data to develop Machine Learning (ML) and Deep Learning (DL) models? Want to help the largest global enterprises derive business value through the adoption of Artificial Intelligence (AI)? Eager to learn from many different enterprises’ use cases of AWS ML and DL? Thrilled to be a key part of Amazon, which has been investing in Machine Learning for decades, pioneering and shaping the world’s AI technology?
At AWS ProServe India LLP (“ProServe India”), we are helping large enterprises build ML and DL models on the AWS Cloud. We are applying predictive technology to large volumes of data and against a wide spectrum of problems. Our Professional Services organization works together with our internal customers to address business needs of AWS customers using AI. AWS Professional Services is a unique consulting team in ProServe India. We pride ourselves on being customer obsessed and highly focused on the AI enablement of our customers. If you have experience with AI, including building ML or DL models, we’d like to have you join our team.
You will get to work with an innovative company, with great teammates, and have a lot of fun helping our customers. If you do not live in a market where we have an open Data Scientist position, please feel free to apply. Our Data Scientists can live in any location where we have a Professional Services office.
Key job responsibilities
A successful candidate will be a person who enjoys diving deep into data, doing analysis, discovering root causes, and designing long-term solutions. It will be a person who likes to have fun, loves to learn, and wants to innovate in the world of AI.
Major responsibilities include:
• Understand the internal customer’s business need and guide them to a solution using our AWS AI Services, AWS AI Platforms, AWS AI Frameworks, and AWS AI EC2 Instances.
• Assist internal customers by delivering a ML/DL project from beginning to end, including understanding the business need, aggregating data, exploring data, building and validating predictive models, and deploying completed models to deliver business impact to the organization.
• Use Deep Learning frameworks like MXNet, Caffe2, TensorFlow, Theano, CNTK, and Keras to help our internal customers build DL models.
• Use SparkML and Amazon Machine Learning (AML) to help our internal customers build ML models.
• Work with our Professional Services Big Data consultants to analyze, extract, normalize, and label relevant data.
• Work with our Professional Services DevOps consultants to help our internal customers operationalize models after they are built.
• Assist internal customers with identifying model drift and retraining models.
• Research and implement novel ML and DL approaches, including using FPGAs.
This role is open for Mumbai/Pune/Bangalore/Chennai/Hyderabad/Delhi.
About the team
Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply.
If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.
Why AWS?
Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
Inclusive Team Culture
Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.
Mentorship & Career Growth
We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud.
7+ years of experience implementing IT platforms in a technical and analytical role.
Experience in consulting on, designing, and implementing serverless distributed solutions.
Experienced with databases (SQL, NoSQL, Hadoop, Spark, Kafka, Kinesis) and managing complex, large-scale customer-facing projects.
Experienced as a technical specialist in design and architecture, with expertise in cloud-based solutions (AWS or equivalent), systems, networks, and operating systems.
Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Posted 2 weeks ago
1.0 years
0 Lacs
Rawatsar, Rajasthan, India
Remote
Every second, the internet gets messier. Content floods in from humans and machines alike—some helpful, some harmful, and most of it unstructured. Forums, blogs, knowledge bases, event pages, community threads: these are the lifeblood of digital platforms, but they also carry risk. Left unchecked, they can drift into chaos, compromise brand integrity, or expose users to misinformation and abuse. The scale is too big for humans alone, and AI isn't good enough to do it alone—yet. That's where we come in. Our team is rebuilding content integrity from the ground up by combining human judgment with generative AI. We don't treat AI like a sidekick or a threat. Every moderator on our team works side-by-side with GenAI tools to classify, tag, escalate, and refine content decisions at speed. The edge cases you annotate and the feedback you give train smarter systems, reduce false positives, and make AI moderation meaningfully better with every cycle. This isn't a job where you manually slog through a never-ending moderation queue. It's not an outsourced content cop role. You'll spend your days interacting directly with AI to make decisions, flag patterns, streamline workflows, and make sure the right content sees the light of day. If you're the kind of person who thrives on structured work, enjoys hunting down ambiguity, and finds satisfaction in operational clarity, this job will feel like a control panel for the future of content quality. You'll be joining a team obsessed with platform integrity and operational scale. Your job is to keep the machine running smoothly: managing queues, moderating edge cases, annotating training data, and making feedback loops tighter and faster. If you've used tools like ChatGPT to get real work done—not just writing poems or brainstorming ideas, but actually processing or classifying information—this is your next level. 
What You Will Be Doing
Review and moderate user- and AI-generated content using GenAI tools to enforce platform policies and maintain a safe, high-quality environment
Coordinate content workflows across tools and teams, ensuring timely processing, clear tracking, and smooth handoffs
Tag edge cases, annotate training data, and provide structured feedback to improve the accuracy and performance of AI moderation systems
What You Won’t Be Doing
A boring content moderation job focused on manually reviewing blog post after blog post
An entry-level admin role with low agency or impact, just checking boxes in a queue
AI Content Analyst Key Responsibilities
Drive continuous improvement of our AI-human content moderation system by identifying patterns, refining workflows, and providing critical feedback that directly enhances platform integrity and user trust.
Basic Requirements
At least 1 year of professional work experience
Hands-on experience using GenAI tools (e.g., ChatGPT, Claude, Gemini) in a professional, academic, or personal productivity context
Strong English writing skills
Nice-to-have Requirements
Experience with content moderation, trust and safety, or platform policy enforcement
Background in data labeling, annotation, or training data preparation
Familiarity with workflow management tools and structured feedback systems
About IgniteTech
If you want to work hard at a company where you can grow and be a part of a dynamic team, join IgniteTech! Through our portfolio of leading enterprise software solutions, we ignite business performance for thousands of customers globally. We’re doing it in an entirely remote workplace that is focused on building teams of top talent and operating in a model that provides challenging opportunities and personal flexibility. A career with IgniteTech is challenging and fast-paced. We are always looking for energetic and enthusiastic employees to join our world-class team.
We offer opportunities for personal contribution and promote career development. IgniteTech is an Affirmative Action, Equal Opportunity Employer that values the strength that diversity brings to the workplace.

There is so much to cover for this exciting role, and space here is limited. Hit the Apply button if you found this interesting and want to learn more. We look forward to meeting you!

Working with us
This is a full-time (40 hours per week), long-term position. The position is immediately available and requires entering into an independent contractor agreement with Crossover as a Contractor of Record. The compensation level for this role is $15 USD/hour, which equates to $30,000 USD/year assuming 40 hours per week and 50 weeks per year. The payment period is weekly. Consult www.crossover.com/help-and-faqs for more details on this topic.

Crossover Job Code: LJ-5593-LK-COUNTRY-AIContentAnaly.002
Posted 2 weeks ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
Join Altera, a leader in programmable logic technology, as we strive to become the #1 FPGA company. We are looking for a skilled Jr. Data Scientist to develop and deploy production-grade ML pipelines and infrastructure across the enterprise. This is a highly technical, hands-on role focused on building scalable, secure, and maintainable machine learning solutions within the Azure ecosystem.

As a member of our Data & Analytics team, you will work closely with other data scientists, ML specialists, and engineering teams to operationalize ML models using modern tooling such as Azure Machine Learning, Dataiku, and Kubeflow. You'll drive MLOps practices, automate workflows, and help build a foundation for responsible and reliable AI delivery.

Responsibilities
Design, build, and maintain automated ML pipelines from data ingestion through model training, validation, deployment, and monitoring using Azure Machine Learning, Kubeflow, and related tools.
Deploy and manage machine learning models in production environments using cloud-native technologies such as AKS (Azure Kubernetes Service), Azure Functions, and containerized environments.
Partner with data scientists to transform experimental models into robust, production-ready systems, ensuring scalability and performance.
Drive best practices for model versioning, CI/CD, testing, monitoring, and drift detection using Azure DevOps, Git, and third-party tools.
Work with large-scale datasets from enterprise sources using Azure Synapse Analytics, Azure Data Factory, and Azure Data Lake.
Build integrations with platforms such as Dataiku to support collaborative workflows and low-code user interactions while ensuring the underlying infrastructure is robust and auditable.
Set up monitoring pipelines to track model performance, ensure availability, manage retraining schedules, and respond to production issues.
Write clean, modular code with clear documentation, tests, and reusable components for ML workflows.

Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
3+ years of hands-on experience developing and deploying machine learning models in production environments.
Strong programming skills in Python, with experience in ML libraries such as scikit-learn, TensorFlow, PyTorch, or XGBoost.
Proven experience with the Microsoft Azure ecosystem, especially:
Azure Machine Learning (AutoML, ML Designer, SDK)
Azure Synapse Analytics and Data Factory
Azure Data Lake, Azure Databricks
Azure OpenAI and Cognitive Services
Experience with MLOps frameworks such as Kubeflow, MLflow, or Azure ML pipelines.
Familiarity with CI/CD tools like Azure DevOps, GitHub Actions, or Jenkins for model lifecycle automation.
Experience working with APIs, batch and real-time data pipelines, and cloud security practices.

Why Join Us?
Build and scale real-world ML systems on a modern Azure-based platform.
Help shape the AI and ML engineering foundation of a forward-looking organization.
Work cross-functionally with experts in data science, software engineering, and operations.
Enjoy a collaborative, high-impact environment where innovation is valued and supported.

Job Type: Regular
Shift: Shift 1 (India)
Primary Location: Ecospace 1
Additional Locations:

Posting Statement
All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.
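The drift-detection responsibility mentioned above is commonly implemented by comparing the distribution of a feature at training time against its distribution in production. As an illustrative sketch only (not Altera's actual tooling, and using the standard library rather than Azure ML's built-in data-drift monitors), here is a minimal Population Stability Index (PSI) check; the function name, bin count, and thresholds are assumptions chosen for the example:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Measure distribution shift between a training-time sample
    ("expected") and a production sample ("actual") of one numeric
    feature. Values are placed into equal-width bins derived from
    the training range; empty bins are smoothed so the log term
    stays finite. Common rule of thumb (an assumption, not a
    standard): PSI < 0.1 is stable, PSI > 0.25 is significant drift.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0  # guard against a constant feature

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = int((x - lo) / width)
            i = max(0, min(i, bins - 1))  # clamp out-of-range values
            counts[i] += 1
        # replace empty bins with half a count before normalizing
        return [(c or 0.5) / len(sample) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In a real Azure pipeline this comparison would typically run on a schedule against fresh inference logs, with a PSI breach triggering an alert or a retraining job.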
Posted 2 weeks ago
3.0 years
0 Lacs
Kochi, Kerala, India
On-site
STATUS: 37.5 hours per week, Permanent.
SALARY: Competitive and based on experience and qualifications.
LOCATION: Kochi, India

DUTIES AND RESPONSIBILITIES WILL INCLUDE:
Design and implement software systems that embed or integrate AI/ML models
Collaborate with data scientists to convert research models into production-grade code
Build and maintain pipelines for model training, validation, deployment, and monitoring
Optimize model inference for performance, scalability, and responsiveness
Develop reusable components, libraries, and APIs for ML workflows
Implement robust logging, testing, and CI/CD pipelines for ML-based applications
Monitor deployed models for performance drift and help manage retraining cycles

REQUIREMENTS
Essential requirements include:
Bachelor's or master's degree in computer science, AI/ML, Data Science, or a related field
3+ years of experience in software development, with solid coding skills in Python (and optionally C++)
Hands-on experience with machine learning frameworks
Strong understanding of data structures, algorithms, and system design
Experience building and deploying ML models in production environments
Familiarity with MLOps practices: model packaging, versioning, monitoring, A/B testing
Experience with RESTful APIs, microservices, or distributed systems
Proficiency in Git and collaborative development workflows

Desirable requirements:
Experience with cloud platforms
Familiarity with data engineering workflows
Exposure to deep learning model optimisation tools
Understanding of NLP, computer vision, or time-series forecasting

THE POSITION
IPSA Power (www.ipsa-power.com) develops and maintains IPSA, a power system analysis tool, and other products based on it. IPSA Power is part of TNEI (www.tneigroup.com), an independent specialist energy consultancy providing technical, strategic, planning, and environmental advice to companies and organisations operating within the energy sector.
The dedicated software and solutions team that develops IPSA and other tools based on it is based in Manchester and Kochi. We are looking for a software engineer with a strong foundation in AI/ML and solid software development skills to help us build intelligent, scalable systems that bring real-world machine learning applications to life. You will work closely with data scientists and engineers to develop, deploy, and optimize ML-driven software products. If you are passionate about clean code and about deploying ML models into production with high reliability, we would love to hear from you.

Why should you apply?
Join a world-class team in a rapidly growing industry
Have a hands-on opportunity to make a real difference in a small company
Excellent professional and personal development opportunities
Professional membership fees
Discretionary annual performance-based bonus
25 days annual leave
Additional day off on your birthday!

How to apply
Please apply using the 'Apply Now' form on the Careers Page on our website, and upload your CV and covering letter demonstrating why you are suitable for the role and any previous experience.

Closing date for applications: 20 June 2025. We shall be interviewing suitable candidates on a continuous basis; therefore, if you are planning to apply, we recommend that you do so without delay.
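One of the MLOps practices named in the requirements, A/B testing of models, is commonly implemented by deterministically hashing a stable user identifier so that each user is consistently routed to either the incumbent or the candidate model without any stored state. A minimal sketch under those assumptions (the function name, salt, and traffic share are illustrative, not part of IPSA Power's stack):

```python
import hashlib

def assign_variant(user_id: str, treatment_share: float = 0.1,
                   salt: str = "model-ab-experiment-1") -> str:
    """Route a user to the candidate model ("B") with probability
    treatment_share, else to the incumbent ("A"). Hashing the id
    with a per-experiment salt gives a stable, roughly uniform
    assignment: the same user always sees the same model within
    an experiment, and changing the salt reshuffles assignments
    for the next experiment.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "B" if bucket < treatment_share else "A"
```

The salt matters: without it, the same users would land in the treatment group of every experiment, biasing comparisons across experiments.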
Posted 2 weeks ago