About ClearDemand:
ClearDemand is a leading provider of AI-driven pricing, promotion, and markdown optimization solutions for retailers and suppliers. Our platform helps businesses enhance profitability, improve demand forecasting, and drive data-driven decision-making. We empower organizations to maximize their revenue potential through intelligent pricing strategies.

About the Role:
We're looking for a Technical Program Manager who will play a pivotal role in driving projects and opportunities that are business-critical and global in scope. This role requires an individual who can define scalable processes and frameworks, has hands-on project/program management skills, possesses deep technical expertise, and demonstrates strong analytical capabilities. You will partner with the engineering manager of the data collection team to build a robust roadmap and execute against ambitious goals, and work with the Customer Success team to bring the voice of the customer into the roadmap and drive prioritization. You will manage scope, resources, and timelines, and use technology effectively to drive the right outcomes.

The ideal candidate is a strategic thinker, capable of managing complex projects from conception through production launch. You will gather business and technical requirements, write detailed specifications, drive project schedules, and set project management standards across the team. Your insights will be crucial in steering data collection strategies, optimizing operational efficiency, and influencing data-driven decision-making across the organization.

Key Responsibilities:
- Collaborate with engineering, operations, customer success, and program teams to define and execute the data collection roadmap, aligning with business objectives.
- Drive cross-functional projects that enhance the operational effectiveness of data collection efforts and identify new business opportunities.
- Develop and implement scalable processes for data extraction, processing, and quality assurance, ensuring data integrity across the data lifecycle (a minimal sketch of such a quality gate follows this posting).
- Partner with stakeholders to translate business and system requirements into detailed specifications and project plans.
- Lead the planning and execution of project goals, tracking progress, managing risks, and evolving strategies based on performance metrics.
- Cultivate strong customer relationships, manage your stakeholders, influence people, and act as an advocate for data collection needs and priorities.
- Analyze and present key operational insights to solve complex problems and craft effective, scalable solutions.
- Drive continuous improvement by measuring performance, identifying inefficiencies, and implementing data-driven enhancements.
- Communicate effectively with all stakeholders about status, risks, and execution details.

Required Skills, Experience, and Background:
- Adept at identifying issues and providing solutions, with the confidence to lead resolution efforts when necessary.
- Ability to articulate complex problems and propose well-structured solutions to technical and non-technical stakeholders.
- Ability to balance personal and team workloads to achieve business goals and meet project deadlines.
- Empowered to make decisions based on data quality reviews, guiding the team toward optimal outcomes.
- Previous experience working in an agile/scrum environment, with the flexibility to adapt to changing priorities.
- Strong programming and technical background, with experience in Unix/Linux environments, distributed systems, and large-scale web applications.
- Hands-on experience managing big data applications on cloud platforms such as Amazon Web Services (AWS).
- Deep understanding of the data lifecycle, from data collection and processing to quality assurance and reporting, to support building the team's roadmap.
- First-hand experience working with modern web technologies, data scraping, and processing methodologies.
- Familiarity with automation, analytics tools, and processes for results reporting and data validation.

Behavioral Skills:
- Exceptional attention to detail, ensuring the accuracy and integrity of data collection processes.
- Motivated by delivering high-quality outcomes and improving speed to market.
- Able to think beyond tactical execution to set the strategic direction for data collection initiatives.
- Works effectively with clients, stakeholders, and management personnel across different geographies.
- Comfortable navigating multiple demands, shifting priorities, ambiguity, and rapid change in a fast-paced environment.
- Capable of handling escalations and challenging situations with calmness and efficiency.
- Able to handle discordant views and have uncomfortable but constructive conversations when that is what it takes to drive the right outcome for our customers and our business.
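The posting above does not prescribe any particular tooling, but to make the data-quality-assurance responsibility concrete, here is a minimal sketch of a batch quality gate in Python. The record schema (sku, price, currency, collected_at) and the 98% pass-rate threshold are purely illustrative assumptions, not part of the role description.

```python
"""Minimal sketch of a data-quality gate for collected records.
Field names and thresholds are hypothetical illustrations only."""
from dataclasses import dataclass

REQUIRED_FIELDS = ("sku", "price", "currency", "collected_at")  # hypothetical schema


@dataclass
class QualityReport:
    total: int
    passed: int

    @property
    def pass_rate(self) -> float:
        return self.passed / self.total if self.total else 0.0


def is_valid(record: dict) -> bool:
    """A record passes if every required field is present and the price parses."""
    if any(record.get(f) in (None, "") for f in REQUIRED_FIELDS):
        return False
    try:
        return float(record["price"]) >= 0
    except (TypeError, ValueError):
        return False


def run_quality_gate(records: list[dict], min_pass_rate: float = 0.98) -> QualityReport:
    """Fail the batch (raise) when the pass rate drops below the threshold."""
    report = QualityReport(total=len(records), passed=sum(map(is_valid, records)))
    if report.pass_rate < min_pass_rate:
        raise ValueError(f"quality gate failed: pass rate {report.pass_rate:.2%}")
    return report
```

A real pipeline would typically add schema validation, deduplication, and per-source thresholds on top of a simple gate like this.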
About Clear Demand:
Clear Demand is the leader in AI-driven price and promotion optimization for retailers. Our platform transforms pricing from a challenge into a competitive advantage, helping retailers make smarter, data-backed decisions across the entire pricing lifecycle. By integrating competitive intelligence, pricing rules, and demand modelling, we enable retailers to maximize profit, drive growth, and enhance customer loyalty, all while maintaining pricing compliance and brand integrity. With Clear Demand, retailers stay ahead of the market, automate complex pricing decisions, and unlock new opportunities for growth.

Key Responsibilities:
- People management: lead a team of software engineers, data scientists, data engineers, and ML engineers in the design, development, and delivery of software solutions.
- Program management: act as a strong program leader who has run program management functions to deliver ML projects to production efficiently and manage their operations.
- Work with business stakeholders and customers in the retail domain to execute the product vision using the power of AI/ML.
- Scope out business requirements by performing the necessary data-driven statistical analysis.
- Set goals and objectives using appropriate business metrics and constraints.
- Conduct exploratory analysis on large volumes of data, understand its statistical shape, and use the right visuals to drive and present the analysis.
- Analyse and extract relevant information from large amounts of data and derive useful insights at big-data scale.
- Create labelling manuals, work with labellers to manage ground-truth data, and perform feature engineering as needed.
- Work with software engineering teams, data engineers, and the ML operations team (data labellers, auditors) to deliver production systems built on your deep learning models.
- Select the right model; train, validate, test, and optimise neural-net models; and keep improving our image and text processing models.
- Architecturally optimize deep learning models for efficient inference: reduce latency, improve throughput, and reduce memory footprint without sacrificing model accuracy.
- Establish scalable, efficient, automated processes for large-scale data analysis, model development, model validation, and model implementation.
- Create and enhance a model-monitoring system that measures data distribution shifts and alerts when model performance degrades in production (a minimal drift-check sketch follows this posting).
- Streamline ML operations by designing human-in-the-loop workflows and collecting the labels and audit information from those workflows that can feed back into training and algorithm development.
- Maintain multiple versions of each model and ensure controlled model releases.
- Manage and mentor junior data scientists, providing guidance on best practices in data science methodologies and project execution.
- Lead cross-functional teams in the delivery of data-driven projects, ensuring alignment with business goals and timelines.
- Collaborate with stakeholders to define project objectives, deliverables, and timelines.

Qualifications & Experience:
- MS/PhD from a reputed institution, with a delivery focus.
- 5+ years of experience in data science, with a proven track record of delivering impactful data-driven solutions.
- Has delivered AI/ML products and features to production, covering the complete cycle: scoping and analysis, data ops, modelling, MLOps, and post-deployment analysis.
- Expert in supervised and semi-supervised learning techniques.
- Hands-on with ML frameworks: PyTorch or TensorFlow.
- Hands-on with deep learning models; has developed and fine-tuned Transformer-based models (input/output metrics, sampling techniques).
- Deep understanding of Transformers and GNN models and their related math and internals.
- Exhibits high coding standards and creates production-quality code with maximum efficiency.
- Hands-on data analysis and data engineering skills involving SQL, PySpark, etc.
- Exposure to ML and data services on the cloud: AWS, Azure, GCP.
- Understanding of computer hardware internals (CPU, GPU, TPU) is a plus.
- Ability to leverage hardware acceleration (PyTorch Glow, cuDNN) to optimize model execution is a plus.
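To illustrate the model-monitoring responsibility above, here is a minimal sketch of one common drift check, the Population Stability Index (PSI), using only NumPy. The ten-bin layout and the 0.2 alert threshold are conventional rules of thumb, assumptions for illustration rather than values taken from this posting.

```python
"""Minimal sketch of a data-drift check via the Population Stability
Index (PSI). The 0.2 alert threshold is a common rule of thumb, not a
value prescribed by this posting."""
import numpy as np


def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a training-time feature sample and a production sample."""
    # Bin edges come from the expected (training) distribution.
    edges = np.quantile(expected, np.linspace(0.0, 1.0, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range production values
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Small epsilon avoids log(0) and division by zero in empty bins.
    e_frac, a_frac = e_frac + 1e-6, a_frac + 1e-6
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, 10_000)  # feature at training time
    prod = rng.normal(0.5, 1.2, 10_000)   # shifted production distribution
    score = psi(train, prod)
    print(f"PSI = {score:.3f}", "-> ALERT" if score > 0.2 else "-> ok")
```

In a production monitor this check would run per feature on a schedule, with scores pushed to an alerting system alongside model-performance metrics.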
Why This Role Matters:
Data is the foundation of our business, and your work will ensure that we continue to deliver high-quality competitive intelligence at scale. Web platforms are constantly evolving and deploying sophisticated anti-bot measures; your job is to stay ahead of them. If you thrive on solving complex technical challenges and enjoy working with real-world data at immense scale, this role is for you.

We seek a Software Development Engineer with expertise in cloud infrastructure, big data, and web crawling technologies. This role bridges site reliability engineering with scalable data extraction, ensuring our infrastructure remains robust and capable of handling high-volume data collection. You will design resilient systems, optimize automation pipelines, and tackle challenges posed by advanced bot-detection mechanisms.

Key Responsibilities:
- Architect, deploy, and manage scalable cloud environments (AWS/GCP/DO) that support distributed data processing and handle terabyte-scale datasets and billions of records efficiently.
- Automate infrastructure provisioning, monitoring, and disaster recovery using tools like Terraform, Kubernetes, and Prometheus.
- Optimize CI/CD pipelines to ensure seamless deployment of web scraping workflows and infrastructure updates.
- Develop and maintain stealthy web scrapers using Puppeteer, Playwright, and headless Chromium browsers (a minimal sketch follows this posting).
- Reverse-engineer bot-detection mechanisms (e.g., TLS fingerprinting, CAPTCHA solving) and implement evasion strategies.
- Monitor system health, troubleshoot bottlenecks, and ensure 99.99% uptime for data collection and processing pipelines.
- Implement security best practices for cloud infrastructure, including intrusion detection, data encryption, and compliance audits.
- Partner with data collection, ML, and SaaS teams to align infrastructure scalability with evolving data needs.
- Research emerging technologies to stay ahead of anti-bot trends, including vendors such as Kasada, PerimeterX, Akamai, and Cloudflare.

Required Skills:
- 4–6 years of experience in site reliability engineering and cloud infrastructure management.
- Proficiency in Python and JavaScript for scripting and automation.
- Hands-on experience with Puppeteer/Playwright, headless browsers, and anti-bot evasion techniques.
- Knowledge of networking protocols, TLS fingerprinting, and CAPTCHA-solving frameworks.
- Experience with monitoring and observability tools such as Grafana, Prometheus, and Elasticsearch, plus familiarity with monitoring and optimizing resource utilization in distributed systems.
- Experience with data lake architectures and optimizing storage using formats such as Parquet, Avro, or ORC.
- Strong proficiency in cloud platforms (AWS, GCP, or Azure) and containerization/orchestration (Docker, Kubernetes).
- Deep understanding of infrastructure-as-code tools (Terraform, Ansible).
- Deep experience designing resilient data systems with a focus on fault tolerance, data replication, and disaster recovery in distributed environments.
- Experience implementing observability frameworks, distributed tracing, and real-time monitoring tools.
- Excellent problem-solving abilities, with a collaborative mindset and strong communication skills.
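As a small illustration of the headless-browser work described above, here is a minimal Playwright-for-Python fetch. The target URL, user agent, and viewport are placeholders; a production scraper would layer in proxy rotation, retries, and the evasion and compliance measures the posting alludes to.

```python
"""Minimal sketch of a headless-browser fetch with Playwright (Python).
The URL and user agent are placeholders, not a real collection target."""
from playwright.sync_api import sync_playwright

TARGET_URL = "https://example.com/products"  # placeholder target
UA = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 Chrome/120.0 Safari/537.36"


def fetch_page_html(url: str) -> str:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        # A realistic user agent and viewport make the context look less like
        # a default headless browser; real scrapers layer many more measures.
        context = browser.new_context(
            user_agent=UA, viewport={"width": 1366, "height": 768}
        )
        page = context.new_page()
        # Wait until network activity settles so dynamically rendered
        # content is present in the returned HTML.
        page.goto(url, wait_until="networkidle")
        html = page.content()
        browser.close()
        return html


if __name__ == "__main__":
    print(len(fetch_page_html(TARGET_URL)), "characters of HTML fetched")
```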