About ClearDemand:
ClearDemand is a leading provider of AI-driven pricing, promotion, and markdown optimization solutions for retailers and suppliers. Our innovative platform helps businesses enhance profitability, improve demand forecasting, and drive data-driven decision-making. We empower organizations to maximize their revenue potential through intelligent pricing strategies.

About the Role:
We're looking for a Technical Program Manager who will play a pivotal role in driving projects and opportunities that are business-critical and global in scope. This role requires an individual who can define scalable processes and frameworks, has hands-on project/program management skills, possesses deep technical expertise, and demonstrates strong analytical capabilities. You will partner with the engineering manager of the data collection team to build a robust roadmap and execute against ambitious goals. You will work with the Customer Success team to bring the voice of the customer into the roadmap and drive prioritization. You will manage scope, resources, and timelines, and use technology effectively to drive the right outcomes. The ideal candidate is a strategic thinker, capable of managing complex projects from conception through production launch. You will gather business and technical requirements, write detailed specifications, drive project schedules, and set project management standards across the team. Your insights will be crucial in steering data collection strategies, optimizing operational efficiency, and influencing data-driven decision-making across the organization.

Key Responsibilities:
- Collaborate with engineering, operations, customer success, and program teams to define and execute the data collection roadmap, aligning with business objectives.
- Drive cross-functional projects that enhance the operational effectiveness of data collection efforts and identify new business opportunities.
- Develop and implement scalable processes for data extraction, processing, and quality assurance, ensuring data integrity across the data lifecycle.
- Partner with stakeholders to translate business and system requirements into detailed specifications and project plans.
- Lead the planning and execution of project goals, tracking progress, managing risks, and evolving strategies based on performance metrics.
- Cultivate strong customer relationships, manage your stakeholders, influence people, and act as an advocate for data collection needs and priorities.
- Analyze and present key operational insights to solve complex problems and craft effective, scalable solutions.
- Drive continuous improvement by measuring performance, identifying inefficiencies, and implementing data-driven enhancements.
- Communicate effectively with all stakeholders about status, risks, and execution details.

Required Skills, Experience, and Background:
- Adept at identifying issues and providing solutions, with the confidence to lead resolution efforts when necessary.
- Ability to articulate complex problems and propose well-structured solutions to technical and non-technical stakeholders.
- Effectively balances personal and team workloads to achieve business goals and meet project deadlines.
- Empowered to make decisions based on data quality reviews, guiding the team toward optimal outcomes.
- Previous experience working in an agile/scrum environment, with the flexibility to adapt to changing priorities.
- Strong programming and technical background, with experience in Unix/Linux environments, distributed systems, and large-scale web applications.
- Hands-on experience managing big data applications on cloud platforms such as Amazon Web Services (AWS).
- Deep understanding of the data lifecycle, from data collection and processing to quality assurance and reporting, to support building the team's roadmap.
- First-hand experience working with modern web technologies, data scraping, and processing methodologies.
- Familiarity with automation, analytics tools, and processes for results reporting and data validation.

Behavioral Skills:
- Exceptional attention to detail, ensuring the accuracy and integrity of data collection processes.
- Motivated by delivering high-quality outcomes and improving speed-to-market.
- Able to think beyond tactical execution to set the strategic direction for data collection initiatives.
- Works effectively with clients, stakeholders, and management personnel across different geographies.
- Comfortable navigating multiple demands, shifting priorities, ambiguity, and rapid change in a fast-paced environment.
- Capable of handling escalations and challenging situations with calmness and efficiency.
- Handles discordant views and thrives on having uncomfortable but constructive conversations, which is key to driving the right outcomes for our customers and our business.
About Clear Demand:
Clear Demand is the leader in AI-driven price and promotion optimization for retailers. Our platform transforms pricing from a challenge into a competitive advantage, helping retailers make smarter, data-backed decisions across the entire pricing lifecycle. By integrating competitive intelligence, pricing rules, and demand modelling, we enable retailers to maximize profit, drive growth, and enhance customer loyalty, all while maintaining pricing compliance and brand integrity. With Clear Demand, retailers stay ahead of the market, automate complex pricing decisions, and unlock new opportunities for growth.

Key Responsibilities:
- People management: lead a team of software engineers, data scientists (DS), data engineers (DE), and ML engineers (MLE) in the design, development, and delivery of software solutions.
- Program management: run program management functions to efficiently deliver ML projects to production and manage their operations.
- Work with business stakeholders and customers in the retail domain to execute the product vision using the power of AI/ML.
- Scope out business requirements by performing the necessary data-driven statistical analysis.
- Set goals and objectives using appropriate business metrics and constraints.
- Conduct exploratory analysis on large volumes of data, understand its statistical shape, and use the right visuals to drive and present the analysis.
- Analyse and extract relevant information from large amounts of data and derive useful insights at big-data scale.
- Create labelling manuals, work with labellers to manage ground-truth data, and perform feature engineering as needed.
- Work with software engineering teams, data engineers, and the ML operations team (data labellers, auditors) to deliver production systems built on your deep learning models.
- Select the right model; train, validate, test, and optimise neural network models; and keep improving our image and text processing models.
- Architecturally optimize deep learning models for efficient inference: reduce latency, improve throughput, and reduce memory footprint without sacrificing model accuracy.
- Establish scalable, efficient, automated processes for large-scale data analysis, model development, model validation, and model implementation.
- Create and enhance a model monitoring system that measures data distribution shifts and alerts when model performance degrades in production (see the sketch after this section).
- Streamline ML operations by designing human-in-the-loop workflows and collecting the labels and audit information from those workflows that feed into improved training and algorithm development.
- Maintain multiple versions of each model and ensure controlled model releases.
- Manage and mentor junior data scientists, providing guidance on best practices in data science methodologies and project execution.
- Lead cross-functional teams in the delivery of data-driven projects, ensuring alignment with business goals and timelines.
- Collaborate with stakeholders to define project objectives, deliverables, and timelines.

Qualifications & Experience:
- MS/PhD from a reputed institution, with a delivery focus.
- 5+ years of experience in data science, with a proven track record of delivering impactful data-driven solutions.
- Delivered AI/ML products and features to production and seen the complete cycle: scoping and analysis, data ops, modelling, MLOps, and post-deployment analysis.
- Expert in supervised and semi-supervised learning techniques.
- Hands-on with ML frameworks: PyTorch or TensorFlow.
- Hands-on with deep learning models.
- Developed and fine-tuned Transformer-based models (input/output metrics, sampling techniques).
- Deep understanding of Transformers and GNN models, and their related math and internals.
- Exhibits high coding standards and creates production-quality code with maximum efficiency.
- Hands-on data analysis and data engineering skills involving SQL, PySpark, etc.
- Exposure to ML and data services on the cloud: AWS, Azure, GCP.
- Understanding of the internals of computer hardware (CPU, GPU, TPU) is a plus.
- Ability to leverage hardware acceleration to optimize model execution (PyTorch Glow, cuDNN) is a plus.
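As an illustration of the model monitoring responsibility above, here is a minimal sketch of a data distribution shift check using the population stability index (PSI). This is not Clear Demand's actual monitoring system; the feature values, bin count, and alert threshold are assumptions chosen for the example.

```python
# Hypothetical sketch: compare a production feature sample against its
# training baseline with the population stability index (PSI) and flag drift.
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two 1-D samples of one feature."""
    # Bin edges come from the baseline distribution (quantiles).
    edges = np.quantile(baseline, np.linspace(0.0, 1.0, bins + 1))
    # Clamp production values into the baseline range so outliers land in the end bins.
    current = np.clip(current, edges[0], edges[-1])

    base_frac = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_frac = np.histogram(current, bins=edges)[0] / len(current)

    # Avoid division by zero / log(0) in sparse bins.
    base_frac = np.clip(base_frac, 1e-6, None)
    curr_frac = np.clip(curr_frac, 1e-6, None)
    return float(np.sum((curr_frac - base_frac) * np.log(curr_frac / base_frac)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    training_prices = rng.normal(10.0, 2.0, 50_000)    # baseline feature sample
    production_prices = rng.normal(11.5, 2.5, 5_000)   # shifted production sample

    score = psi(training_prices, production_prices)
    # 0.25 is a commonly used "significant shift" threshold; treat it as tunable.
    if score > 0.25:
        print(f"ALERT: distribution shift detected, PSI={score:.3f}")
    else:
        print(f"OK: PSI={score:.3f}")
```

In practice a check like this would run per feature on a schedule and feed an alerting system, alongside direct model performance metrics once ground-truth labels arrive.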
Why This Role Matters:
Data is the foundation of our business, and your work will ensure that we continue to deliver high-quality competitive intelligence at scale. Web platforms are constantly evolving and deploying sophisticated anti-bot measures; your job is to stay ahead of them. If you thrive on solving complex technical challenges and enjoy working with real-world data at immense scale, this role is for you.
We seek a Software Development Engineer with expertise in cloud infrastructure, big data, and web crawling technologies. This role bridges system reliability engineering with scalable data extraction, ensuring our infrastructure remains robust and capable of handling high-volume data collection. You will design resilient systems, optimize automation pipelines, and tackle challenges posed by advanced bot-detection mechanisms.

Key Responsibilities:
- Architect, deploy, and manage scalable cloud environments (AWS/GCP/DO) that support distributed data processing and handle terabyte-scale datasets and billions of records efficiently.
- Automate infrastructure provisioning, monitoring, and disaster recovery using tools like Terraform, Kubernetes, and Prometheus.
- Optimize CI/CD pipelines to ensure seamless deployment of web scraping workflows and infrastructure updates.
- Develop and maintain stealthy web scrapers using Puppeteer, Playwright, and headless Chromium browsers (see the sketch after this section).
- Reverse-engineer bot-detection mechanisms (e.g., TLS fingerprinting, CAPTCHA solving) and implement evasion strategies.
- Monitor system health, troubleshoot bottlenecks, and ensure 99.99% uptime for data collection and processing pipelines.
- Implement security best practices for cloud infrastructure, including intrusion detection, data encryption, and compliance audits.
- Partner with the data collection, ML, and SaaS teams to align infrastructure scalability with evolving data needs.
- Research emerging technologies to stay ahead of anti-bot trends and vendors such as Kasada, PerimeterX, Akamai, and Cloudflare.

Required Skills:
- 4–6 years of experience in site reliability engineering and cloud infrastructure management.
- Proficiency in Python and JavaScript for scripting and automation.
- Hands-on experience with Puppeteer/Playwright, headless browsers, and anti-bot evasion techniques.
- Knowledge of networking protocols, TLS fingerprinting, and CAPTCHA-solving frameworks.
- Experience with monitoring and observability tools such as Grafana, Prometheus, and Elasticsearch, plus familiarity with monitoring and optimizing resource utilization in distributed systems.
- Experience with data lake architectures and optimizing storage using formats such as Parquet, Avro, or ORC.
- Strong proficiency in cloud platforms (AWS, GCP, or Azure) and containerization/orchestration (Docker, Kubernetes).
- Deep understanding of infrastructure-as-code tools (Terraform, Ansible).
- Deep experience designing resilient data systems with a focus on fault tolerance, data replication, and disaster recovery strategies in distributed environments.
- Experience implementing observability frameworks, distributed tracing, and real-time monitoring tools.
- Excellent problem-solving abilities, with a collaborative mindset and strong communication skills.
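To make the scraping work above concrete, here is a minimal, hedged sketch of a headless scraper using Playwright's Python API. The target URL, user agent, and CSS selector are placeholders, and a real collection module would add proxy rotation, fingerprint management, retries, and politeness/compliance controls that this sketch omits.

```python
# Minimal headless-scraper sketch with Playwright (Python API).
# The URL, user agent, and CSS selector below are illustrative placeholders.
from playwright.sync_api import sync_playwright

def fetch_prices(url: str) -> list[str]:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        # A realistic context (user agent, viewport, locale) removes the most
        # trivial headless-browser signals; it is not full anti-bot evasion.
        context = browser.new_context(
            user_agent=("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                        "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36"),
            viewport={"width": 1366, "height": 768},
            locale="en-US",
        )
        page = context.new_page()
        page.goto(url, wait_until="networkidle", timeout=60_000)
        # Placeholder selector; a real scraper targets the site's actual markup.
        prices = page.locator(".price").all_inner_texts()
        browser.close()
        return prices

if __name__ == "__main__":
    print(fetch_prices("https://example.com/products"))
```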
Role & Responsibilities:
- Design, develop, and execute comprehensive test cases and test plans based on product requirements and user stories.
- Perform manual and automated testing to ensure software quality across web, mobile, and API layers.
- Identify, document, and track bugs and issues using tools like Jira, ensuring clear communication with development teams.
- Collaborate closely with developers, product owners, and other stakeholders during agile ceremonies to understand features and suggest appropriate test strategies.
- Build, maintain, and enhance automated test suites using frameworks such as Selenium, Cypress, or similar tools (see the API test sketch after this section).
- Conduct various types of testing, including regression, integration, performance, security, and usability testing as needed.
- Participate in code reviews to promote testing best practices, ensuring high standards of testability, scalability, and code quality.
- Serve as a quality advocate throughout the Software Development Life Cycle (SDLC), helping to ensure the timely delivery of reliable, high-quality releases.

Preferred Candidate Profile:
- 4-6 years of hands-on experience in quality assurance, testing web and mobile applications.
- Strong knowledge of QA methodologies, tools, and processes.
- Experience with test automation frameworks like Selenium, Cypress, or equivalent.
- Familiarity with API testing tools like Postman or similar.
- Experience working in an Agile/Scrum development process.
- Strong analytical and problem-solving skills with attention to detail.
- Good communication skills, both written and verbal.
- Bonus: experience testing B2B SaaS applications.
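As an illustration of the automated API testing mentioned above, here is a minimal pytest sketch using the requests library. The base URL, endpoints, parameters, and expected responses are assumptions for the example, not Clear Demand's actual API contract.

```python
# Hypothetical API regression tests with pytest + requests.
# BASE_URL and the /health and /v1/prices endpoints are placeholders.
import requests

BASE_URL = "https://api.example.com"

def test_health_endpoint_returns_ok():
    resp = requests.get(f"{BASE_URL}/health", timeout=10)
    assert resp.status_code == 200

def test_price_lookup_returns_expected_fields():
    resp = requests.get(f"{BASE_URL}/v1/prices", params={"sku": "ABC-123"}, timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    # Contract check: fields that downstream consumers depend on must be present.
    for field in ("sku", "price", "currency"):
        assert field in body

def test_unknown_sku_returns_404():
    resp = requests.get(f"{BASE_URL}/v1/prices", params={"sku": "DOES-NOT-EXIST"}, timeout=10)
    assert resp.status_code == 404
```

Tests in this style typically run in CI on every pull request (e.g., `pytest tests/api`) so regressions are caught before release.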
Job Summary:
We are seeking a highly motivated SDE 1 with solid experience in .NET and a passion for supporting and improving existing software platforms. In this role, you will work on stabilizing our current monolithic SaaS platform while addressing critical customer-facing issues, monitoring backend performance, and enhancing features as needed. Over time, you will also be involved in migrating to a new platform and expanding your skill set to include new technologies such as Python and modern tech stacks.

Key Responsibilities:
- Platform stabilization: focus on stabilizing and maintaining the existing monolithic SaaS platform, ensuring high availability, reliability, and performance.
- Customer-facing bug fixes: address and resolve P0/P1 (critical and high-priority) bugs reported by customers in a timely and efficient manner.
- Exception handling & monitoring: monitor system alarms and logs, proactively identifying backend exceptions and taking corrective action to resolve issues.
- Minor enhancements: address minor feature enhancements and improvements based on customer feedback or internal requirements.
- Platform migration support: assist in the gradual migration of the platform from a monolithic architecture to a more scalable, modern platform.
- Adaptation to new tech: over time, expand your skill set to include Python and contribute to development on the new technologies and platforms being adopted by the company.
- Collaboration: work closely with other developers, QA engineers, and product managers across geographical locations to ensure smooth deployment of fixes and features.
- Troubleshooting & debugging: investigate and troubleshoot complex issues across the platform, from frontend to backend, and ensure optimal resolution.

Qualifications & Required Skills:
- Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent work experience.
- 2-4 years of professional experience in software development.
- Strong proficiency in .NET technologies (C#, ASP.NET, etc.).
- Knowledge of frontend web technologies: JavaScript, HTML, CSS.
- Experience with modern JavaScript frameworks: React and/or Angular.
- Familiarity with MongoDB or other NoSQL databases.
- Experience with MS SQL Server and relational database design/queries.
- Hands-on troubleshooting with IIS (Internet Information Services) to resolve hosting and performance issues.
- Exposure to cloud platforms (Azure/AWS/GCP) is a plus.
- Python knowledge, or willingness to learn it over time.
- Understanding of monitoring and logging tools (e.g., Datadog, ELK stack) is a plus.
- Problem-solving & debugging: strong ability to debug and troubleshoot complex issues across different layers (frontend, backend, database).
- Adaptability & learning: willingness and eagerness to learn and adapt to new technologies over time, including Python and other modern stacks.
- Teamwork & communication: excellent verbal and written communication skills, with the ability to work effectively in a collaborative environment.
- Attention to detail: ability to focus on quality and thoroughness, ensuring customer-facing issues are resolved with minimal disruption.

Preferred:
- Must be open to working US hours.

Other Skills:
- Experience with microservices architecture is a plus.
- Familiarity with containerization (e.g., Docker) or orchestration tools (e.g., Kubernetes) is a plus.
- Exposure to CI/CD tools like Jenkins, GitLab CI, or similar is beneficial.
- Knowledge of DevOps practices or cloud-native development is an advantage.
What We Offer:
- Growth opportunity: expand your skill set, including learning Python and working on modern tech stacks in future projects.
- Learning environment: access to continuous learning through online resources, mentoring, and hands-on project work.
- Collaborative culture: be part of a supportive, inclusive, and diverse team where your contributions are valued.
- Health & wellness benefits: comprehensive healthcare coverage and wellness programs.
Key Responsibilities:
- Build and maintain high-TPS, reliable, performant, and cost-effective data collection and extraction modules using Node.js and Python, with streaming solutions like Kafka (see the sketch after this section).
- Deploy, maintain, and support these modules on the AWS and GCP clouds.
- Index, archive, and retain the necessary data in multiple persistence stores, such as object stores (S3), key-value stores (DynamoDB), and Elasticsearch, based on the use case.
- Manage the quality of collected data using data quality libraries built with SQL/Python/Spark on AWS Glue and exposed as monitoring dashboards in Amazon QuickSight and Kibana.
- Expose the collected data to downstream applications through REST APIs on a Node.js backend.
- Collaborate with engineers, researchers, and data implementation specialists to design and create advanced, elegant, and efficient end-to-end competitive intelligence solutions.

Qualifications & Experience:
- 3+ years of proven experience as a Software Development Engineer who has built, deployed, and operationally supported systems in production.
- Excellent knowledge of programming languages such as Node.js and Python.
- Strong understanding of software design patterns, algorithms, and data structures.
- Experience with SQL and NoSQL databases.
- Good communication and collaboration skills.
- Works with strong ownership and accountability.
- Ability to work in a fast-paced and dynamic environment.
- Experience writing high-volume, high-TPS, reliable crawlers and scrapers is a plus.
- Bachelor's or master's degree in computer science or a related field.
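As a rough illustration of the collection pipeline described above, here is a minimal sketch of a Kafka consumer that batches collected records and archives them to S3, using the kafka-python and boto3 libraries. The topic name, bucket, batch size, and record format are assumptions; a production module would add error handling, retries, schema validation, and metrics.

```python
# Hypothetical sketch: consume collected records from Kafka and archive batches to S3.
# Topic, bucket, and batch size are placeholders; AWS credentials come from the environment.
import json
import time

import boto3
from kafka import KafkaConsumer

TOPIC = "product-pages"          # assumed topic name
BUCKET = "example-data-archive"  # assumed S3 bucket
BATCH_SIZE = 500

def run() -> None:
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=["localhost:9092"],
        group_id="archiver",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
        enable_auto_commit=False,
    )
    s3 = boto3.client("s3")
    batch = []

    for message in consumer:
        batch.append(message.value)
        if len(batch) >= BATCH_SIZE:
            key = f"raw/{TOPIC}/{int(time.time())}.json"
            # Newline-delimited JSON keeps the object easy to query later (e.g. via Athena).
            body = "\n".join(json.dumps(rec) for rec in batch)
            s3.put_object(Bucket=BUCKET, Key=key, Body=body.encode("utf-8"))
            consumer.commit()  # commit offsets only after the batch is durably stored
            batch.clear()

if __name__ == "__main__":
    run()
```

Committing offsets only after the S3 write is the design choice that gives at-least-once delivery; downstream consumers would deduplicate if exactly-once semantics were needed.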
Job Summary:
Building on the foundation of the SDE-I role, the DE-II position takes on a greater level of responsibility and leadership. You'll play a crucial role in driving the evolution and efficiency of our data collection and analytics platform, which handles terabyte-scale data and billions of data points.

Key Responsibilities:
- Lead the design, development, and optimization of large-scale data pipelines and infrastructure using technologies like Apache Airflow, Spark, and Kafka (see the DAG sketch after this section).
- Architect and implement distributed data processing solutions that handle terabyte-scale datasets and billions of records efficiently across multi-region cloud infrastructure (AWS, GCP, DO).
- Develop and maintain real-time data processing solutions for high-volume data collection operations using technologies like Spark Streaming and Kafka.
- Optimize data storage strategies using technologies such as Amazon S3, HDFS, and Parquet/Avro file formats for efficient querying and cost management.
- Build and maintain high-quality ETL pipelines, ensuring robust data collection and transformation processes with a focus on scalability and fault tolerance.
- Collaborate with data analysts, researchers, and cross-functional teams to define and maintain data quality metrics, implement robust data validation, and enforce security best practices.
- Mentor junior engineers (SDE-I) and foster a collaborative, growth-oriented environment.
- Participate in technical discussions, contribute to architectural decisions, and proactively identify improvements for scalability, performance, and cost efficiency.
- Ensure application performance monitoring (APM) is in place, using tools like Datadog, New Relic, or similar to proactively monitor and optimize system performance, detect bottlenecks, and ensure system health.
- Implement effective data partitioning strategies and indexing for performance optimization in distributed databases such as DynamoDB, Cassandra, or HBase.
- Stay current with advancements in data engineering, orchestration tools, and emerging cloud technologies, continually enhancing the platform's capabilities.

Qualifications & Experience:
- 4-5+ years of hands-on experience with Apache Airflow and other orchestration tools for managing large-scale workflows and data pipelines.
- Expertise in AWS technologies (Athena, AWS Glue, DynamoDB), Apache Spark, PySpark, SQL, and NoSQL databases.
- Experience designing and managing distributed data processing systems that scale to terabyte- and billion-record datasets on cloud platforms like AWS, GCP, or DigitalOcean.
- Proficiency in web crawling frameworks, including Node.js, HTTP protocols, Puppeteer, Playwright, and Chromium for large-scale data extraction.
- Experience with monitoring and observability tools such as Grafana, Prometheus, and Elasticsearch, plus familiarity with monitoring and optimizing resource utilization in distributed systems.
- Strong understanding of infrastructure as code using Terraform, automated CI/CD pipelines with Jenkins, and event-driven architecture with Kafka.
- Experience with data lake architectures and optimizing storage using formats such as Parquet, Avro, or ORC.
- Strong background in optimizing query performance and data processing frameworks (Spark, Flink, or Hadoop) for efficient processing at scale.
- Knowledge of containerization (Docker, Kubernetes) and orchestration for distributed system deployments.
- Deep experience designing resilient data systems with a focus on fault tolerance, data replication, and disaster recovery strategies in distributed environments.
- Strong data engineering skills, including ETL pipeline development, stream processing, and distributed systems.
- Excellent problem-solving abilities, with a collaborative mindset and strong communication skills.
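As an illustration of the pipeline orchestration work described above, here is a minimal Apache Airflow DAG sketch with a daily extract-then-validate flow. The DAG id, schedule, and task bodies are assumptions; a real pipeline would call into the actual collection and Spark jobs rather than these placeholder functions.

```python
# Hypothetical daily ETL DAG sketch for Apache Airflow 2.x.
# The dag_id, schedule, and task logic are placeholders for illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Placeholder: a real pipeline would trigger the collection job here
    # (e.g. submit a Spark job or call the scraping service) for the run date.
    print(f"extracting data for {context['ds']}")

def validate(**context):
    # Placeholder: row counts, null checks, and schema checks would run here,
    # failing the task (and alerting) if data quality thresholds are breached.
    print(f"validating data for {context['ds']}")

with DAG(
    dag_id="daily_price_collection",  # assumed name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    tags=["data-collection"],
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    validate_task = PythonOperator(task_id="validate", python_callable=validate)

    extract_task >> validate_task  # validation runs only after extraction succeeds
```

Keeping extraction and validation as separate tasks lets failed validations be retried or investigated without re-running the (expensive) extraction step.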