Job Summary: We are looking for a highly motivated and skilled Assistant Manager with strong experience in the Data Analysis domain to join our growing team, with demonstrated experience in handling large data sets and relational databases. Job Responsibilities: Apply strong analytical and statistical skills to take on complex business problems and a wide variety of quantitative data, and provide structured, data-supported, practical solutions toward optimizing shipping problems. Serve as a trusted partner to business units, translating complex analytical results into business insights, and create the tools/reports needed to monitor and ensure program success. Ability to interpret complex data sets with strong problem-solving skills. High level of attention to detail to ensure accuracy in data analysis and documentation. Proven track record of using analytics to drive significant business impact, creating deep dives and presenting work effectively. Ability to work with stakeholders to assess potential risks. Ability to analyze existing tools and databases and provide software solution recommendations. Ability to translate business requirements into non-technical, lay terms. Required Skills: B.Tech in a quantitative field such as Engineering, Computer Science, or Mathematics. 5+ years of relevant analytics experience. Strong analytical skills with expertise in SQL and Excel. Hands-on proficiency in Python. A/B testing and strong quantitative ability (prior experience in business performance analytics roles is good to have). Campaign/promo measurement. Good communication skills, with experience working in a fast-paced environment with cross-functional teams. We are an equal opportunity employer and value diversity at our company. We do not discriminate based on race, religion, colour, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.
Experience: 4 to 6 years Job Summary: We are looking for a Senior Analyst with strong expertise in Full Stack Development to join our growing team. The ideal candidate will have hands-on experience with both frontend and backend technologies, and a passion for building scalable, efficient, and user-centric web applications. In this role, you will work closely with cross-functional teams to design, develop, and maintain high-performance applications, ensuring seamless integration and delivery of features. Job Responsibilities: Implement the full software development process. Work with data streams and APIs to provide enhanced automation capabilities. Develop flowcharts, layouts, and documentation to identify requirements and solutions. Gather user requirements and technical requirements. Write well-designed, testable code. Develop software verification plans and quality assurance procedures. Document and maintain software functionality. Required Skills: Frontend Technologies: Proficiency in HTML, CSS, and JavaScript, along with frameworks like React or Angular. Backend Technologies: Strong knowledge of Node.js and Python, and experience with backend frameworks such as Express.js or Django. Docker: Experience with Docker for containerization, including creating and managing Docker containers, Docker Compose, and Docker Swarm. Database Management: Proficiency in SQL (knowledge of NoSQL databases is an advantage) and experience with data warehousing solutions such as Azure Synapse Analytics. Cloud Platforms: Knowledge of Azure Databricks cloud services preferred, specifically services related to data storage, processing, and deployment. Version Control: Proficiency in using Git and GitHub for version control and collaboration. We are an equal opportunity employer and value diversity at our company.
We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.
Job Description: We are looking for a highly motivated and skilled L4 Data Engineer Manager, particularly with Databricks experience. This role is responsible for leading data engineering projects, overseeing a team of data engineers, and designing data architectures on platforms like Databricks. Responsibilities: Leadership and Team Management: Lead, mentor, and develop a team of data engineers, ensuring high performance and career growth. Collaborate with stakeholders across data science, analytics, and engineering teams to deliver high-impact data solutions. Oversee the data engineering project lifecycle, from requirements gathering to deployment, ensuring quality and timeliness. Data Architecture and Strategy: Develop scalable data architectures and solutions on Databricks for ETL processes, data warehousing, and big data processing. Define and enforce data governance policies, best practices, and standards for data processing and management. Design data flow pipelines for efficient data ingestion, storage, and processing in cloud environments (e.g., AWS, Azure, GCP). Databricks Management and Optimization: Optimize Databricks clusters for cost and performance efficiency, leveraging cluster scaling and resource management best practices. Implement advanced data transformations and data models within Databricks using Spark and Delta Lake. Ensure integration between Databricks and other data tools, such as data lakes, SQL databases, and BI tools. Data Quality and Security: Monitor and ensure data quality, reliability, and security within the Databricks environment. Implement data validation checks, data profiling, and error-handling mechanisms. Collaborate with security teams to ensure compliance with data privacy regulations and internal security standards. Technical Development and Innovation: Stay updated with the latest Databricks capabilities and cloud technology trends to introduce innovative data engineering practices.
Develop reusable and efficient code, libraries, and tools to automate and streamline data workflows. Troubleshoot and resolve complex data pipeline issues and provide continuous improvements in performance and reliability. Skills: Advanced experience with Databricks, PySpark and data pipeline frameworks. Proficiency in Python, SQL, and/or Scala. Familiarity with cloud platforms like AWS, Azure, or GCP. 8+ years in data engineering, with 2+ years in a leadership or managerial role. Strong experience in data architecture, data pipeline optimization, and data modeling. Proven experience in managing large-scale data processing systems and ETL pipelines. Familiarity with Airflow for workflow orchestration and experience with Linux administration and shell scripting. Excellent communication skills to collaborate effectively with cross-functional teams. Strong problem-solving abilities and attention to detail.
Job Description: We are looking for a talented and experienced Senior Data Scientist with a minimum of 4 years of professional experience in model development and expertise in Gen AI projects. As a Senior Data Scientist, you will be responsible for developing advanced machine learning models, conducting exploratory data analysis (EDA), performing feature selection/reduction, and utilizing cutting-edge technologies to deliver high-quality solutions. The ideal candidate should possess strong programming proficiency in Python and experience with cloud platforms like GCP or equivalent, as well as visualization tools like Qlik Sense, Power BI, Looker Studio, or Tableau. Job Responsibility: Data Scientist, 3+ years (Mandatory Skills): Model development (regression and classification); should have strong experience in performing EDA, feature selection/feature reduction, building models, and evaluating their performance. Should have worked on a Gen AI project. Strong programming proficiency in Python with mastery of data science libraries (Pandas, NumPy, and scikit-learn, plus XGBoost or PyTorch). GCP (BQ, Vertex AI) or other equivalent cloud. Visualization (Qlik Sense / Looker Studio / Power BI / Tableau). Skills: GenAI & RAG, Model development (regression and classification), Machine Learning, Python, GCP / other equivalent cloud, Visualization, LLM Model building. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. Interview Date: 17th May 2025 (Saturday), face-to-face in-person interview in Chennai.
Experience: 4 to 6 years Location: Bengaluru Job Description: We are looking for a Senior Analyst with strong knowledge of full stack development (FSD), frontend, and backend skill sets to join our growing team. Responsibilities: Implement the full software development process. Work with data streams and APIs to provide enhanced automation capabilities. Develop flowcharts, layouts, and documentation to identify requirements and solutions. Gather user requirements and technical requirements. Write well-designed, testable code. Develop software verification plans and quality assurance procedures. Document and maintain software functionality. Skills: Frontend Technologies: Proficiency in HTML, CSS, and JavaScript, along with frameworks like React or Angular. Backend Technologies: Strong knowledge of Node.js and Python, and experience with backend frameworks such as Express.js or Django. Docker: Experience with Docker for containerization, including creating and managing Docker containers, Docker Compose, and Docker Swarm. Database Management: Proficiency in SQL (knowledge of NoSQL databases is an advantage) and experience with data warehousing solutions such as Azure Synapse Analytics. Cloud Platforms: Knowledge of Azure Databricks cloud services preferred, specifically services related to data storage, processing, and deployment. Version Control: Proficiency in using Git and GitHub for version control and collaboration.
Job Description: Responsibilities: Analyze marketing data to optimize campaigns, customer segmentation, and performance metrics. Conduct A/B testing to evaluate marketing strategies and user experiences. Develop and maintain dashboards in Power BI for tracking key performance indicators (KPIs). Utilize SQL to extract, manipulate, and analyze large datasets for actionable insights. Monitor and report on website and campaign performance using Google Analytics. Collaborate with cross-functional teams to provide data-driven recommendations. Requirements: Proficiency in SQL, Excel, A/B Testing, Power BI, and Google Analytics. Strong analytical skills and ability to interpret complex datasets. Experience with digital marketing channels, customer segmentation, and campaign tracking. Ability to communicate insights effectively to both technical and non-technical stakeholders. Skills Required: Marketing Analytics, SQL, A/B Testing, Power BI, Google Analytics, Communication, Tableau, Dashboard, Data Pipeline, Web Analytics, Amplitude
Job Description: Role Overview: We are looking for an experienced and strategic Assistant Manager Total Rewards with 6 to 10 years of experience to lead and manage the organization's compensation and benefits programs. The ideal candidate will have a proven track record in designing competitive total rewards strategies, including sales incentive schemes and retention programs, while ensuring compliance and alignment with organizational goals. Qualifications and Skills: 6 to 10 years of experience in compensation, benefits, sales incentives, and retention programs. Bachelor's degree in Human Resources, Business Administration, or a related field (MBA/PGDM preferred). Strong expertise in compensation frameworks, benefits design, and statutory compliance. Proficiency in advanced Excel (e.g., pivot tables, VLOOKUP, data modeling) and familiarity with HRIS platforms. Excellent communication, analytical, and stakeholder management skills. Proven ability to lead and implement strategic rewards initiatives in a dynamic environment. Why Join Us? Opportunity to lead and shape impactful total rewards strategies. A collaborative work environment with growth opportunities. Competitive benefits and a platform for professional advancement. If you are a strategic thinker passionate about designing and managing effective total rewards programs, we encourage you to apply and be part of our dynamic team!
Job Description: We are seeking a highly experienced and skilled Senior Data Engineer to join our dynamic team. This role requires hands-on experience with databases such as Snowflake and Teradata, as well as advanced knowledge in various data science and AI techniques. The successful candidate will play a pivotal role in driving data-driven decision-making and innovation within our organization. Roles and Responsibilities: Design, develop, and implement advanced machine learning models to solve complex business problems. Apply AI techniques and generative AI models to enhance data analysis and predictive capabilities. Utilize Tableau and other visualization tools to create insightful and actionable dashboards for stakeholders. Manage and optimize large datasets using Snowflake and Teradata databases. Collaborate with cross-functional teams to understand business needs and translate them into analytical solutions. Stay updated with the latest advancements in data science, machine learning, and AI technologies. Mentor and guide junior data scientists, fostering a culture of continuous learning and development. Communicate complex analytical concepts and results to non-technical stakeholders effectively. We are an equal opportunity employer and value diversity at our company. We do not discriminate based on race, religion, colour, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.
Job Description: We are seeking an experienced Data Engineer to maintain and optimize our on-premises data warehouse environment. This role involves day-to-day support of production systems, ownership of ETL pipelines, and delivery of automated operational reports. The ideal candidate will have deep expertise in traditional data warehousing methodologies and SQL, and be comfortable working in client-facing environments. Job Responsibility: Data Warehousing & ETL: Proven experience with on-premises data warehousing solutions (e.g., MSSQL Server, Oracle). Hands-on expertise in ETL tools such as SSIS, Informatica, or similar. SQL & Database Management: Strong T-SQL skills for query development and optimization. Experience with creating and maintaining stored procedures, triggers, and complex queries. Understanding of database concepts like indexing, partitioning, and locking. Production Environment Experience: Comfortable working in a high-pressure environment with strict SLA requirements. Proven track record in handling production issues, performing root-cause analysis, and implementing solutions. Reporting & BI Tools: Knowledge of operational reporting workflows; ability to manage automated reporting pipelines. Familiarity with reporting tools (e.g., SSRS, Crystal Reports, Power BI) for troubleshooting and support, though the primary focus is not dashboard creation. Problem-Solving & Troubleshooting: Ability to quickly diagnose data issues in complex data flows. Strong analytical skills for performance tuning and error resolution. Communication & Collaboration: Excellent verbal and written communication skills for coordinating with business stakeholders. Ability to work cross-functionally with multiple teams, including analysts, developers, and IT operations. We are an equal opportunity employer and value diversity at our company.
We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.
Job Description: We are seeking a highly skilled and motivated Data Scientist with strong expertise in DS Concepts & GenAI. You will play a key role in leading and managing projects that develop AI-powered applications, tools, and workflows to solve real-world business problems. Responsibilities: Data Science & Analytics: Contribute to the development of machine learning models, statistical analyses, and AI solutions. Review and validate models for accuracy, scalability, and business relevance. Ensure best practices in data preprocessing, feature engineering, and model evaluation. Stay updated with the latest advancements in data science, ML, and AI technologies. GenAI & Agentic AI (Good to have): Good knowledge of LLMs (Large Language Models), RAG (Retrieval-Augmented Generation), GenAI models, and Agentic AI frameworks. Implement Agentic AI systems that can autonomously perform tasks, reason, and adapt. Lead projects involving text generation, chatbots, synthetic data creation, and AI-driven workflows. Project & Team Management: Manage data science projects, ensuring timely delivery and alignment with business goals. Coordinate between data scientists, engineers, and business stakeholders to define project requirements. Track project progress, identify risks, and propose mitigation strategies. Support the hiring, onboarding, and mentoring of junior data scientists. Tools & Technologies: Evaluate and recommend data science tools, platforms, and libraries. Assist in managing cloud-based ML environments (AWS, Azure, GCP) or on-prem solutions. Support MLOps processes for model deployment and monitoring. Skills: Strong working experience in Python/R, SQL, data science libraries (Pandas, NumPy, Scikit-learn, etc.), and ML frameworks (TensorFlow, PyTorch, etc.). Familiarity with big data tools (Spark, Hadoop) and cloud platforms (AWS/Azure/GCP). Knowledge of MLOps, CI/CD pipelines, and model deployment. Proficiency in data visualization (Power BI/Tableau, Matplotlib/Seaborn).
Excellent communication, leadership, and problem-solving skills. Bachelor's/Master's degree in Statistics, Mathematics, Economics, Data Science, Computer Science, or related fields. Hands-on data science project experience is a must, and Data Science/AI certifications will be a huge plus. We are an equal opportunity employer and value diversity at our company. We do not discriminate based on race, religion, colour, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.
Job Description: We are looking for a talented and experienced Senior Data Scientist with a minimum of 4 years of professional experience in model development and expertise in Gen AI projects. As a Senior Data Scientist, you will be responsible for developing advanced machine learning models, conducting exploratory data analysis (EDA), performing feature selection/reduction, and utilizing cutting-edge technologies to deliver high-quality solutions. The ideal candidate should possess strong programming proficiency in Python and experience with cloud platforms like GCP or equivalent, as well as visualization tools like Qlik Sense, Power BI, Looker Studio, or Tableau. Job Responsibility: Data Scientist, 4+ years (Mandatory Skills): Model development (regression and classification); should have strong experience in performing EDA, feature selection/feature reduction, building models, and evaluating their performance. Should have worked on a Gen AI project. Strong programming proficiency in Python with mastery of data science libraries (Pandas, NumPy, and scikit-learn). GCP (BQ, Vertex AI) or other equivalent cloud. Visualization (Qlik Sense / Looker Studio / Power BI / Tableau). Skills: GenAI, Model development (regression and classification), EDA, Python, GCP / other equivalent cloud, Visualization, Data Scientist. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.
Job Description: We're seeking a dynamic Employer Branding & Internal Communications Specialist to craft LatentView's talent narrative and lead initiatives that build a strong internal culture and external employer brand. This role will focus on elevating LatentView's Employee Value Proposition (EVP), enhancing candidate perception, and driving employee engagement through compelling storytelling and communication. Key Responsibilities: Employer Branding Strategy: Define, evolve, and operationalize LatentView's Employee Value Proposition (EVP). Build and own the annual employer branding roadmap, aligned with talent priorities and business vision. Benchmark market trends and position LatentView competitively in the talent ecosystem. Storytelling & Content Creation: Curate and create high-impact content (videos, blogs, spotlight features, leadership voices, #LifeAtLatentView stories). Develop creative assets for both internal and external use, maintaining tone, brand guidelines, and consistency. Social Media & Digital Campaigns: Manage and grow LatentView's career social handles (LinkedIn, Instagram, YouTube). Conceptualize and execute talent campaigns. Partner with TA and Marketing on recruitment marketing, Glassdoor strategy, and employer review sites. Internal Communications: Lead HR communications across people programs: recognition, benefits, DEI, wellness, onboarding, leadership connects. Create toolkits, newsletters, and intranet content to foster clarity, alignment, and employee engagement. Collaborate with HRBPs and leaders to drive timely messaging on org updates and change communication. Skills: 4 to 8 years in the Talent Management domain. Content creation, storytelling, design creation, creative mindset, design technology, visual branding, stakeholder alignment. We are an equal opportunity employer and value diversity at our company.
We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.
Job Summary: We are looking for a dynamic and experienced Full Stack Developer with a strong background in end-to-end product development. The ideal candidate will have hands-on experience across all stages of the software development lifecycle, from requirement gathering and system design to deployment and post-release support. This role demands proficiency in both front-end and back-end technologies, cloud infrastructure, and containerization tools. You should be passionate about building scalable, secure, and maintainable software solutions and have the ability to take initiative, contribute to architectural decisions, and support DevOps practices in a fast-paced environment. Key Responsibilities: Work as part of a cross-functional team to design, develop, and maintain scalable web applications. Collaborate with stakeholders to understand business requirements and translate them into technical specifications. Develop responsive user interfaces using modern JavaScript frameworks such as Angular or React. Build robust back-end services and APIs using Python and interact with databases using SQL. Deploy and monitor applications on cloud platforms such as Google Cloud Platform (GCP) or AWS. Implement CI/CD pipelines and use containerization technologies like Docker, Podman, and Kubernetes. Write clean, maintainable, and well-documented code following best practices. Participate in code reviews, debugging, and performance tuning of applications. Take initiative in identifying improvements and automation opportunities across the development lifecycle. Maintain ownership of features throughout their lifecycle, from development to production support.
Mandatory Technical Requirements: Frontend: Strong hands-on experience (4+ years) with at least one modern JavaScript framework: Angular (preferred) or React. Backend: Proficient in Python development (3+ years). Strong experience with SQL (2+ years), including writing complex queries and optimizing performance. Cloud Platforms: Minimum of 2 years of experience with cloud platforms (preferably GCP or AWS). Familiarity with Terraform or other Infrastructure-as-Code (IaC) tools is a plus. DevOps & Containerization: Experience with Docker and Podman. Exposure to Kubernetes for container orchestration. Working knowledge of CI/CD pipelines. Familiarity with version control systems, especially Git. Preferred Skills & Qualities: Demonstrated experience leading or initiating technical improvements or innovations within a team or organization. Familiarity with microservices architecture. Understanding of RESTful APIs and API design best practices. Experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack). Exposure to Agile methodologies and working in a collaborative team environment. Strong problem-solving and analytical skills. Excellent communication and interpersonal abilities. Nice-to-Have (Bonus) Skills: Experience with test automation frameworks (e.g., PyTest, Cypress, Jest). Exposure to NoSQL databases (e.g., MongoDB, Firebase). Knowledge of security best practices in web development. Educational Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field (or equivalent work experience). We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.
Role & responsibilities We are seeking a skilled Data Engineer to maintain robust data infrastructure and pipelines that support our operational analytics and business intelligence needs. Candidates will bridge the gap between data engineering and operations, ensuring reliable, scalable, and efficient data systems that enable data-driven decision making across the organization. Strong proficiency in Spark SQL and hands-on experience with real-time streaming using Kafka and Flink. Databases: Strong knowledge of relational databases (Oracle, MySQL) and NoSQL systems. Proficiency with version control (Git), CI/CD practices, and collaborative development workflows. Strong operations management and stakeholder communication skills. Flexibility to work across time zones. A cross-cultural communication mindset. Experience working in cross-functional teams. Continuous learning mindset and adaptability to new technologies. Preferred candidate profile: Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field. 3+ years of experience in data engineering, software engineering, or a related role. Proven experience building and maintaining production data pipelines. Expertise in the Hadoop ecosystem: Spark SQL, Iceberg, Hive, etc. Extensive experience with Apache Kafka, Apache Flink, and other relevant streaming technologies. Orchestration tools: Apache Airflow and UC4. Proficiency in Python, Unix shell scripting, or similar languages. Good understanding of SQL across Oracle, SQL Server, NoSQL, or similar systems. Immediate joiners or candidates with a notice period of less than 30 days preferred.
Job Description: We are seeking a highly skilled and proactive Data Analyst with strong technical expertise and a deep understanding of data-driven solutions. The ideal candidate is a problem solver with hands-on experience in SQL, Python, ETL pipelines, and business intelligence tools, preferably within a Google Cloud environment. Key Responsibilities: Design, develop, evaluate, deploy, and document robust data management and business intelligence systems. Collaborate with business stakeholders and product managers to gather and understand requirements. Build scalable, maintainable, and reusable solutions to support analytical and reporting needs. Design and implement analytics environments, utilizing both third-party and in-house reporting tools. Model metadata, create reports and dashboards, and provide stakeholders with timely and structured access to insights. Ensure system architecture meets performance, availability, and scalability standards. Conduct thorough testing of data pipelines, tool designs, data transformations, and infrastructure components. Document development processes and implement data solutions to empower business users. Required Skills & Qualifications: Experience: 4 to 6 years in the Data Analytics domain. Technical Proficiency: Strong in both basic and advanced SQL programming. Demonstrated expertise in Python. Hands-on experience with ETL development, preferably in Google Cloud Platform (GCP). Exposure to data visualization tools, particularly Tableau (Public). Business Acumen: Ability to understand complex business needs and translate them into effective data solutions. Soft Skills: Excellent communication and collaboration skills. Ability to work independently with minimal supervision. Quick learner with good judgment in choosing between tactical and strategic solutions. Proactive and detail-oriented approach. We are an equal opportunity employer and value diversity at our company.
We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.
Senior Business Analyst: The incumbent for the position is expected to deliver on, but not limited to, the following responsibilities: Set up processes for data management and templatized analytical modules/deliverables. Continuously improve processes with a focus on automation and partner with different teams to develop system capability. Understand business briefs clearly, execute new/ad-hoc projects, and ensure timely delivery. Keep managers informed about progress on projects and proactively flag gaps in data availability and hiccups in analysis. Develop and enhance statistical models with best-in-class modelling techniques. Manage projects using Gantt charts. Deliver informative and well-organized deliverables. Proactively seek opportunities to help team members by sharing knowledge and expanding skills. Ability to communicate difficult/sensitive information tactfully.
Job Description: We are looking for a highly motivated and results-driven Data Scientist to join our team. You will work closely with business stakeholders, product managers, and engineers to uncover insights, build predictive models, and design data-driven solutions that address critical business challenges. Responsibilities: Work on structured and unstructured datasets to derive actionable business insights. Design and develop predictive models using machine learning algorithms. Collaborate with cross-functional teams to identify opportunities for leveraging data. Create data pipelines and automate reporting using Python, SQL, and relevant tools. Present findings and recommendations to both technical and non-technical stakeholders. Develop proof of concepts (PoCs) for AI/ML use cases across domains. Monitor model performance and implement improvements as needed. Skills: Bachelor's or Master's degree in Computer Science, Statistics, Mathematics, or a related field. 5+ years of experience in data science, analytics, or machine learning. Proficiency in Python (pandas, scikit-learn, numpy) and SQL. Hands-on experience with ML algorithms (regression, classification, clustering, etc.). Strong understanding of statistics, data modeling, and hypothesis testing. Experience with visualization tools (Tableau, Power BI, matplotlib, seaborn). Familiarity with cloud platforms like AWS, GCP, or Azure is a plus. Excellent problem-solving and communication skills. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.
Job Summary: As a Sales Operations Analytics / Data Analyst, you will be a pivotal force in this growth, sitting at the intersection of data, strategy, and execution. You will empower our sales leadership and teams with the critical insights needed to optimize performance, identify new opportunities, and drive strategic decision-making. This role demands a highly analytical mind, a passion for data storytelling, and the ability to translate complex data into actionable recommendations that directly impact our top-line growth and operational efficiency within the dynamic Retail landscape. Job Description: Performance Analysis & Reporting: Design, develop, and maintain robust dashboards, reports, and analytical models to track key sales metrics, identify trends, and provide deep insights into sales performance across various segments (e.g., merchant acquisition, retention, growth, regional performance). Strategic Insights & Recommendations: Proactively identify opportunities for sales process improvements, efficiency gains, and revenue growth by analyzing sales data, market trends, and operational workflows. Present clear, concise, and actionable recommendations to sales leadership. Forecasting & Planning Support: Contribute to sales forecasting, capacity planning, and target setting processes by leveraging historical data, statistical models, and market intelligence to provide accurate projections. Data Infrastructure & Tools: Collaborate with data engineering and business intelligence teams to ensure data integrity, accessibility, and the development of scalable data solutions. Utilize SQL, Python, and advanced visualization tools (e.g., Tableau, Power BI) to extract, transform, and present data. Sales Optimization: Analyze sales funnel performance, conversion rates, and sales cycle efficiency to pinpoint bottlenecks and recommend solutions that streamline operations and enhance productivity.
Ad-Hoc Analysis: Conduct deep-dive analyses on specific business questions or challenges, providing timely and accurate data-driven answers to support urgent strategic decisions. Preferred qualifications: 3+ years of experience in Sales Operations, Business Intelligence, Data Analytics, or a similar analytical role, with a strong emphasis on sales performance. Demonstrated experience in the Grocery, Retail, E-commerce, or Q-commerce industry is highly preferred. Proven track record of translating complex data into clear, actionable insights and presenting them to senior leadership. Proficiency in SQL for data extraction and manipulation is required. Proficiency with data visualization tools (e.g., Tableau, Power BI, Looker) for dashboard creation and reporting. Analytical programming skills in Python (Pandas, NumPy) or R for advanced data analysis and modeling. Experience with CRM systems (e.g., Salesforce) and understanding of sales data structures. Analytical & Problem-Solving Skills: Exceptional analytical and quantitative skills, with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy. Strong problem-solving abilities, capable of tackling complex business challenges with a data-driven approach. Communication & Interpersonal Skills: Excellent written and verbal communication skills, with the ability to articulate complex analytical concepts to non-technical stakeholders. Strong interpersonal skills, with the ability to build relationships and influence cross-functional teams. Bachelor's degree in Business, Economics, Finance, Statistics, Computer Science, or a related quantitative field. Master's degree is a plus; an MBA from a top-tier school is preferred. We are an equal opportunity employer and value diversity at our company.
We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.
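The sales-funnel analysis this role calls for can be sketched in a few lines of pandas; the stage names and counts below are hypothetical, chosen only to show the step-wise and top-of-funnel conversion calculations:

```python
# Illustrative sketch of sales-funnel conversion-rate analysis with pandas.
# Stage names and counts are hypothetical examples.
import pandas as pd

funnel = pd.DataFrame({
    "stage": ["lead", "qualified", "demo", "closed_won"],
    "count": [1000, 400, 150, 60],
})
# Step conversion: each stage relative to the previous one.
funnel["step_conversion"] = funnel["count"] / funnel["count"].shift(1)
# Overall conversion: each stage relative to the top of the funnel.
funnel["overall_conversion"] = funnel["count"] / funnel["count"].iloc[0]
print(funnel)
```

Comparing step conversions across stages is what surfaces the bottleneck the posting asks the analyst to pinpoint (here, the hypothetical qualified → demo drop).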
Job Description: We are looking for a highly motivated and skilled Data Analyst candidate with prior experience in data-driven analysis and insights to join our growing team. Responsibilities: MLOps Platform Development: Design, build, and maintain our MLOps platform, enabling efficient development, deployment, and lifecycle management of machine learning models. AWS SageMaker Expertise: Leverage AWS SageMaker extensively to build, train, deploy, and manage machine learning models at scale. This includes utilizing SageMaker features for data labeling, feature engineering, model training, endpoint deployment, and monitoring. Cloud Architecture (AWS): Architect and implement scalable, secure, and cost-effective cloud solutions on AWS to support ML workloads, data pipelines, and application services. Python Development: Write high-quality, production-ready Python code for developing ML pipelines, MLOps tooling, API integrations, and automation scripts. Containerization & Orchestration (Kubernetes): Design and implement containerized ML services using Docker and orchestrate them with Kubernetes for scalable and resilient deployments. CI/CD for ML: Establish and maintain robust CI/CD pipelines for machine learning models and infrastructure, automating testing, deployment, and versioning. Machine Learning Operations (MLOps): Implement best practices for MLOps, including model versioning, lineage tracking, performance monitoring, drift detection, and automated retraining. Collaboration: Work closely with data scientists to transition models from research to production, ensuring operational readiness and performance. Collaborate with software engineers to integrate ML services into core products. Monitoring & Alerting: Implement comprehensive monitoring, logging, and alerting solutions for ML models and infrastructure to ensure high availability and performance.
Performance Optimization: Continuously optimize the performance, scalability, and cost-efficiency of ML infrastructure and deployed models. Skills: Proven knowledge of and expertise in AWS SageMaker, MLOps, and ML. Hands-on exposure to and a very good understanding of Python, Kubernetes, and CI/CD concepts. Stakeholder communication and business insights. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.
Job Summary: We are looking for a highly skilled Data Engineer with expertise in Databricks, Python, PySpark, and SQL to join our data platform team. The ideal candidate will be responsible for building scalable data pipelines, enabling advanced analytics, and supporting data-driven decision-making across the organization. Job Responsibilities: Data Engineering & Architecture: Design and implement scalable and optimized data pipelines on Databricks using Delta Lake, PySpark, and SQL. Develop ETL/ELT frameworks for batch and streaming data processing. Ensure data quality, governance, and observability using Unity Catalog, Great Expectations, or custom validations. Optimize Spark jobs for performance, cost, and scalability. Cloud & Infrastructure (Azure/AWS/GCP): Deploy and manage Databricks clusters, workspaces, and jobs. Work with Terraform or ARM templates for infrastructure automation. Integrate cloud-native services like Azure Data Factory, AWS Glue, or GCP Cloud Composer. MLOps & CI/CD Automation: Implement CI/CD pipelines for Databricks notebooks, workflows, and ML models. Work with MLflow for model tracking and lifecycle management. Automate data pipelines using Azure DevOps, GitHub Actions, or Jenkins. Leadership & Collaboration: Lead a team of data engineers, ensuring best practices and code quality. Collaborate with data scientists, analysts, and business teams to understand requirements. Conduct performance reviews, technical mentoring, and upskilling sessions. Skills: Strong hands-on experience in Databricks, Apache Spark (PySpark/Scala), and Delta Lake. Expertise in SQL, ETL/ELT pipelines, and data modelling. Experience with Azure, AWS, or GCP cloud platforms. Knowledge of MLOps, MLflow, and CI/CD best practices. Experience in workflow orchestration using Databricks Workflows, Airflow, or Prefect. Understanding of cost optimization, cluster tuning, and performance monitoring in Databricks. Strong leadership, stakeholder management, and mentoring skills.
Experience with data lakehouse architectures and Unity Catalog. Hands-on with Terraform, Infrastructure-as-Code (IaC), or Kubernetes. Familiarity with data governance, security, and privacy frameworks.
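As a minimal sketch of the "custom validations" for data quality this posting mentions alongside Great Expectations (the function name, rules, and sample rows are hypothetical; in practice such checks would run inside a Spark/Databricks job against real tables):

```python
# Illustrative custom data-quality check over a batch of records,
# of the kind a pipeline might run before writing to Delta Lake.
def validate_batch(rows, required_cols, non_null_cols):
    """Return human-readable violations for a batch of dict rows."""
    violations = []
    for i, row in enumerate(rows):
        missing = [c for c in required_cols if c not in row]
        if missing:
            violations.append(f"row {i}: missing columns {missing}")
        nulls = [c for c in non_null_cols if row.get(c) is None]
        if nulls:
            violations.append(f"row {i}: null values in {nulls}")
    return violations

# Hypothetical batch with two deliberate defects.
batch = [
    {"order_id": 1, "amount": 25.0},
    {"order_id": 2, "amount": None},
    {"amount": 10.0},
]
issues = validate_batch(batch, ["order_id", "amount"], ["amount"])
print(issues)
```

A production version would typically emit these violations to an observability sink and quarantine failing rows rather than printing them.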