Home
Jobs

1123 Snowflake Jobs - Page 20

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 8.0 years

2 - 6 Lacs

Pune

Work from Office

Job Information: Job Opening ID ZR_2098_JOB | Date Opened 13/01/2024 | Industry: Technology | Job Type: Contract | Work Experience: 5-8 years | Job Title: DCT Data Engineer | City: Pune | Province: Maharashtra | Country: India | Postal Code: 411001 | Number of Positions: 4 | Locations: Pune, Bangalore, Indore | Work mode: Work from Office. Key skills: Informatica Data Quality (IDQ), Azure Databricks, Azure Data Lake, Azure Data Factory, API integration.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

12 - 15 Lacs

Mumbai Suburban, Navi Mumbai, Mumbai (All Areas)

Work from Office

Candidate with 5-8 years of overall experience in software engineering, including 2-4 years of focused expertise in Data Governance practices such as data cataloguing, data lineage, data quality, data purging, data reliability, and data accessibility. Required Candidate profile: Strong experience with data governance tools (Ab Initio, Informatica, Atlan, Collibra, Snowflake, Databricks); hands-on knowledge of data cataloguing, data lineage, data quality, and data access control. Perks and benefits: To be disclosed post interview.

Posted 2 weeks ago

Apply

7.0 - 12.0 years

13 - 22 Lacs

Chennai, Bengaluru

Work from Office

Talend Developer (Data Warehouse/BI): Data Warehouse implementation, Unit Testing, troubleshooting, ETL (Talend, DataStage), Data Catalog, cloud databases (Snowflake), developing Data Marts, Data Warehousing, Operational Data Store, DWH concepts, Query Performance Tuning.

Posted 2 weeks ago

Apply

3.0 - 5.0 years

12 - 16 Lacs

Chennai

Work from Office

G6 Cloud Data Architect (Snowflake + dbt); this opening is for proactive hiring.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

6 - 10 Lacs

Hyderabad

Work from Office

We are seeking a detail-oriented and highly skilled Data Engineering Test Automation Engineer to ensure the quality, reliability, and performance of our data pipelines and platforms. The ideal candidate will have a strong background in data testing, ETL validation, and test automation frameworks. You will work closely with data engineers, analysts, and DevOps teams to build robust test suites for large-scale data solutions. This role combines deep technical execution with a solid foundation in QA best practices, including test planning, defect tracking, and test lifecycle management. You will be responsible for designing and executing manual and automated test strategies for complex real-time and batch data pipelines, contributing to the design of automation frameworks, and ensuring high-quality data delivery across our AWS and Databricks-based analytics platforms. The role is highly technical and hands-on, with a strong focus on automation; on data accuracy, completeness, and consistency; and on ensuring data governance practices are seamlessly integrated into development pipelines.

Roles & Responsibilities: Design, develop, and maintain automated test scripts for data pipelines, ETL jobs, and data integrations. Validate data accuracy, completeness, transformations, and integrity across multiple systems. Collaborate with data engineers to define test cases and establish data quality metrics. Develop reusable test automation frameworks and CI/CD integrations (e.g., Jenkins, GitHub Actions). Perform performance and load testing for data systems. Maintain test data management and data mocking strategies. Identify and track data quality issues, ensuring timely resolution. Perform root cause analysis and drive corrective actions. Contribute to QA ceremonies (standups, planning, retrospectives) and drive continuous improvement in QA processes and culture.

Must-Have Skills: Experience in QA roles, with strong exposure to data pipeline validation and ETL testing. Domain knowledge of life sciences R&D. Ability to validate data accuracy, transformations, schema compliance, and completeness across systems using PySpark and SQL. Strong hands-on experience with Python, and optionally PySpark, for developing automated data validation scripts. Proven experience in validating ETL workflows, with a solid understanding of data transformation logic, schema comparison, and source-to-target mapping. Experience working with data integration and processing platforms like Databricks/Snowflake, AWS EMR, Redshift, etc. Experience in manual and automated testing of both batch and real-time data pipeline executions. Performance testing of large-scale, complex data engineering pipelines. Ability to troubleshoot data issues independently and collaborate with engineering teams on root cause analysis. Strong understanding of QA methodologies, test planning, test case design, and defect lifecycle management. Hands-on experience with API testing using Postman, pytest, or custom automation scripts. Experience integrating automated tests into CI/CD pipelines using tools like Jenkins, GitHub Actions, or similar. Knowledge of cloud platforms such as AWS, Azure, GCP.

Good-to-Have Skills: Certifications in Databricks, AWS, Azure, or data QA (e.g., ISTQB). Understanding of data privacy, compliance, and governance frameworks. Knowledge of UI test automation frameworks like Selenium, JUnit, TestNG. Familiarity with monitoring/observability tools such as Datadog, Prometheus, or CloudWatch.

Education and Professional Certifications: Master's degree and 3 to 7 years of Computer Science, IT, or related field experience, OR Bachelor's degree and 4 to 9 years of Computer Science, IT, or related field experience.

Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.
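The validation work described above can be illustrated with a short PySpark sketch. This is a minimal example of the idea, not the employer's actual framework; the table names, column names, and checks are hypothetical placeholders.

```python
# Hypothetical ETL validation checks in PySpark: completeness,
# accuracy, and schema compliance between a source and a target table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-validation").getOrCreate()

source = spark.table("raw.orders")        # placeholder source table
target = spark.table("analytics.orders")  # placeholder target table

# Completeness: row counts should match after the load.
assert source.count() == target.count(), "row-count mismatch"

# Accuracy: a key aggregate should survive the transformation.
src_total = source.agg(F.sum("amount")).first()[0]
tgt_total = target.agg(F.sum("amount")).first()[0]
assert src_total == tgt_total, "amount totals diverge"

# Schema compliance: required columns must exist in the target.
required = {"order_id", "customer_id", "amount", "order_date"}
missing = required - set(target.columns)
assert not missing, f"missing columns: {missing}"
```

In a real suite, checks like these would live in pytest test cases wired into a CI/CD pipeline (e.g., Jenkins or GitHub Actions), as the posting describes.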

Posted 2 weeks ago

Apply

4.0 - 5.0 years

8 - 12 Lacs

Navi Mumbai, Mumbai (All Areas)

Hybrid

Deep understanding of architecture, query optimization, and storage techniques. SQL & Scripting: strong SQL skills and proficiency in Python or shell scripting. Hands-on experience with cloud platforms (Azure Data Factory, AWS, GCP). Required Candidate profile: Knowledge of star schema, data modeling, and warehousing best practices. Familiarity with access controls, encryption, and compliance frameworks. Strong analytical and troubleshooting skills. Perks and benefits: Performance-linked allowance and annual bonus.

Posted 2 weeks ago

Apply

6.0 - 11.0 years

8 - 13 Lacs

Hyderabad

Work from Office

We are looking for a Data Engineer with experience in data warehouse projects, strong expertise in Snowflake, and hands-on knowledge of Azure Data Factory (ADF) and dbt (Data Build Tool). Proficiency in Python scripting is an added advantage. Key Responsibilities: Design, develop, and optimize data pipelines and ETL processes for data warehousing projects. Work extensively with Snowflake, ensuring efficient data modeling and query optimization. Develop and manage data workflows using Azure Data Factory (ADF) for seamless data integration. Implement data transformations, testing, and documentation using dbt. Collaborate with cross-functional teams to ensure data accuracy, consistency, and security. Troubleshoot data-related issues. (Optional) Utilize Python for scripting, automation, and data processing tasks. Required Skills & Qualifications: Experience in Data Warehousing with a strong understanding of best practices. Hands-on experience with Snowflake (data modeling, query optimization). Proficiency in Azure Data Factory (ADF) for data pipeline development. Strong working knowledge of dbt (Data Build Tool) for data transformations. (Optional) Experience in Python scripting for automation and data manipulation. Good understanding of SQL and query optimization techniques. Experience in cloud-based data solutions (Azure). Strong problem-solving skills and ability to work in a fast-paced environment. Experience with CI/CD pipelines for data engineering. Why Join Us: Opportunity to work on cutting-edge data engineering projects. Work with a highly skilled and collaborative team. Exposure to modern cloud-based data solutions.
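For readers unfamiliar with the stack named here, the sketch below shows what a single Snowflake ELT step driven from Python might look like, using the snowflake-connector-python package. Connection parameters and table names are placeholders; in the role described, a tool like dbt or ADF would normally own this step.

```python
# Hypothetical ELT step: rebuild a reporting table inside Snowflake.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # placeholder account identifier
    user="etl_user",        # placeholder credentials
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
try:
    conn.cursor().execute("""
        CREATE OR REPLACE TABLE dim_customer AS
        SELECT customer_id,
               INITCAP(customer_name) AS customer_name,
               MAX(updated_at)        AS last_updated
        FROM raw.customers
        GROUP BY customer_id, INITCAP(customer_name)
    """)
finally:
    conn.close()
```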

Posted 2 weeks ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Bengaluru

Work from Office

Data Analyst. Location: Bangalore. Experience: 8 - 15 Yrs. Type: Full-time. Role Overview: We are seeking a skilled Data Analyst to support our platform powering operational intelligence across airports and similar sectors. The ideal candidate will have experience working with time-series datasets and operational information to uncover trends, anomalies, and actionable insights. This role will work closely with data engineers, ML teams, and domain experts to turn raw data into meaningful intelligence for business and operations stakeholders. Key Responsibilities: Analyze time-series and sensor data from various sources. Develop and maintain dashboards, reports, and visualizations to communicate key metrics and trends. Correlate data from multiple systems (vision, weather, flight schedules, etc.) to provide holistic insights. Collaborate with AI/ML teams to support model validation and interpret AI-driven alerts (e.g., anomalies, intrusion detection). Prepare and clean datasets for analysis and modeling; ensure data quality and consistency. Work with stakeholders to understand reporting needs and deliver business-oriented outputs. Qualifications & Required Skills: Bachelor's or Master's degree in Data Science, Statistics, Computer Science, Engineering, or a related field. 5+ years of experience in a data analyst role, ideally in a technical/industrial domain. Strong SQL skills and proficiency with BI/reporting tools (e.g., Power BI, Tableau, Grafana). Hands-on experience analyzing structured and semi-structured data (JSON, CSV, time-series). Proficiency in Python or R for data manipulation and exploratory analysis. Understanding of time-series databases or streaming data (e.g., InfluxDB, Kafka, Kinesis). Solid grasp of statistical analysis and anomaly detection methods. Experience working with data from industrial systems or large-scale physical infrastructure. Good-to-Have Skills: Domain experience in airports, smart infrastructure, transportation, or logistics. Familiarity with data platforms (Snowflake, BigQuery, or custom-built using open source). Exposure to tools like Airflow, Jupyter Notebooks, and data quality frameworks. Basic understanding of AI/ML workflows and data preparation requirements.

Posted 2 weeks ago

Apply

10.0 - 15.0 years

30 - 40 Lacs

Noida, Gurugram

Work from Office

We're hiring a Snowflake Data Architect with a leading IT services firm for Noida & Gurgaon. Job Summary: We are seeking a Snowflake Data Architect to design, implement, and optimize scalable data solutions using Databricks and the Azure ecosystem. The ideal candidate will have deep expertise in big data architecture, data engineering, and cloud technologies, enabling them to create robust, high-performance data pipelines and analytics solutions. Key Responsibilities: Design and develop scalable, secure, and high-performance data architectures using Snowflake, Databricks, Delta Lake, and Apache Spark. Architect ETL/ELT data pipelines to process structured and unstructured data efficiently. Implement data governance, security, and compliance frameworks across cloud-based data platforms. Optimize Spark jobs for performance, cost, and reliability. Collaborate with data engineers, analysts, and business teams to understand requirements and design appropriate solutions. Develop data lakehouse architectures leveraging Databricks and ADLS. Implement machine learning and AI workflows using Databricks ML and integration with ML frameworks. Define and enforce best practices for data modeling, metadata management, and data quality. Monitor and troubleshoot Databricks clusters, job failures, and performance bottlenecks. Stay updated with the latest Databricks features, Apache Spark advancements, and cloud innovations. Required Qualifications: 10+ years of experience in data architecture, data engineering, or big data platforms. Hands-on experience with Snowflake is mandatory; experience with Databricks (including Delta Lake, Unity Catalog, DBSQL) is a welcome addition. This is an individual contributor role requiring expertise in Apache Spark for large-scale data processing. Proficiency in Python, Scala, or SQL for data transformations. Experience with Azure and its data services (e.g., Azure Data Factory, Azure Synapse, Azure SQL Server). Knowledge of data lakehouse architectures, data warehousing, and ETL processes. Strong understanding of data security, IAM, and compliance best practices. Experience with CI/CD pipelines and Infrastructure as Code (Terraform, ARM templates, CloudFormation). Familiarity with MLflow, Feature Store, and MLOps concepts is a plus. Strong interpersonal and communication skills. If interested, please share your profile at harjeet@beanhr.com.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

4 - 8 Lacs

Chennai

Work from Office

We are building up a new group within Anthology focused on the data platform. This team's mission is to bring data together from across Anthology's extensive product lines into our cloud-based data lake. We are the analytics and data experts at Anthology. Our team enables other development teams to utilize the data lake strategically and effectively for a variety of Anthology products. We deliver products and services for analytics, data science, business intelligence, and reporting. The successful candidate will have a strong foundation in software development, scaled infrastructure, containerization, pipeline development, and configuration management, as well as strong problem-solving skills, analytical thinking skills, and strong written and oral communication skills. Primary responsibilities will include: Learning quickly and developing creative solutions that encompass performance, reliability, maintainability, and security. Applying hands-on implementation solutions using the AWS tool suite and other components to support Anthology products that utilize an expansive data lake. Working with the development manager, product manager, and engineering team on projects related to system research, product design, product development, and defect resolution. Being willing to respond to the unique challenges of delivering and maintaining cloud-based software; this includes minimizing downtime, troubleshooting live production environments, and responding to client-reported issues. Working with other engineering personnel to ensure consistency among products. Through continued iteration on existing development processes, ensuring that we're leading by example, fixing things that aren't working, and always improving our expectations of ourselves and others. Thriving in the face of difficult problems. Working independently with general supervision. The Candidate: Required skills/qualifications: 2-4 years of experience designing and developing enterprise solutions, including serverless/functionless API services. Knowledge of OOP. Experience with Python and TypeScript/JavaScript. Experience with SQL using Snowflake, Oracle, MSSQL, PostgreSQL, or other RDBMS. Data structure algorithm analysis and design skills. Knowledge of distributed systems and tradeoffs in consistency, availability, and network failure tolerance. Knowledge of professional engineering best practices for the full SDLC, including coding standards, code reviews, source control management, build processes, testing, and operations. Knowledge of a broader set of tools in the AWS tool suite (CDK, CloudFront, CloudWatch, CodeCommit, CodeBuild, CodePipeline, Lambda, API Gateway, SNS, SQS, S3, KMS, Batch, DynamoDB, DMS) and Docker. Fluency in written and spoken English. Preferred skills/qualifications: Experience designing, developing, and operating scalable near real-time data pipelines and stream processing. Experience with designing and implementing ETL processes. Experience with fact/dimensional modeling (Kimball, Inmon). Previous experience in the education industry and e-learning technologies.

Posted 2 weeks ago

Apply

8.0 - 13.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Data Engineer. Location: Bangalore - Onsite. Experience: 8 - 15 years. Type: Full-time. Role Overview: We are seeking an experienced Data Engineer to build and maintain scalable, high-performance data pipelines and infrastructure for our next-generation data platform. The platform ingests and processes real-time and historical data from diverse industrial sources such as airport systems, sensors, cameras, and APIs. You will work closely with AI/ML engineers, data scientists, and DevOps to enable reliable analytics, forecasting, and anomaly detection use cases. Key Responsibilities: Design and implement real-time (Kafka, Spark/Flink) and batch (Airflow, Spark) pipelines for high-throughput data ingestion, processing, and transformation. Develop data models and manage data lakes and warehouses (Delta Lake, Iceberg, etc.) to support both analytical and ML workloads. Integrate data from diverse sources: IoT sensors, databases (SQL/NoSQL), REST APIs, and flat files. Ensure pipeline scalability, observability, and data quality through monitoring, alerting, validation, and lineage tracking. Collaborate with AI/ML teams to provision clean and ML-ready datasets for training and inference. Deploy, optimize, and manage pipelines and data infrastructure across on-premise and hybrid environments. Participate in architectural decisions to ensure resilient, cost-effective, and secure data flows. Contribute to infrastructure-as-code and automation for data deployment using Terraform, Ansible, or similar tools. Qualifications & Required Skills: Bachelor's or Master's in Computer Science, Engineering, or a related field. 6+ years in data engineering roles, with at least 2 years handling real-time or streaming pipelines. Strong programming skills in Python/Java and SQL. Experience with Apache Kafka, Apache Spark, or Apache Flink for real-time and batch processing. Hands-on with Airflow, dbt, or other orchestration tools. Familiarity with data modeling (OLAP/OLTP), schema evolution, and format handling (Parquet, Avro, ORC). Experience with hybrid/on-prem and cloud platform (AWS/GCP/Azure) deployments. Proficiency with data lakes/warehouses like Snowflake, BigQuery, Redshift, or Delta Lake. Knowledge of DevOps practices, Docker/Kubernetes, and Terraform or Ansible. Exposure to data observability, data cataloging, and quality tools (e.g., Great Expectations, OpenMetadata). Good-to-Have: Experience with time-series databases (e.g., InfluxDB, TimescaleDB) and sensor data. Prior experience in domains such as aviation, manufacturing, or logistics is a plus.
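As a concrete picture of the real-time ingestion this role covers, here is a minimal Spark Structured Streaming sketch reading from Kafka and landing files in a data lake. The broker address, topic, and paths are placeholders; a production pipeline would add schema parsing, validation, and a Delta/Iceberg sink.

```python
# Hypothetical Kafka -> data lake ingestion with Spark Structured Streaming.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sensor-ingest").getOrCreate()

stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder
          .option("subscribe", "sensor-events")              # placeholder topic
          .load())

query = (stream.selectExpr("CAST(value AS STRING) AS payload")
         .writeStream
         .format("parquet")  # stand-in for a Delta Lake / Iceberg sink
         .option("path", "/data/lake/sensor_events")
         .option("checkpointLocation", "/data/chk/sensor_events")
         .start())

query.awaitTermination()  # block until the stream is stopped
```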

Posted 2 weeks ago

Apply

10.0 - 14.0 years

35 - 45 Lacs

Hyderabad

Work from Office

About the Team: At DAZN, the Analytics Engineering team is at the heart of turning hundreds of data points into meaningful insights that power strategic decisions across the business. From content strategy to product engagement, marketing optimization to revenue intelligence, we enable scalable, accurate, and accessible data for every team. The Role: We're looking for a Lead Analytics Engineer to take ownership of our analytics data pipeline and play a pivotal role in designing and scaling our modern data stack. This is a hands-on technical leadership role where you'll shape the data models in dbt/Snowflake, orchestrate pipelines using Airflow, and enable high-quality, trusted data for reporting. Key Responsibilities: Lead the development and governance of DAZN's semantic data models to support consistent, reusable reporting metrics. Architect efficient, scalable data transformations on Snowflake using SQL/dbt and best practices in data warehousing. Manage and enhance pipeline orchestration with Airflow, ensuring timely and reliable data delivery. Collaborate with stakeholders across Product, Finance, Marketing, and Technology to translate requirements into robust data models. Define and drive best practices in version control, testing, and CI/CD for analytics workflows. Mentor and support junior engineers, fostering a culture of technical excellence and continuous improvement. Champion data quality, documentation, and observability across the analytics layer. You'll Need to Have: 10+ years of experience in data/analytics engineering, with 2+ years leading or mentoring engineers. Deep expertise in SQL, cloud data warehouses (preferably Snowflake), and cloud services (AWS/GCP/Azure). Proven experience with dbt for data modeling and transformation. Hands-on experience with Airflow (or similar orchestrators like Prefect, Luigi). Strong understanding of dimensional modeling, ELT best practices, and data governance principles. Ability to balance hands-on development with leadership and stakeholder management. Clear communication skills: you can explain technical concepts to both technical and non-technical teams. Nice to Have: Experience in the media, OTT, or sports tech domain. Familiarity with BI tools like Looker or Power BI. Exposure to testing frameworks like dbt tests or Great Expectations.
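To make the orchestration concrete: a minimal Airflow DAG that runs dbt models against Snowflake and then runs dbt tests might look like the sketch below. It assumes Airflow 2.x and a dbt project at a placeholder path; it is illustrative, not DAZN's actual pipeline.

```python
# Hypothetical nightly dbt-on-Snowflake orchestration in Airflow.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="analytics_dbt_daily",      # placeholder DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",     # nightly at 02:00
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/analytics && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/analytics && dbt test --target prod",
    )
    dbt_run >> dbt_test  # test only after models build successfully
```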

Posted 2 weeks ago

Apply

6.0 - 11.0 years

30 - 35 Lacs

Hyderabad, Pune, Delhi / NCR

Hybrid

Collaborate with business stakeholders to gather and validate requirements. Create and manage Jira tickets. Support sprint planning and backlog grooming. Create clear, structured requirements documentation and user stories. Required Candidate profile: Experience in analytics, business intelligence, or data warehouse projects (Snowflake, Power BI, Streamlit). Working knowledge of Jira. Knowledge of Alternative Asset Management or Investment Banking.

Posted 2 weeks ago

Apply

6.0 - 11.0 years

13 - 18 Lacs

Ahmedabad

Work from Office

About the Company: e.l.f. Beauty, Inc. stands with every eye, lip, face and paw. Our deep commitment to clean, cruelty-free beauty at an incredible value has fueled the success of our flagship brand e.l.f. Cosmetics since 2004 and driven our portfolio expansion. Today, our multi-brand portfolio includes e.l.f. Cosmetics, e.l.f. SKIN, pioneering clean beauty brand Well People, Keys Soulcare, a groundbreaking lifestyle beauty brand created with Alicia Keys, and Naturium, high-performance, biocompatible, clinically effective and accessible skincare. In our fiscal year 2024, we had net sales of $1 billion, and our business performance has been nothing short of extraordinary, with 24 consecutive quarters of net sales growth. We are the #2 mass cosmetics brand in the US and the fastest growing mass cosmetics brand among the top 5. Our total compensation philosophy offers every full-time new hire competitive pay and benefits, bonus eligibility (200% of target over the last four fiscal years), equity, flexible time off, year-round half-day Fridays, and a hybrid work environment (3 days in office, 2 days at home). We believe the combination of our unique culture, total compensation, workplace flexibility, and care for the team is unmatched across not just beauty but any industry. Visit our Career Page to learn more about our team: https://www.elfbeauty.com/work-with-us Job Summary: We're looking for a strategic and technically strong Senior Data Architect to join our high-growth digital team. The selected candidate will play a critical role in shaping the company's global data architecture and vision. The ideal candidate will lead enterprise-level architecture initiatives, collaborate with engineering and business teams, and guide a growing team of engineers and QA professionals. This role involves deep engagement across domains including Marketing, Product, Finance, and Supply Chain, with a special focus on marketing technology and commercial analytics relevant to the CPG/FMCG industry. The candidate should bring a hands-on mindset, a proven track record in designing scalable data platforms, and the ability to lead through influence. An understanding of industry-standard frameworks (e.g., TOGAF) and tools like CDPs, MMM platforms, and AI-based insights generation will be a strong plus. Curiosity, communication, and architectural leadership are essential to succeed in this role. Key Responsibilities: Enterprise Data Strategy: Design, define, and maintain a holistic data strategy and roadmap that aligns with corporate objectives and fuels digital transformation. Ensure data architecture and products align with enterprise standards and best practices. Data Governance & Quality: Establish scalable governance frameworks to ensure data accuracy, privacy, security, and compliance (e.g., GDPR, CCPA). Oversee quality, security, and compliance initiatives. Data Architecture & Platforms: Oversee modern data infrastructure (e.g., data lakes, warehouses, streaming) with technologies like Snowflake, Databricks, AWS, and Kafka. Marketing Technology Integration: Ensure data architecture supports marketing technologies and commercial analytics platforms (e.g., CDP, MMM, ProfitSphere) tailored to the CPG/FMCG industry. Architectural Leadership: Act as a hands-on architect with the ability to lead through influence. Guide design decisions aligned with industry best practices and e.l.f.'s evolving architecture roadmap.
Cross-Functional Collaboration: Partner with Marketing, Supply Chain, Finance, R&D, and IT to embed data-driven practices and deliver business impact. Lead the integration of data from multiple sources into a unified data warehouse. Cloud Optimization: Optimize data flows and storage for performance and scalability. Lead data migration priorities, and manage metadata repositories and data dictionaries. Optimize databases and pipelines for efficiency. Manage and track quality, cataloging, and observability. AI/ML Enablement: Drive initiatives to operationalize predictive analytics, personalization, demand forecasting, and more using AI/ML models. Evaluate emerging data technologies and tools to improve data architecture. Team Leadership: Lead, mentor, and enable a high-performing team of data engineers, analysts, and partners through influence and thought leadership. Vendor & Tooling Strategy: Manage relationships with external partners and drive evaluations of data and analytics tools. Executive Reporting: Provide regular updates and strategic recommendations to executive leadership and key stakeholders. Data Enablement: Design data models, database structures, and data integration solutions to support large volumes of data. Qualifications and Requirements: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field. 18+ years of experience in Information Technology. 8+ years of experience in data architecture, data engineering, or a related field, with a focus on large-scale, distributed systems. Strong understanding of data use cases in the CPG/FMCG sector. Experience with tools such as MMM (Marketing Mix Modeling), CDPs, ProfitSphere, or inventory analytics preferred. Awareness of architecture frameworks like TOGAF. Certifications are not mandatory, but candidates must demonstrate clear thinking and experience in applying architecture principles. Must possess excellent communication skills and a proven ability to work cross-functionally across global teams. Should be capable of leading with influence, not just execution. Knowledge of data warehousing, ETL/ELT processes, and data modeling. Deep understanding of data modeling principles, including schema design and dimensional data modeling. Strong SQL development experience, including SQL queries and stored procedures. Ability to architect and develop scalable data solutions, staying ahead of industry trends and integrating best practices in data engineering. Familiarity with data security and governance best practices. Experience with cloud computing platforms such as Snowflake, AWS, Azure, or GCP. Excellent problem-solving abilities with a focus on data analysis and interpretation. Strong communication and collaboration skills. Ability to translate complex technical concepts into actionable business strategies. Proficiency in one or more programming languages such as Python, Java, or Scala. This job description is intended to describe the general nature and level of work being performed in this position. It also reflects the general details considered necessary to describe the principal functions of the job identified, and shall not be considered a detailed description of all the work inherent in the job. It is not an exhaustive list of responsibilities, and it is subject to changes and exceptions at the supervisor's discretion. e.l.f. Beauty respects your privacy. Please see our Job Applicant Privacy Notice (www.elfbeauty.com/us-job-applicant-privacy-notice) for how your personal information is used and shared.

Posted 2 weeks ago

Apply

4.0 - 7.0 years

8 - 18 Lacs

Pune

Work from Office

Job Title: MongoDB Developer. Type: Development role with a focus on data modeling and ingestion. Must-Have Responsibilities: Design, develop, and manage scalable database solutions using MongoDB. Write robust, effective, and scalable queries and operations for MongoDB-based applications. Integrate third-party services, tools, and APIs with MongoDB for data management and processing. Collaborate with developers, data engineers, and stakeholders to ensure seamless MongoDB integration with applications and systems. Perform unit, integration, and performance testing to validate the stability and functionality of MongoDB implementations. Conduct code and database reviews, ensuring adherence to security, scalability, and MongoDB best practices. Preferred / Secondary Skills: Experience with data modeling (considered a secondary skill). Exposure to Snowflake (preferred but not mandatory).
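For orientation, the query and aggregation work described above might look like this with pymongo; the connection string, database, and field names are placeholders.

```python
# Hypothetical MongoDB query patterns with pymongo.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
orders = client["shop"]["orders"]

# Filter + projection + sort: an index-friendly "recent orders" query.
recent = (orders.find({"status": "shipped"},
                      {"_id": 0, "order_id": 1, "total": 1})
          .sort("created_at", -1)
          .limit(10))

# Aggregation pipeline: total revenue per customer, highest first.
pipeline = [
    {"$group": {"_id": "$customer_id", "revenue": {"$sum": "$total"}}},
    {"$sort": {"revenue": -1}},
]
for row in orders.aggregate(pipeline):
    print(row)
```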

Posted 2 weeks ago

Apply

8.0 - 13.0 years

15 - 30 Lacs

Hyderabad

Remote

Position: Lead Data Engineer. Experience: 7+ Years. Location: Hyderabad | Chennai | Remote. Summary: We are seeking a Lead Data Engineer with 7+ years of experience to lead the development of ETL pipelines, data warehouse solutions, and analytics infrastructure. The ideal candidate will have strong experience in Snowflake, Azure Data Factory, dbt, and Fivetran, with a background in managing data for analytics and reporting, particularly within the healthcare domain. Responsibilities: Design and develop ETL pipelines using Fivetran, dbt, and Azure Data Factory for internal and client projects involving platforms such as Azure, Salesforce, and AWS. Monitor and manage production ETL workflows and resolve operational issues proactively. Document data lineage and maintain architecture artifacts for both existing and new systems. Collaborate with QA and UAT teams to produce clear, testable mapping and design documentation. Assess and recommend data integration tools and transformation approaches. Identify opportunities for process optimization and deduplication in data workflows. Implement data quality checks in collaboration with Data Quality Analysts. Contribute to the design and development of large-scale Data Warehouses, MDM solutions, Data Lakes, and Data Vaults. Required Skills & Qualifications: Bachelor's degree in Computer Science, Software Engineering, Mathematics, or a related field. 6+ years of experience in data engineering, software development, or business analytics. 5+ years of strong hands-on SQL development experience. Proven expertise in Snowflake, Azure Data Factory (ADF), and ETL tools such as Informatica, Talend, dbt, or similar. Experience in the healthcare industry, with an understanding of PHI/PII requirements. Strong analytical and critical thinking skills. Excellent communication and interpersonal abilities. Proficiency in scripting or programming languages such as Python, Perl, Java, or shell scripting on Linux/Unix environments. Familiarity with BI/reporting tools like Power BI, Tableau, or Cognos. Experience with big data technologies such as Snowflake (Snowpark), Apache Spark, Hadoop, MapReduce, Sqoop, Hive, Pig, HBase, and Flume.

Posted 2 weeks ago

Apply

10.0 - 15.0 years

25 - 40 Lacs

Mumbai

Work from Office

Overview of the Company: Jio Platforms Ltd. is a revolutionary Indian multinational tech company, often referred to as India's biggest startup, headquartered in Mumbai. Launched in 2019, it's the powerhouse behind Jio, India's largest mobile network with over 400 million users. But Jio Platforms is more than just telecom. It's a comprehensive digital ecosystem, developing cutting-edge solutions across media, entertainment, and enterprise services through popular brands like JioMart, JioFiber, and JioSaavn. Join us at Jio Platforms and be part of a fast-paced, dynamic environment at the forefront of India's digital transformation. Collaborate with brilliant minds to develop next-gen solutions that empower millions and revolutionize industries. Team Overview: The Data Platforms Team is the launchpad for a data-driven future, empowering the Reliance Group of Companies. We're a passionate group of experts architecting an enterprise-scale data mesh to unlock the power of big data, generative AI, and ML modelling across various domains. We don't just manage data; we transform it into intelligent actions that fuel strategic decision-making. Imagine crafting a platform that automates data flow, fuels intelligent insights, and empowers the organization: that's what we do. Join our collaborative and innovative team, and be a part of shaping the future of data for India's biggest digital revolution! About the role: Title: Lead Data Engineer. Location: Mumbai. Responsibilities: End-to-End Data Pipeline Development: Design, build, optimize, and maintain robust data pipelines across cloud, on-premises, or hybrid environments, ensuring performance, scalability, and seamless data flow. Reusable Components & Frameworks: Develop reusable data pipeline components and contribute to the evolution of the team's data pipeline framework. Data Architecture & Solutions: Contribute to data architecture design, applying expertise in data modelling, storage, and retrieval. Data Governance & Automation: Champion data integrity, security, and efficiency through metadata management, automation, and data governance best practices. Collaborative Problem Solving: Partner with stakeholders, data teams, and engineers to define requirements, troubleshoot, optimize, and deliver data-driven insights. Mentorship & Knowledge Transfer: Guide and mentor junior data engineers, fostering knowledge sharing and professional growth. Qualification Details: Education: Bachelor's degree or higher in Computer Science, Data Science, Engineering, or a related technical field. Core Programming: Excellent command of a primary data engineering language (Scala, Python, or Java) with a strong foundation in OOP and functional programming concepts. Big Data Technologies: Hands-on experience with data processing frameworks (e.g., Hadoop, Spark, Apache Hive, NiFi, Ozone, Kudu), ideally including streaming technologies (Kafka, Spark Streaming, Flink, etc.). Database Expertise: Excellent querying skills (SQL) and a strong understanding of relational databases (e.g., MySQL, PostgreSQL). Experience with NoSQL databases (e.g., MongoDB, Cassandra) is a plus. End-to-End Pipelines: Demonstrated experience in implementing, optimizing, and maintaining complete data pipelines, integrating varied sources and sinks, including streaming real-time data. Cloud Expertise: Knowledge of cloud technologies like Azure HDInsight, Synapse, and Event Hubs, and GCP Dataproc, Dataflow, and BigQuery.
CI/CD Expertise: Experience with CI/CD methodologies and tools, including strong Linux and shell scripting skills for automation. Desired Skills & Attributes: Problem-Solving & Troubleshooting: Proven ability to analyze and solve complex data problems and troubleshoot data pipeline issues effectively. Communication & Collaboration: Excellent communication skills, both written and verbal, with the ability to collaborate across teams (data scientists, engineers, stakeholders). Continuous Learning & Adaptability: A demonstrated passion for staying up to date with emerging data technologies and a willingness to adapt to new tools.

Posted 2 weeks ago

Apply

10.0 - 14.0 years

15 - 20 Lacs

Indore, Hyderabad, Ahmedabad

Work from Office

Technical Architect / Solution Architect / Data Architect (Data Analytics) Notice Period: Immediate to 15 Days Job Summary: We are looking for a highly technical and experienced Data Architect / Solution Architect / Technical Architect with expertise in Data Analytics. The candidate should have strong hands-on experience in solutioning, architecture, and cloud technologies to drive data-driven decisions. Key Responsibilities: Design, develop, and implement end-to-end data architecture solutions. Provide technical leadership in Azure, Databricks, Snowflake, and Microsoft Fabric. Architect scalable, secure, and high-performing data solutions. Work on data strategy, governance, and optimization. Implement and optimize Power BI dashboards and SQL-based analytics. Collaborate with cross-functional teams to deliver robust data solutions. Primary Skills Required: Data Architecture & Solutioning Azure Cloud (Data Services, Storage, Synapse, etc.) Databricks & Snowflake (Data Engineering & Warehousing) Power BI (Visualization & Reporting) Microsoft Fabric (Data & AI Integration) SQL (Advanced Querying & Optimization) Contact: 9032956160 Looking for immediate to 15-day joiners

Posted 2 weeks ago

Apply

8.0 - 12.0 years

15 - 20 Lacs

Chennai, Delhi / NCR, Bengaluru

Work from Office

Role overview: As a core member of the Data Science team, the Senior Data Scientist will be responsible for implementing and training machine learning models, developing and maintaining data pipelines to supply training and inference data for models, evaluating performance of production-deployed models, and supporting A/B testing of the Data Science team's models. Potential areas of support and ownership include Recommendations, Search Ranking, and Instant Market Value algorithms. What you'll do: Implementing, training, and evaluating machine learning models using Python and AWS SageMaker. Developing and maintaining data pipelines to supply training and inference data for models, using SQL and Snowflake. Collaborating with engineers to deploy models to production. Evaluating performance of production-deployed models. Designing A/B tests of the Data Science team's models and analyzing their results. Communicating solutions to stakeholders through written documentation, demos and presentations, and data visualizations. Collaborating with other data scientists, machine learning engineers, and business stakeholders to scope, design, and implement machine learning projects. What you'll bring: Curiosity about widely varied datasets and their possibilities for unlocking customer value. Self-motivation to perform exploratory analyses and build proof-of-concept solutions. Proven experience turning data into successful products. Knowledge of standard machine learning techniques for supervised and unsupervised learning across structured and unstructured datasets. Comprehensive knowledge of, and real-world experience with, measurement, evaluation, and testing of models. Experience deploying and/or maintaining machine learning services in production. Proficiency in Python or similar languages widely used in the data science community. Proficiency in SQL. Ability to communicate technical details and analytical findings to both technical and non-technical audiences. Advanced degree (or proven experience) in Computer Science, Data Science, Mathematics, or any quantitative science that makes use of advanced data analytics or statistical or machine learning techniques. Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad.
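The core train/evaluate loop this role describes can be sketched in a few lines. The example below uses scikit-learn locally in place of the SageMaker SDK (whose infrastructure setup would obscure the idea), and the dataset, features, and target are hypothetical stand-ins for a pricing model.

```python
# Hypothetical model training and evaluation loop (scikit-learn stand-in).
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("vehicle_listings.csv")  # placeholder training extract
X = df[["mileage", "age_years"]]          # placeholder features
y = df["price"]                           # placeholder target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

model = GradientBoostingRegressor().fit(X_tr, y_tr)
print("holdout MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```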

Posted 2 weeks ago

Apply

0.0 - 2.0 years

3 - 6 Lacs

Noida

Work from Office

Required Skills: Absolute clarity in OOP fundamentals and data structures. Must have hands-on experience with data structures like List, Dict, Set, Strings, Lambda, etc. Must have hands-on experience working with Spark and Hadoop. Excellent written and verbal communication and presentation skills. Roles and responsibilities: Maintain and improve existing projects. Collaborate with the technical team to develop new features and troubleshoot issues. Lead projects to understand requirements and distribute work to the technical team. Follow project/task timelines and quality standards.
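For candidates gauging the bar, the data-structure fluency this posting asks for (list, dict, set, lambda) is the kind of thing shown below; the toy records are hypothetical.

```python
# Toy illustration of list/dict/set/lambda fluency.
records = [("pune", 4), ("noida", 2), ("pune", 3)]  # (city, count) pairs

totals = {}                                  # dict: aggregate by key
for city, n in records:
    totals[city] = totals.get(city, 0) + n

cities = {city for city, _ in records}       # set: unique keys
ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

print(cities)   # {'pune', 'noida'}
print(ranked)   # [('pune', 7), ('noida', 2)]
```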

Posted 2 weeks ago

Apply

9.0 - 12.0 years

15 - 20 Lacs

Hyderabad, Bengaluru

Work from Office

Experience: 9-12 Years. Location: Bangalore / Hyderabad. Notice Period: Immediate to 15 Days. Overview: We are looking for a highly experienced and strategic Snowflake Data Architect to lead the transformation and modernization of our data architecture. You will be responsible for designing scalable, high-performance data solutions and ensuring seamless data quality and integration across the organization. This role requires close collaboration with data modelers, business stakeholders, governance teams, and engineers to develop robust and efficient data architectures. This is an excellent opportunity to join a dynamic, innovation-driven environment with significant room for professional growth. We encourage initiative, creative problem-solving, and a proactive approach to optimizing our data ecosystem. Responsibilities: Architect, design, and implement scalable data solutions using Snowflake. Build and maintain efficient data pipelines using SQL and ETL tools to integrate data from multiple ERP and other source systems. Leverage data mappings, modeling (2NF/3NF), and best practices to ensure consistent and accurate data structures. Collaborate with stakeholders to gather requirements and design data models that support business needs. Optimize and debug complex SQL queries and ensure performance tuning of pipelines. Create secure, reusable, and maintainable components for data ingestion and transformation workflows. Implement and maintain data quality frameworks, ensuring adherence to governance standards. Lead User Acceptance Testing (UAT) support and production deployment activities, and manage change requests. Produce comprehensive technical documentation for future reference and auditing purposes. Provide technical leadership in the use of cloud platforms (Snowflake, AWS) and support teams through knowledge transfer. Requirements: Bachelor's degree in Computer Science, Information Technology, or a related field. 9 to 12 years of overall experience in data engineering and architecture roles. Strong, hands-on expertise in Snowflake with a solid understanding of its advanced features. Proficiency in advanced SQL with extensive experience in data transformation and pipeline optimization. Deep understanding of data modeling techniques, especially 2NF/3NF normalization. Experience with cloud-native platforms, especially AWS (S3, Glue, Lambda, Step Functions), is highly desirable. Knowledge of ETL tools (Informatica, Talend, etc.) and experience working in agile environments. Familiarity with structured deployment workflows (e.g., the Carrier CAB process). Strong debugging, troubleshooting, and analytical skills. Excellent communication and stakeholder management skills. Key Skills: Snowflake (Advanced), SQL (Expert), Data Modeling (2NF/3NF), ETL Tools, AWS (S3, Glue, Lambda, Step Functions), Agile Development, Data Quality & Governance, Performance Optimization, Technical Documentation, Stakeholder Collaboration.

Posted 2 weeks ago

Apply

10.0 - 18.0 years

22 - 27 Lacs

Hyderabad

Remote

Role: Data Architect / Data Modeler - ETL, Snowflake, DBT. Location: Remote. Duration: 14+ Months. Timings: 5:30 pm IST to 1:30 am IST. Note: Looking for immediate joiners. Job Summary: We are seeking a seasoned Data Architect / Modeler with deep expertise in Snowflake, DBT, and modern data architectures, including Data Lake, Lakehouse, and Databricks platforms. The ideal candidate will be responsible for designing scalable, performant, and reliable data models and architectures that support analytics, reporting, and machine learning needs across the organization. Key Responsibilities: Architect and design data solutions using Snowflake, Databricks, and cloud-native lakehouse principles. Lead the implementation of data modeling best practices (star/snowflake schemas, dimensional models) using DBT. Build and maintain robust ETL/ELT pipelines supporting both batch and real-time data processing. Develop data governance and metadata management strategies to ensure high data quality and compliance. Define data architecture frameworks, standards, and principles for enterprise-wide adoption. Work closely with business stakeholders, data engineers, analysts, and platform teams to translate business needs into scalable data solutions. Provide guidance on data lake and data warehouse integration, helping bridge structured and unstructured data needs. Establish data lineage and documentation, and maintain architecture diagrams and data dictionaries. Stay up to date with industry trends and emerging technologies in cloud data platforms and recommend improvements. Required Skills & Qualifications: 10+ years of experience in data architecture, data engineering, or data modeling roles. Strong experience with Snowflake, including performance tuning, security, and architecture. Hands-on experience with DBT (Data Build Tool) for building and maintaining data transformation workflows. Deep understanding of Lakehouse architecture, Data Lake implementations, and Databricks. Solid grasp of dimensional modeling, normalization/denormalization strategies, and data warehouse design principles. Experience with cloud platforms (e.g., AWS, Azure, or GCP). Proficiency in SQL and scripting languages (e.g., Python). Familiarity with data governance frameworks, data catalogs, and metadata management tools.
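To ground the modeling vocabulary used above: a star schema keeps one wide fact table keyed to small dimension tables. The DDL below, expressed as Snowflake SQL held in a Python string to keep the examples on this page in one language, is a generic sketch with hypothetical table names, not the client's model.

```python
# Hypothetical star-schema DDL for Snowflake.
STAR_SCHEMA_DDL = """
CREATE TABLE dim_date    (date_key INT PRIMARY KEY, full_date DATE, month INT, year INT);
CREATE TABLE dim_product (product_key INT PRIMARY KEY, sku STRING, category STRING);

-- The fact table references each dimension by surrogate key.
CREATE TABLE fact_sales (
    date_key    INT REFERENCES dim_date (date_key),
    product_key INT REFERENCES dim_product (product_key),
    quantity    INT,
    amount      NUMBER(12, 2)
);
"""

print(STAR_SCHEMA_DDL)
```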

Posted 2 weeks ago

Apply

4.0 - 9.0 years

12 - 22 Lacs

Hyderabad, Chennai

Work from Office

Interested candidates can also apply via Sanjeevan Natarajan (sanjeevan.natarajan@careernet.in). Role & responsibilities: Technical Leadership: Lead a team of data engineers and developers; define technical strategy, best practices, and architecture for data platforms. End-to-End Solution Ownership: Architect, develop, and manage scalable, secure, and high-performing data solutions on AWS and Databricks. Data Pipeline Strategy: Oversee the design and development of robust data pipelines for ingestion, transformation, and storage of large-scale datasets. Data Governance & Quality: Enforce data validation, lineage, and quality checks across the data lifecycle. Define standards for metadata, cataloging, and governance. Orchestration & Automation: Design automated workflows using Airflow, Databricks Jobs/APIs, and other orchestration tools for end-to-end data operations. Cloud Cost & Performance Optimization: Implement performance tuning strategies, cost optimization best practices, and efficient cluster configurations on AWS/Databricks. Security & Compliance: Define and enforce data security standards, IAM policies, and compliance with industry-specific regulatory frameworks. Collaboration & Stakeholder Engagement: Work closely with business users, analysts, and data scientists to translate requirements into scalable technical solutions. Migration Leadership: Drive strategic data migrations from on-prem/legacy systems to cloud-native platforms with minimal risk and downtime. Mentorship & Growth: Mentor junior engineers, contribute to talent development, and ensure continuous learning within the team. Preferred candidate profile: Python, SQL, PySpark, Databricks, AWS (mandatory). Leadership experience in Data Engineering/Architecture. Added advantage: experience in Life Sciences / Pharma.

Posted 2 weeks ago

Apply

8.0 - 13.0 years

18 - 22 Lacs

Hyderabad

Remote

Roles: SQL Data Engineer - ETL, DBT & Snowflake Specialist. Location: Remote. Duration: 14+ Months. Timings: 5:30 pm IST to 1:30 am IST. Note: Immediate joiners only. Required Experience: Advanced SQL Proficiency: Writing and optimizing complex queries, stored procedures, functions, and views. Experience with query performance tuning and database optimization. ETL/ELT Development: Building and maintaining ETL/ELT pipelines. Familiarity with ETL tools or processes and orchestration frameworks. Data Modeling: Designing and implementing data models. Understanding of dimensional modeling and normalization. Snowflake Expertise: Hands-on experience with Snowflake's architecture and features. Experience with Snowflake databases, schemas, procedures, and functions. DBT (Data Build Tool): Building data models and transformations using DBT. Implementing DBT best practices, including testing, documentation, and CI/CD integration. Programming and Automation: Proficiency in Python is a plus. Experience with version control systems (e.g., Git, Azure DevOps). Experience with Agile methodologies and DevOps practices. Collaboration and Communication: Working effectively with data analysts and business stakeholders. Translating technical concepts into clear, actionable insights. Prior experience in a fast-paced, data-driven environment.
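One classic instance of the query optimization called for here is replacing a per-row correlated subquery with a window function, so the table is scanned once. The query below is a generic sketch over a hypothetical orders table, held in a Python string for consistency with the other examples.

```python
# Hypothetical "latest order per customer" query, optimized with a
# window function instead of a correlated subquery.
LATEST_ORDER_PER_CUSTOMER = """
SELECT *
FROM (
    SELECT o.*,
           ROW_NUMBER() OVER (PARTITION BY customer_id
                              ORDER BY order_date DESC) AS rn
    FROM orders o
) t
WHERE t.rn = 1;
"""

print(LATEST_ORDER_PER_CUSTOMER)
```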

Posted 2 weeks ago

Apply

6.0 - 10.0 years

30 - 35 Lacs

Bengaluru, Delhi / NCR, Mumbai (All Areas)

Hybrid

Create documentation and user stories. Work with engineering teams to review upcoming and backlog Jira tickets. Provide guidance on design decisions in areas including Credit and technology, including Snowflake and Streamlit. Develop reporting in Power BI. Required Candidate profile: 5+ years of experience as a Business Analyst, especially in Alternative Assets, Credit, CLO, Real Estate, etc. Experience creating complex dashboards in Power BI. Exposure to Snowflake and Streamlit.
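For context on the tooling named here, a Streamlit report is just a Python script; the sketch below renders a toy table and chart. The title and data are hypothetical, and a real app would query Snowflake instead of hard-coding a DataFrame.

```python
# Hypothetical minimal Streamlit report (run with: streamlit run app.py).
import pandas as pd
import streamlit as st

st.title("Credit Portfolio Overview")  # placeholder title

# Stand-in for a Snowflake query result.
df = pd.DataFrame({"fund": ["A", "B", "C"], "nav": [102.3, 98.7, 110.2]})

st.dataframe(df)
st.bar_chart(df.set_index("fund")["nav"])
```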

Posted 2 weeks ago

Apply

Exploring Snowflake Jobs in India

Snowflake has become one of the most sought-after skills in the tech industry, with a growing demand for professionals who are proficient in handling data warehousing and analytics using this cloud-based platform. In India, the job market for Snowflake roles is flourishing, offering numerous opportunities for job seekers with the right skill set.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Mumbai
  5. Chennai

These cities are known for their thriving tech industries and have a high demand for Snowflake professionals.

Average Salary Range

The average salary range for Snowflake professionals in India varies by experience level:

  • Entry-level: INR 6-8 lakhs per annum
  • Mid-level: INR 10-15 lakhs per annum
  • Experienced: INR 18-25 lakhs per annum

Career Path

A typical career path in Snowflake may include roles such as:

  • Junior Snowflake Developer
  • Snowflake Developer
  • Senior Snowflake Developer
  • Snowflake Architect
  • Snowflake Consultant
  • Snowflake Administrator

Related Skills

In addition to expertise in Snowflake, professionals in this field are often expected to have knowledge of:

  • SQL
  • Data warehousing concepts
  • ETL tools
  • Cloud platforms (AWS, Azure, GCP)
  • Database management

Interview Questions

  • What is Snowflake and how does it differ from traditional data warehousing solutions? (basic)
  • Explain how Snowflake handles data storage and compute resources in the cloud. (medium)
  • How do you optimize query performance in Snowflake? (medium)
  • Can you explain how data sharing works in Snowflake? (medium)
  • What are the different stages in the Snowflake architecture? (advanced)
  • How do you handle data encryption in Snowflake? (medium)
  • Describe a challenging project you worked on using Snowflake and how you overcame obstacles. (advanced)
  • How does Snowflake ensure data security and compliance? (medium)
  • What are the benefits of using Snowflake over traditional data warehouses? (basic)
  • Explain the concept of virtual warehouses in Snowflake. (medium)
  • How do you monitor and troubleshoot performance issues in Snowflake? (medium)
  • Can you discuss your experience with Snowflake's semi-structured data handling capabilities? (advanced)
  • What are Snowflake's data loading options and best practices? (medium)
  • How do you manage access control and permissions in Snowflake? (medium)
  • Describe a scenario where you had to optimize a Snowflake data pipeline for efficiency. (advanced)
  • How do you handle versioning and change management in Snowflake? (medium)
  • What are the limitations of Snowflake and how would you work around them? (advanced)
  • Explain how Snowflake supports semi-structured data formats like JSON and XML. (medium)
  • What are the considerations for scaling Snowflake for large datasets and high concurrency? (advanced)
  • How do you approach data modeling in Snowflake compared to traditional databases? (medium)
  • Discuss your experience with Snowflake's time travel and data retention features. (medium)
  • How would you migrate an on-premise data warehouse to Snowflake in a production environment? (advanced)
  • What are the best practices for data governance and metadata management in Snowflake? (medium)
  • How do you ensure data quality and integrity in Snowflake pipelines? (medium)
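A few of these questions (virtual warehouses, time travel, semi-structured data) can be answered concretely with a handful of Snowflake SQL statements. The snippets below, held in Python strings for consistency with the other examples on this page, use hypothetical object names.

```python
# Illustrative Snowflake SQL for three common interview topics.
VIRTUAL_WAREHOUSE = """
-- Compute is provisioned as independent virtual warehouses that can be
-- resized or auto-suspended without touching storage.
CREATE WAREHOUSE IF NOT EXISTS reporting_wh
  WAREHOUSE_SIZE = 'XSMALL'
  AUTO_SUSPEND   = 60    -- seconds idle before suspending
  AUTO_RESUME    = TRUE;
"""

TIME_TRAVEL = """
-- Query a table as it existed five minutes ago.
SELECT * FROM orders AT(OFFSET => -60 * 5);
"""

SEMI_STRUCTURED = """
-- Query a JSON payload stored in a VARIANT column.
SELECT payload:customer.id::STRING AS customer_id
FROM raw_events;
"""
```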

Closing Remark

As you explore opportunities in the Snowflake job market in India, remember to showcase your expertise in handling data analytics and warehousing using this powerful platform. Prepare thoroughly for interviews, demonstrate your skills confidently, and keep abreast of the latest developments in Snowflake to stay competitive in the tech industry. Good luck with your job search!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
