
1714 Snowflake Jobs - Page 8

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

11.0 - 14.0 years

16 - 27 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Roles and Responsibilities: Design, develop, and maintain large-scale data pipelines using Azure Data Factory (ADF) to extract, transform, and load data from various sources into the Snowflake Data Warehouse. Develop complex SQL queries to optimize database performance and troubleshoot issues in Snowflake tables. Collaborate with cross-functional teams to gather reporting requirements and design scalable solutions using Power BI. Ensure high-quality data modeling by creating logical and physical models for large datasets. Troubleshoot technical issues related to ETL processes, data quality, and performance tuning.
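For context, a minimal sketch of the ADF-to-Snowflake load-and-transform pattern this posting describes, using the snowflake-connector-python library; the account, stage, and table names are hypothetical placeholders, not the employer's actual objects.

import snowflake.connector

# Connect to Snowflake (credentials and object names below are placeholders).
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)
try:
    cur = conn.cursor()
    # Load files staged by an upstream ADF copy activity into a raw table.
    cur.execute("""
        COPY INTO RAW.SALES_ORDERS
        FROM @RAW.ADF_STAGE/sales_orders/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    # Simple transform step: refresh a curated table consumed by Power BI.
    cur.execute("""
        CREATE OR REPLACE TABLE CURATED.DAILY_SALES AS
        SELECT order_date, region, SUM(amount) AS total_amount
        FROM RAW.SALES_ORDERS
        GROUP BY order_date, region
    """)
finally:
    conn.close()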

Posted 1 week ago

Apply

8.0 - 13.0 years

17 - 25 Lacs

Bangalore Rural, Bengaluru

Work from Office

Call: 7738402343 | Mail: divyani@contactxindia.com
Role & responsibilities: Snowflake with Python, day shift. More than 8 years of IT experience, specifically in the data engineering stream. Should possess development skills in Snowflake, basic IBM DataStage (or any other ETL tool), expert-level SQL, and basics of Python/PySpark and AWS, along with high proficiency in Oracle SQL. Hands-on experience handling databases, along with experience in a scheduling tool such as Control-M. Excellent customer service, interpersonal, communication and team-collaboration skills. Excellent debugging skills in databases, having played a key-member role in earlier projects. Excellent SQL and PL/SQL coding (development) skills. Ability to identify and implement process and/or application improvements. Must be able to work on multiple simultaneous tasks with limited supervision. Able to follow change management procedures and internal guidelines. Any relevant technical certification in DataStage is a plus.

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

Senior Data Analyst - Procurement | Bengaluru, India
We are seeking a skilled and highly motivated Senior Data Analyst to join our Procurement team. In this role, you will analyze various data management tools, optimize software license usage, and provide key insights to stakeholders regarding software license performance and cost-effectiveness. The ideal candidate will possess a strong technical background and a keen analytical mindset, with a focus on improving the overall reporting landscape and helping drive efficiency. Candidates must be available to work and attend meetings during Pacific Standard Time (PST).
Key Responsibilities: This position is responsible for the data analysis and reporting needs of the Procurement team (SS&P) in support of Business Services. The role covers data needs related, but not limited, to Snowflake, Coupa, ServiceNow, and financial metrics supporting strategic priorities, and participates in a variety of special projects involving data analysis, business operations, and data management.
Managing Procurement reports (e.g., Excel, Tableau, Power BI) with precision and effectiveness, based on data coming from a variety of business systems and data warehouses (Coupa, Snowflake, SCOUT, ServiceNow).
Providing first-line resolution and support for data requests, as well as partnering with the IT and Sec-Ops teams accordingly.
Growing the analytics and Business Intelligence related to software contracts, metrics, cost, and budget supported by Procurement and Sourcing to help build a data-driven culture:
Software Asset Inventory: maintain an accurate and up-to-date inventory of software licenses, versions, and deployments throughout the organisation.
License Compliance: ensure adherence to software licensing agreements, monitor license usage, and take necessary measures to address any non-compliance issues.
License Optimization: analyse software usage patterns, identify opportunities for optimization, and implement strategies to optimize license allocation and reduce costs (e.g., co-terming agreements).
Cost Optimization: focus on software license cost management, tracking, and forecasting.
Collaboration: work with cross-functional teams (e.g., IT, Security, Bu) to improve data quality and reporting.
Renewals: provide up-to-date and reliable information to ensure software contracts are renewed on time.
Design BI dashboards, scorecards, charts/graphs, drill-downs, and dynamic reports to meet new information needs.
Lead the creation of a catalog of Key Performance Indicators and the documentation of their supporting business requirements, data models, calculation rules, and metadata.
Qualifications: Bachelor's degree in Computer Science, Information Systems Management, or a related field, or an equivalent combination of education and/or experience. Advanced Excel skills required; ServiceNow or BI tool certifications are a plus.
Experience: 5+ years of experience as a Data Analyst, with hands-on experience in data analysis and creating visualizations using BI tools. Expertise with SQL, Snowflake, Power BI, and Tableau. Strong understanding of Procurement/Sourcing processes, especially in SaaS and Cloud Services. Strong knowledge of ServiceNow ITAM and/or ITSM. Experience with cloud-based analytics or data governance. Proficient in PowerPoint, with the ability to influence decision-making using data-driven insights. Excellent problem-solving and data analysis skills, with a focus on cloud performance metrics and cost optimization. Strong written and verbal communication skills, with the ability to explain complex technical concepts to both technical and non-technical stakeholders. Ability to manage multiple priorities and projects in a fast-paced environment. Excited to work in a fast-growing global environment and able to thrive with both autonomy and collaboration.

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

Senior Data Engineer - Enterprise Data Platform
Get to know Data Engineering: Okta's Business Operations team is on a mission to accelerate Okta's scale and growth. We bring world-class business acumen and technology expertise to every interaction. We also drive cross-functional collaboration and are focused on delivering measurable business outcomes. Business Operations strives to deliver amazing technology experiences for our employees and ensure that our offices have all the technology that is needed for the future of work. The Data Engineering team is focused on building platforms and capabilities that are utilized across the organization by sales, marketing, engineering, finance, product, and operations. The ideal candidate will have a strong engineering background with the ability to tie engineering initiatives to business impact. You will be part of a team doing detailed technical designs, development, and implementation of applications using cutting-edge technology stacks.
The Senior Data Engineer Opportunity: A Senior Data Engineer is responsible for designing, building, and maintaining scalable solutions. This role involves collaborating with data engineers, analysts, scientists and other engineers to ensure data availability, integrity, and security. The ideal candidate will have a strong background in cloud platforms, data warehousing, infrastructure as code, and continuous integration/continuous deployment (CI/CD) practices.
What you'll be doing: Design, develop, and maintain scalable data platforms using AWS, Snowflake, dbt, and Databricks. Use Terraform to manage infrastructure as code, ensuring consistent and reproducible environments. Develop and maintain CI/CD pipelines for data platform applications using GitHub and GitLab. Troubleshoot and resolve issues related to data infrastructure and workflows. Containerize applications and services using Docker to ensure portability and scalability. Conduct vulnerability scans and apply necessary patches to ensure the security and integrity of the data platform. Work with data engineers to design and implement Secure Development Lifecycle practices and security tooling (DAST, SAST, SCA, secret scanning) in automated CI/CD pipelines. Ensure data security and compliance with industry standards and regulations. Stay updated with the latest trends and technologies in data engineering and cloud platforms.
What we are looking for: BS in Computer Science, Engineering or another quantitative field of study. 5+ years in a data engineering role. 5+ years of experience working with SQL and ETL tools such as Airflow and dbt, with relational and columnar MPP databases like Snowflake or Redshift, and hands-on experience with AWS (e.g., S3, Lambda, EMR, EC2, EKS). 2+ years of experience managing CI/CD infrastructure, with strong proficiency in tools like GitHub Actions, Jenkins, ArgoCD, GitLab, or any CI/CD tool to streamline deployment pipelines and ensure efficient software delivery. 2+ years of experience with Java, Python, Go, or similar backend languages. Experience with Terraform for infrastructure as code. Experience with Docker and containerization technologies. Experience working with lakehouse architectures such as Databricks and file formats like Iceberg and Delta. Experience in designing, building, and managing complex deployment pipelines.
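As an illustration of the orchestration work listed above, here is a minimal sketch assuming Apache Airflow 2.4+ and a dbt project available on the worker; the DAG id, schedule, and script paths are hypothetical.

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_warehouse_refresh",   # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="0 2 * * *",               # daily at 02:00
    catchup=False,
) as dag:
    # Extract step: placeholder script that lands raw data in object storage.
    extract = BashOperator(
        task_id="extract_to_s3",
        bash_command="python /opt/pipelines/extract.py",
    )
    # Transform step: run dbt models against the warehouse.
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/warehouse",
    )
    extract >> transform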

Posted 1 week ago

Apply

3.0 - 8.0 years

11 - 16 Lacs

Bengaluru

Work from Office

As a Senior People Data Ops Product Manager, you will own enterprise data products and assets, such as curated data sets, semantic layers and foundational data pipelines of the HR pyramid. You will drive product strategy, discovery, delivery, and evolution of these data products to ensure they meet the analytical, operational, and compliance needs of Target's diverse user base.
About the Role: As a Senior People Data Ops Product Manager, you will work in Target's product model and partner closely with engineers, data scientists, UX designers, governance and privacy experts, and business stakeholders to build and scale data products that deliver measurable outcomes. You will be accountable for understanding customer needs and business objectives, and translating them into a clear roadmap of capabilities that drive adoption and impact.
You will: Define the vision, strategy, and roadmap for one or more data products, aligning with enterprise data and business priorities. Deeply understand your users (analysts, data scientists, engineers, and business leaders) and their data needs. Translate complex requirements into clear user stories, acceptance criteria, and product specifications. Drive decisions about data sourcing, quality, access, and governance in partnership with engineering, privacy, and legal teams. Prioritize work in a unified backlog across discovery, design, data modeling, engineering, and testing. Ensure high-quality, reliable, and trusted data is accessible and usable for a variety of analytical and operational use cases. Evangelize the value of your data product across the enterprise and support enablement and adoption efforts. Use data to make decisions about your product's performance, identify improvements, and evaluate new opportunities.
About You: Must have a minimum three-year college degree in computer science or information technology. A total of 9+ years of experience, of which 5+ years is product management experience, ideally with a focus on data products, platforms, or analytical tooling. Deep understanding of data concepts: data modeling, governance, quality, privacy, and lifecycle management. Experience delivering products in agile environments (e.g., user stories, iterative development, scrum teams). Ability to translate business needs into technical requirements and communicate effectively across roles. Demonstrated success in building products that support data consumers like analysts, engineers, and business users. Experience working with modern data technologies (e.g., Snowflake, Hadoop, Airflow, GCP, etc.) is a plus. Strategic thinker with strong analytical and problem-solving skills. Strong leadership, collaboration, and communication skills. Willing to coach and mentor team members.

Posted 1 week ago

Apply

2.0 - 6.0 years

3 - 8 Lacs

Pune, Sangli

Work from Office

We are looking for a Data Science Engineer with strong experience in ETL development and Talend to join our data and analytics team. The ideal candidate will be responsible for designing robust data pipelines, enabling analytics and AI solutions, and working on scalable data science projects that drive business value.
Key Responsibilities: Design, build, and maintain ETL pipelines using Talend Data Integration. Extract data from multiple sources (databases, APIs, flat files) and load it into data warehouses or lakes. Ensure data integrity, quality, and performance tuning in ETL workflows. Implement job scheduling, logging, and exception handling using Talend and orchestration tools. Prepare and transform large datasets for analytics and machine learning use cases. Build and deploy data pipelines that feed predictive models and business intelligence platforms. Collaborate with data scientists to operationalize ML models and ensure they run efficiently at scale. Assist in feature engineering, data labeling, and model monitoring processes.
Required Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, Data Engineering, or a related field. 3+ years of experience in ETL development, with at least 2 years using Talend. Proficiency in SQL and Python (for data transformation or automation). Hands-on experience with data integration, data modeling, and data warehousing. Must have strong knowledge of cloud platforms such as AWS, Azure, or Google Cloud. Familiarity with big data tools like Spark, Hadoop, or Kafka is a plus.
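To make the transformation and feature-preparation responsibilities above concrete, a minimal pandas sketch with hypothetical column names; in practice the same logic could live inside a Talend job, with Python reserved for automation or ML preparation.

import pandas as pd

def prepare_order_features(raw: pd.DataFrame) -> pd.DataFrame:
    """Clean a raw extract and derive simple features for analytics/ML."""
    df = raw.copy()
    # Basic data-quality handling: drop duplicates and rows missing keys.
    df = df.drop_duplicates(subset="order_id").dropna(subset=["order_id", "customer_id"])
    # Type normalisation and simple derived features.
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
    df["order_month"] = df["order_date"].dt.to_period("M").astype(str)
    df["is_high_value"] = (df["amount"] > 10_000).astype(int)
    return df

# Tiny in-memory frame standing in for an extract landed by a Talend job.
sample = pd.DataFrame({
    "order_id": [1, 1, 2],
    "customer_id": ["C1", "C1", "C2"],
    "order_date": ["2025-01-03", "2025-01-03", "2025-02-10"],
    "amount": [12000, 12000, 800],
})
print(prepare_order_features(sample))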

Posted 1 week ago

Apply

3.0 - 5.0 years

3 - 7 Lacs

Gurugram

Work from Office

About the Opportunity
Job Type: Application | 23 June 2025
Title: Expert Engineer
Department: GPS Technology
Location: Gurugram, India
Reports To: Project Manager
Level: Grade 4
We're proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together - and supporting each other - all over the world. So, join our [insert name of team/ business area] team and feel like you're part of something bigger.
About your team: The Technology function provides IT services to the Fidelity International business, globally. These include the development and support of business applications that underpin our revenue, operational, compliance, finance, legal, customer service and marketing functions. The broader technology organisation incorporates infrastructure services that the firm relies on to operate on a day-to-day basis, including data centre, networks, proximity services, security, voice, incident management and remediation.
About your role: An Expert Engineer is a seasoned technology expert who is highly skilled in programming, engineering and problem-solving. They can deliver value to the business faster and with superlative quality. Their code and designs meet business, technical, non-functional and operational requirements most of the time without defects and incidents. So, if a relentless focus and drive towards technical and engineering excellence, along with adding value to the business, excites you, this is absolutely the role for you. If technical discussions and whiteboarding with peers excite you, and pair programming and code reviews add fuel to your tank, come, we are looking for you. Understand system requirements; analyse, design, develop and test the application systems following the defined standards. The candidate is expected to display professional ethics in his/her approach to work and exhibit a high level of ownership within a demanding working environment.
About you - Essential Skills: You have excellent software designing, programming, engineering, and problem-solving skills. Strong experience working on data ingestion, transformation and distribution using AWS or Snowflake. Exposure to SnowSQL, Snowpipe, role-based access controls, and ETL/ELT tools like NiFi, Matillion, or dbt. Hands-on working knowledge of EC2, Lambda, ECS/EKS, DynamoDB and VPCs. Familiar with building data pipelines that leverage the full power and best practices of Snowflake, as well as how to integrate common technologies that work with Snowflake (code CI/CD, monitoring, orchestration, data quality). Experience with designing, implementing, and overseeing the integration of data systems and ETL processes through SnapLogic. Designing data ingestion and orchestration pipelines using AWS and Control-M. Establish strategies for data extraction, ingestion, transformation, automation, and consumption. Experience in data lake concepts with structured, semi-structured and unstructured data. Experience in creating CI/CD processes for Snowflake. Experience in strategies for data testing, data quality, code quality, and code coverage. Ability, willingness and openness to experiment with, evaluate and adopt new technologies. Passion for technology, problem solving and teamwork. Go-getter, with the ability to navigate across roles, functions and business units to collaborate, drive agreements and take changes from the drawing board to live systems. Lifelong learner who can bring contemporary practices, technologies and ways of working to the organization.
Effective collaborator, adept at using all effective modes of communication and collaboration tools. Experience delivering on data-related non-functional requirements, such as: hands-on experience dealing with large volumes of historical data across markets/geographies; manipulating, processing, and extracting value from large, disconnected datasets; building water-tight data quality gates on investment management data; generic handling of standard business scenarios in case of missing data, holidays, out-of-tolerance errors, etc.
Experience and Qualification: B.E./B.Tech. or M.C.A. in Computer Science from a reputed university. Total 7 to 10 years of relevant experience.
Personal Characteristics: Good interpersonal and communication skills. Strong team player. Ability to work at a strategic and tactical level. Ability to convey strong messages in a polite but firm manner. Self-motivation is essential; should demonstrate commitment to high-quality design and development. Ability to develop and maintain working relationships with several stakeholders. Flexibility and an open attitude to change. Problem-solving skills with the ability to think laterally, and to think with a medium-term and long-term perspective. Ability to learn and quickly get familiar with a complex business and technology environment.
Feel rewarded: For starters, we'll offer you a comprehensive benefits package. We'll value your wellbeing and support your development. And we'll be as flexible as we can about where and when you work, finding a balance that works for all of us. It's all part of our commitment to making you feel motivated by the work you do and happy to be part of our team.

Posted 1 week ago

Apply

4.0 - 7.0 years

8 - 12 Lacs

Gurugram

Work from Office

About the Opportunity
Job Type: Application | 21 June 2025
Title: Senior Analyst - Data Scientist
Department: Data Value
Location: Gurgaon
Reports To: Suman Kaur
Level: 3
We're proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together - and supporting each other - all over the world. So, join our Data Value team and feel like you're part of something bigger.
About your team: The Data Value team drives the renewed focus of extracting value from Fidelity's data for business and client insights, working as one voice with the business, technology, and data teams. The team's vision is to create measurable business impact by leveraging technology and utilising the skills to generate valuable insights and streamline engagements. The Data Science function within Data Value supports Fidelity International's Sales, Marketing, Propositions, Risk, Finance, Customer Service and HR teams across the globe. The key objectives of the function are: to develop deep customer insights for our businesses, helping them segment and target customers more effectively; to develop a fact-based understanding of sales trends and identify actionable sales growth opportunities for each of our sales channels; to understand customer preferences in terms of products, service attributes and marketing activity to help refine each of these; to help develop new service lines, e.g. customer analytics for key IFAs, DC clients, individual clients etc.; and to develop market and competitive intelligence in our key markets to help shape our business planning in those markets. The function works directly with business heads and other senior stakeholders to identify areas of analytics, define problem statements and develop key insights.
About your role: You will be expected to take a leading role in developing the Data Science and Advanced Analytics solutions for our business. This will involve: engaging with the key stakeholders to understand Fidelity's sales, marketing, client services and propositions context; implementing advanced analytics solutions on on-premises/cloud platforms, developing proof-of-concepts and engaging with the internal and external ecosystem to progress the proof of concepts to production; engaging and collaborating with other internal teams like data engineering, DevOps, technology etc. for development of new tools, capabilities, and solutions; and maximizing adoption of cloud-based advanced analytics solutions by building out sandbox analytics environments using Snowflake, AWS, Adobe, Salesforce.
About you - Key Responsibilities:
Developing and delivering Data Science solutions for business (40%): Partner with the internal (FIL teams) and external ecosystem to design and deliver advanced analytics enabled Data Science solutions. Create advanced analytics solutions on quantitative and text data using Artificial Intelligence, Machine Learning and NLP techniques. Create compelling visualisations that enable the smooth consumption of predictions and insights for customer benefit.
Stakeholder management (30%): Works with channel heads/stakeholders and other sponsors to understand the business problem and translate it into an appropriate analytics solution. Engages with key stakeholders for smooth execution, delivery, and implementation of solutions.
Adoption of cloud-enabled Data Science solutions (20%): Maximize adoption of cloud-based advanced analytics solutions. Build out sandbox analytics environments using Snowflake, AWS, Adobe, Salesforce. Deploy solutions in production while adhering to best practices involving Model Explainability, MLOps, Feature Stores, Model Management, Responsible AI etc.
Collaboration and ownership (10%): Share knowledge and best practices with the team, including coaching or training in some of the deep learning/machine learning methodologies. Provide mentoring, coaching, and consulting advice and guidance to staff, e.g. analytic methodologies, data recommendations. Take complete independent ownership of the projects and initiatives in the team with minimal support.
Experience and Qualifications Required
Qualifications: Engineer from IIT / Masters in a field related to Data Science, Economics or Mathematics (Tier 1 institutions like ISI, Delhi School of Economics) / M.B.A. from a tier 1 institution.
Must-have skills and experience: Overall, 8+ years of experience in Data Science and Analytics. 5+ years of hands-on experience in statistical modelling, machine learning techniques, natural language processing, or deep learning. 5+ years of experience in Python/Machine Learning/Deep Learning. Excellent problem-solving skills. Should be able to run analytics applications such as Python and SAS and interpret statistical results. Implementation of models with clear, measurable outcomes.
Good-to-have skills and experience: Ability to engage in discussion with senior stakeholders on defining business problems, designing analysis projects, and articulating analytical insights to stakeholders. Experience on Spark/Hadoop/Big Data platforms is a plus. Experience with unstructured data and big data. Experience with secondary data and knowledge of primary market research is a plus. Ability to independently own and manage projects with minimal support. Excellent analytical skills and a strong sense for structure and logic. Ability to develop, test and validate hypotheses.
Feel rewarded: For starters, we'll offer you a comprehensive benefits package. We'll value your wellbeing and support your development. And we'll be as flexible as we can about where and when you work, finding a balance that works for all of us. It's all part of our commitment to making you feel motivated by the work you do and happy to be part of our team. For more about our work, our approach to dynamic working and how you could build your future here, visit careers.fidelityinternational.com.
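As a flavour of the NLP work on text data mentioned above, a minimal scikit-learn sketch (TF-IDF features plus logistic regression); the documents and labels are toy placeholders, not Fidelity data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy labelled examples standing in for real client interactions.
docs = [
    "client asked about pension transfer fees",
    "complaint about delayed fund switch",
    "positive feedback on the new mobile app",
    "question on ISA contribution limits",
]
labels = ["query", "complaint", "praise", "query"]

# TF-IDF features feeding a simple linear classifier.
clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(docs, labels)
print(clf.predict(["why was my fund switch delayed"]))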

Posted 1 week ago

Apply

9.0 - 14.0 years

13 - 17 Lacs

Gurugram

Work from Office

About the Opportunity
Job Type: Application | 20 June 2025
Title: Data Scientist, Risk Data Analytics
Department: Data Value
Location: Gurgaon
Reports To: Associate Director, Risk Data Analytics
Level: 5
We're proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together - and supporting each other - all over the world. So, join our Data Value team and feel like you're part of something bigger.
About Global Risk: The Global Risk team in Fidelity covers the management oversight of Fidelity's risk profile, including key risk frameworks, policies and procedures and oversight and challenge processes. The team partners with the businesses to ensure Fidelity manages its risk profile within defined risk appetite. The team comprises risk specialists covering all facets of risk management, including investment, financial, non-financial and strategic risk. As part of a broader General Counsel team, the Risk team collaborates closely with Compliance, Legal, Tax and Corporate Sustainability colleagues.
About the Risk Data Analytics Hub: The vision of the Risk Data Analytics Hub is to establish a data-centric risk function that is forward-thinking, resilient, and proactive. The hub's mission is to enhance risk management processes and unlock innovative opportunities in the ever-changing risk and business landscape. The Hub has made significant strides in Investment Risk, delivering prominent contributions such as Fund Performance Monitoring, Fund Aggregate Exposures, Fund Market Risk, Fund Liquidity Risk, and other comprehensive monitoring and reporting dashboards. These tools have been crucial in supporting risk oversight and regulatory submissions. The Hub's goal is to scale this capability across global risk, using data-driven insights to uncover hidden patterns and predict emerging risks. This will enable decision-makers to prioritise actions that align with business objectives. The approach is to dismantle silos and foster collaboration across the global risk team, introducing new tools, techniques, and innovation themes to enhance agility.
About your role: You will be expected to take a leading role in developing the Data Science and Advanced Analytics solutions for our business. This will involve: engaging with the key stakeholders to understand various subject areas in the Global Risk team, including Investment Risk, Non-Financial Risk, Enterprise Risk, Model Risk, Enterprise Resilience etc.; implementing advanced analytics solutions on on-premises/cloud platforms, developing proof-of-concepts and engaging with the internal and external ecosystem to progress the proof of concepts to production; engaging and collaborating with other internal teams like Data Lake, Data Engineering, DevOps/MLOps, Technology etc. for development of new tools, capabilities, and solutions; maximizing adoption of cloud-based advanced analytics solutions by building out sandbox analytics environments using Snowflake, AWS, Adobe, Salesforce; and supporting delivered models and infrastructure on AWS, including data changes and model tuning.
About you - Key Responsibilities:
Developing and delivering Data Science solutions for business (40%): Partner with the internal (FIL teams) and external ecosystem to design and deliver advanced analytics enabled Data Science solutions. Create advanced analytics solutions on quantitative and text data using Artificial Intelligence, Machine Learning and NLP techniques. Create compelling visualisations that enable the smooth consumption of predictions and insights for customer benefit.
Stakeholder management (30%): Works with Risk SMEs/managers, stakeholders and sponsors to understand the business problem and translate it into an appropriate analytics solution. Engages with key stakeholders for smooth execution, delivery, implementation and maintenance of solutions.
Adoption of cloud-enabled Data Science solutions (20%): Maximize adoption of cloud-based advanced analytics solutions. Build out sandbox analytics environments using Snowflake, AWS, Adobe, Salesforce. Deploy solutions in production while adhering to best practices involving Model Explainability, MLOps, Feature Stores, Model Management, Responsible AI etc.
Collaboration and ownership (10%): Share knowledge and best practices with the team, including coaching or training in some of the deep learning/machine learning methodologies. Provide mentoring, coaching, and consulting advice and guidance to staff, e.g. analytic methodologies, data recommendations. Take complete independent ownership of the projects and initiatives in the team with minimal support.
Experience and Qualifications Required
Qualifications: Engineer from IIT / Masters in a field related to Data Science, Economics or Mathematics (Tier 1 institutions like ISI, Delhi School of Economics) / M.B.A. from a tier 1 institution.
Must-have skills and experience: Overall, 9+ years of experience in Data Science and Analytics. 5+ years of hands-on experience in statistical modelling, machine learning techniques, natural language processing, or deep learning. 5+ years of experience in Python/Machine Learning/Deep Learning. Excellent problem-solving skills. Should be able to run analytics applications such as Python and SAS and interpret statistical results. Implementation of models with clear, measurable outcomes.
Good-to-have skills and experience: Ability to engage in discussion with senior stakeholders on defining business problems, designing analysis projects, and articulating analytical insights to stakeholders. Experience on Spark/Hadoop/Big Data platforms is a plus. Experience with unstructured data and big data. Experience with secondary data and knowledge of primary market research is a plus. Ability to independently own and manage projects with minimal support. Excellent analytical skills and a strong sense for structure and logic. Ability to develop, test and validate hypotheses.

Posted 1 week ago

Apply

5.0 - 7.0 years

5 - 8 Lacs

Pune

Work from Office

Job Summary: Cummins is seeking a skilled Data Engineer to support the development, maintenance, and optimization of our enterprise data and analytics platform. This role involves hands-on experience in software development, ETL processes, and data warehousing, with strong exposure to tools like Snowflake, OBIEE, and Power BI. The engineer will collaborate with cross-functional teams, transforming data into actionable insights that enable business agility and scale. Please note: while the role is categorized as remote, it will follow a hybrid work model based out of our Pune office.
Key Responsibilities: Design, develop, and maintain ETL pipelines using Snowflake and related data transformation tools. Build and automate data integration workflows that extract, transform, and load data from various sources, including Oracle EBS and other enterprise systems. Analyze, monitor, and troubleshoot data quality and integrity issues using standardized tools and methods. Develop and maintain dashboards and reports using OBIEE, Power BI, and other visualization tools for business stakeholders. Work with IT and Business teams to gather reporting requirements and translate them into scalable technical solutions. Participate in data modeling and storage architecture using star and snowflake schema designs. Contribute to the implementation of data governance, metadata management, and access control mechanisms. Maintain documentation for solutions and participate in testing and validation activities. Support migration and replication of data using tools such as Qlik Replicate and contribute to cloud-based data architecture. Apply agile and DevOps methodologies to continuously improve data delivery and quality assurance processes.
Why Join Cummins? Opportunity to work with a global leader in power solutions and digital transformation. Be part of a collaborative and inclusive team culture. Access to cutting-edge data platforms and tools. Exposure to enterprise-scale data challenges and finance domain expertise. Drive impact through data innovation and process improvement.
Competencies: Data Extraction & Transformation - ability to perform ETL activities from varied sources with high data accuracy. Programming - capable of writing and testing efficient code using industry standards and version control systems. Data Quality Management - detect and correct data issues for better decision-making. Solution Documentation - clearly document processes, models, and code for reuse and collaboration. Solution Validation - test and validate changes or solutions based on customer requirements. Problem Solving - address technical challenges systematically to ensure effective resolution and prevention. Customer Focus - understand business requirements and deliver user-centric data solutions. Communication & Collaboration - work effectively across teams to meet shared goals. Values Differences - promote inclusion by valuing diverse perspectives and backgrounds.
Education, Licenses, Certifications: Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or a related technical discipline. Certifications in data engineering or relevant tools (Snowflake, Power BI, etc.) are a plus.
Experience - must-have skills: 5-7 years of experience in data engineering or software development, preferably within a finance or enterprise IT environment. Proficient in ETL tools, SQL, and data warehouse development. Proficient in Snowflake, Power BI, and OBIEE reporting platforms. Must have worked on implementations using these tools and technologies. Strong understanding of data warehousing principles, including schema design (star/snowflake), ER modeling, and relational databases. Working knowledge of Oracle databases and Oracle EBS structures.
Preferred Skills: Experience with Qlik Replicate, data replication, or data migration tools. Familiarity with data governance, data quality frameworks, and metadata management. Exposure to cloud-based architectures, Big Data platforms (e.g., Spark, Hive, Kafka), and distributed storage systems (e.g., HBase, MongoDB). Understanding of agile methodologies (Scrum, Kanban) and DevOps practices for continuous delivery and improvement.
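To illustrate the data quality monitoring mentioned above, a minimal rule-based check in Python with hypothetical table and column names; production checks would typically run inside the ETL pipeline before data is published to OBIEE or Power BI.

import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list:
    """Return a list of human-readable rule violations (empty means clean)."""
    failures = []
    if df["invoice_id"].isnull().any():
        failures.append("invoice_id contains nulls")
    if df["invoice_id"].duplicated().any():
        failures.append("invoice_id is not unique")
    if (df["amount"] < 0).any():
        failures.append("negative invoice amounts found")
    return failures

# Small frame standing in for an Oracle EBS extract.
extract = pd.DataFrame({
    "invoice_id": [101, 102, 102],
    "amount": [250.0, -5.0, 90.0],
})
issues = run_quality_checks(extract)
print(issues or "all checks passed")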

Posted 1 week ago

Apply

5.0 - 10.0 years

18 - 22 Lacs

Bengaluru

Hybrid

We are looking for a candidate seasoned in handling Data Warehousing challenges; someone who enjoys learning new technologies and does not hesitate to bring his/her perspective to the table. We are looking for someone who is enthusiastic about working in a team and can own and deliver long-term projects to completion.
Responsibilities: • Contribute to the team's vision and articulate strategies to have fundamental impact at our massive scale. • You will need a product-focused mindset. It is essential for you to understand business requirements and architect systems that will scale and extend to accommodate those needs. • Diagnose and solve complex problems in distributed systems, develop and document technical solutions and sequence work to make fast, iterative deliveries and improvements. • Build and maintain high-performance, fault-tolerant, and scalable distributed systems that can handle our massive scale. • Provide solid leadership within your very own problem space, through a data-driven approach, robust software designs, and effective delegation. • Participate in, or spearhead, design reviews with peers and stakeholders to adopt what’s best suited amongst available technologies. • Review code developed by other developers and provide feedback to ensure best practices (e.g., checking code in, accuracy, testability, and efficiency). • Automate cloud infrastructure, services, and observability. • Develop CI/CD pipelines and testing automation (nice to have). • Establish and uphold best engineering practices through thorough code and design reviews and improved processes and tools. • Groom junior engineers through mentoring and delegation. • Drive a culture of trust, respect, and inclusion within your team.
Minimum Qualifications: • Bachelor’s degree in Computer Science, Engineering or a related field, or equivalent training, fellowship, or work experience. • Minimum 5 years of experience curating data and hands-on experience working with ETL/ELT tools. • Strong overall programming skills; able to write modular, maintainable code, preferably in Python and SQL. • Strong data warehousing concepts and SQL skills; understanding of SQL, dimensional modelling, and at least one relational database. • Experience with AWS. • Exposure to Snowflake and ingesting data into it, or exposure to similar tools. • Humble, collaborative team player, willing to step up and support your colleagues. • Effective communication, problem-solving and interpersonal skills. • Commitment to growing deeper in the knowledge and understanding of how to improve our existing applications.
Preferred Qualifications: • Experience with the following tools: DBT, Fivetran, Airflow. • Knowledge and experience in Spark, Hadoop 2.0, and its ecosystem. • Experience with automation frameworks/tools like Git and Jenkins.
Primary Skills: Snowflake, Python, SQL, DBT. Secondary Skills: Fivetran, Airflow, Git, Jenkins, AWS, SQL DBM.
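As an example of the modular Python-plus-SQL style this posting asks for, a minimal sketch of an idempotent incremental load into Snowflake using MERGE; connection handling is omitted and the table and column names are hypothetical.

# SQL kept as a named constant so it can be reviewed and tested separately.
UPSERT_CUSTOMERS_SQL = """
MERGE INTO CURATED.CUSTOMERS AS tgt
USING RAW.CUSTOMERS_STAGE AS src
  ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN UPDATE SET
  tgt.email = src.email,
  tgt.updated_at = src.updated_at
WHEN NOT MATCHED THEN INSERT (customer_id, email, updated_at)
  VALUES (src.customer_id, src.email, src.updated_at)
"""

def upsert_customers(cursor) -> int:
    """Run the MERGE and return the number of rows affected."""
    cursor.execute(UPSERT_CUSTOMERS_SQL)
    row = cursor.fetchone()  # Snowflake returns counts of rows inserted/updated.
    return sum(row) if row else 0

Because re-running the MERGE against the same staged data produces the same result, the load can be retried safely by an orchestrator such as Airflow.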

Posted 1 week ago

Apply

5.0 - 10.0 years

9 - 13 Lacs

Hyderabad

Work from Office

Overview: As an Analyst, Data Modeler, your focus will be to partner with D&A Data Foundation team members to create data models for global projects. This includes independently analysing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse, satisfying project requirements. The role will advocate Enterprise Architecture, Data Design, and D&A standards and best practices. You will be performing all aspects of data modeling, working closely with the Data Governance, Data Engineering and Data Architecture teams. As a member of the data modeling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy to support future, unknown use cases with minimal rework. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. You will establish data design patterns that will drive flexible, scalable, and efficient data models to maximize value and reuse.
Responsibilities: Complete conceptual, logical and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse or other cloud data warehousing technologies. Govern data design/modeling documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned. Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting. Support assigned project contractors (both on- and off-shore), orienting new contractors to standards, best practices, and tools. Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of the changes or new development. Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework. Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse. Partner with IT, data engineering and other teams to ensure the enterprise data model incorporates key dimensions needed for proper management: business and financial policies, security, local-market regulatory rules, and consumer privacy by design principles (PII management), all linked across fundamental identity foundations. Drive collaborative reviews of design, code, data, and security feature implementation performed by data engineers to drive data product development. Assist with data planning, sourcing, collection, profiling, and transformation. Create source-to-target mappings for ETL and BI developers. Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical and data lakes; data streaming (consumption/production); data in transit. Develop reusable data models based on cloud-centric, code-first approaches to data management and cleansing.
Partner with the Data Governance team to standardize their classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders. Support data lineage and mapping of source system data to canonical data stores for research, analysis and productization.
Qualifications: Bachelor's degree required in Computer Science, Data Management/Analytics/Science, Information Systems, Software Engineering or a related technology discipline. 5+ years of overall technology experience that includes at least 2+ years of data modeling and systems architecture. Around 2+ years of experience with data lake infrastructure, data warehousing, and data analytics tools. 2+ years of experience developing enterprise data models. Experience in building solutions in the retail or supply chain space. Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM models). Experience with integration of multi-cloud services (Azure) with on-premises technologies. Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations. Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets. Experience with at least one MPP database technology such as Redshift, Synapse, Teradata or Snowflake. Experience with version control systems like GitHub and deployment and CI tools. Experience with Azure Data Factory, Databricks and Azure Machine Learning is a plus. Experience with metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with business intelligence tools (such as Power BI). Excellent verbal and written communication and collaboration skills.

Posted 1 week ago

Apply

3.0 - 5.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Role Description: We are seeking a motivated and detail-oriented QA & Test Automation Support Engineer (Test Automation Engineering) to join our Translation Hub quality assurance team. In this role, you will work closely with, and under the guidance of, a Product Architect and Software Engineers to support the testing and validation of the Translation Hub Platform, with a focus on translation quality, validation, and API testing. You will contribute to ensuring data integrity across complex data pipelines, assist in validating the business logic behind machine translations, and participate in both manual and automated testing processes. This is a hands-on, growth-oriented position ideal for someone looking to deepen their skills in software quality engineering, API testing, and test automation.
Roles & Responsibilities: Execute automated test suites across various layers, including translation pipelines, APIs, and proxy layers. Analyse test automation results, identify failures or inconsistencies, and assist in root cause analysis. Generate and share test execution reports with stakeholders, summarizing pass/fail rates and key issues. Collaborate with the development team to triage automation failures and escalate critical issues. Assist in test data preparation and test environment setup. Perform manual validation as needed to support automation gaps or verify edge cases. Log and track defects in JIRA (or similar tools), and follow up on resolutions with relevant teams. Help maintain test documentation, including test case updates, runbooks, and regression packs. Contribute to test automation scripting, framework maintenance, CI/CD integration and Veracode integrations. Validate translation workflows across platforms like AWS and integration with existing internal tools. Participate in testing of FastAPIs, glossary creation, and post-editing of MT documents with the use of translation memory. Maintain test documentation and work closely with software engineers and analysts to ensure data quality.
Must-Have Skills: Hands-on experience executing and analysing automated test suites. 1+ years of strong experience in a test automation specialization; 3 to 5 years of overall experience in QA & Test Automation is expected. Strong understanding of test result analysis and defect tracking (JIRA or similar). Basic knowledge of test automation scripting (Python, Java, or similar). Proficient in SQL for data validation. Experience with API testing (REST & FastAPIs) and schema validation. Exposure to cloud data platforms like AWS, Databricks, or Snowflake. Understanding of CI/CD tools (e.g., Jenkins, GitLab CI). Good communication and collaboration skills. Strong attention to detail and a problem-solving mindset.
Good-to-Have Skills: Experience with computer-aided translation tools. Contributions to internal quality dashboards or data observability systems. Experience working with agile testing methodologies such as Scaled Agile. Familiarity with automated testing frameworks like Selenium, JUnit, TestNG, or PyTest.
Education and Professional Certifications: Bachelor's/Master's degree in computer science and engineering preferred.
Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.
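To make the API-testing responsibilities above concrete, a minimal pytest-style sketch using the requests library; the endpoint URL, payload, and response fields are hypothetical placeholders for the Translation Hub APIs.

import requests

BASE_URL = "https://translation-hub.example.com/api"  # placeholder

def test_translate_endpoint_returns_expected_fields():
    payload = {"text": "hello world", "source": "en", "target": "de"}
    resp = requests.post(f"{BASE_URL}/translate", json=payload, timeout=10)
    # Status and basic contract checks in place of full schema validation.
    assert resp.status_code == 200
    body = resp.json()
    assert set(body) >= {"translated_text", "target"}
    assert body["target"] == "de"
    assert body["translated_text"].strip() != ""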

Posted 1 week ago

Apply

4.0 - 9.0 years

4 - 8 Lacs

Chennai

Work from Office

Your Role: As a senior software engineer with Capgemini, you should have 4+ years of experience as a Snowflake Data Engineer with a strong project track record. In this role you will bring: strong customer orientation, decision-making, problem-solving, communication and presentation skills; very good judgement and the ability to shape compelling solutions and solve unstructured problems with assumptions; very good collaboration skills and the ability to interact with multi-cultural and multi-functional teams spread across geographies; strong executive presence and spirit; superb leadership and team-building skills with the ability to build consensus and achieve goals through collaboration rather than direct line authority.
Your Profile: 4+ years of experience in data warehousing and cloud data solutions. Minimum 2+ years of hands-on experience with end-to-end Snowflake implementation. Experience in developing data architecture and roadmap strategies, with the knowledge to establish data governance and quality frameworks within Snowflake. Expertise or strong knowledge in Snowflake best practices, performance tuning, and query optimisation. Experience with cloud platforms like AWS or Azure and familiarity with Snowflake's integration with these environments. Strong knowledge of at least one cloud (AWS or Azure) is mandatory.
Skills (competencies): Ab Initio Agile (Software Development Framework) Apache Hadoop AWS Airflow AWS Athena AWS Code Pipeline AWS EFS AWS EMR AWS Redshift AWS S3 Azure ADLS Gen2 Azure Data Factory Azure Data Lake Storage Azure Databricks Azure Event Hub Azure Stream Analytics Azure Synapse Bitbucket Change Management Client Centricity Collaboration Continuous Integration and Continuous Delivery (CI/CD) Data Architecture Patterns Data Format Analysis Data Governance Data Modeling Data Validation Data Vault Modeling Database Schema Design Decision-Making DevOps Dimensional Modeling GCP Big Table GCP BigQuery GCP Cloud Storage GCP DataFlow GCP DataProc Git Google Big Table Google Data Proc Greenplum HQL IBM DataStage IBM DB2 Industry Standard Data Modeling (FSLDM) Industry Standard Data Modeling (IBM FSDM) Influencing Informatica IICS Inmon methodology JavaScript Jenkins Kimball Linux - Redhat Negotiation Netezza NewSQL Oracle Exadata Performance Tuning Perl Platform Update Management Project Management PySpark Python R RDD Optimization SantOs SaS Scala Spark Shell Script Snowflake SPARK SPARK Code Optimization SQL Stakeholder Management Sun Solaris Synapse Talend Teradata Time Management Ubuntu Vendor Management

Posted 1 week ago

Apply

4.0 - 9.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Your Role: As a senior software engineer with Capgemini, you should have 4+ years of experience as an Azure Data Engineer with a strong project track record. In this role you will bring: strong customer orientation, decision-making, problem-solving, communication and presentation skills; very good judgement and the ability to shape compelling solutions and solve unstructured problems with assumptions; very good collaboration skills and the ability to interact with multi-cultural and multi-functional teams spread across geographies; strong executive presence and spirit; superb leadership and team-building skills with the ability to build consensus and achieve goals through collaboration rather than direct line authority.
Your Profile: Experience with Azure Databricks and Azure Data Factory. Experience with Azure data components such as Azure SQL Database, Azure SQL Warehouse, and Synapse Analytics. Experience in Python/PySpark/Scala/Hive programming. Experience with Azure Databricks/ADB. Experience building CI/CD pipelines in data environments.
Primary Skills: ADF (Azure Data Factory) or ADB (Azure Databricks). Secondary Skills: Excellent verbal and written communication and interpersonal skills.
Skills (competencies): Ab Initio Agile (Software Development Framework) Apache Hadoop AWS Airflow AWS Athena AWS Code Pipeline AWS EFS AWS EMR AWS Redshift AWS S3 Azure ADLS Gen2 Azure Data Factory Azure Data Lake Storage Azure Databricks Azure Event Hub Azure Stream Analytics Azure Synapse Bitbucket Change Management Client Centricity Collaboration Continuous Integration and Continuous Delivery (CI/CD) Data Architecture Patterns Data Format Analysis Data Governance Data Modeling Data Validation Data Vault Modeling Database Schema Design Decision-Making DevOps Dimensional Modeling GCP Big Table GCP BigQuery GCP Cloud Storage GCP DataFlow GCP DataProc Git Google Big Table Google Data Proc Greenplum HQL IBM DataStage IBM DB2 Industry Standard Data Modeling (FSLDM) Industry Standard Data Modeling (IBM FSDM) Influencing Informatica IICS Inmon methodology JavaScript Jenkins Kimball Linux - Redhat Negotiation Netezza NewSQL Oracle Exadata Performance Tuning Perl Platform Update Management Project Management PySpark Python R RDD Optimization SantOs SaS Scala Spark Shell Script Snowflake SPARK SPARK Code Optimization SQL Stakeholder Management Sun Solaris Synapse Talend Teradata Time Management Ubuntu Vendor Management

Posted 1 week ago

Apply

4.0 - 9.0 years

6 - 10 Lacs

Mumbai

Work from Office

Your Role: As a senior software engineer with Capgemini, you should have 4+ years of experience as a GCP Data Engineer with a strong project track record. In this role you will bring: strong customer orientation, decision-making, problem-solving, communication and presentation skills; very good judgement and the ability to shape compelling solutions and solve unstructured problems with assumptions; very good collaboration skills and the ability to interact with multi-cultural and multi-functional teams spread across geographies; strong executive presence and spirit; superb leadership and team-building skills with the ability to build consensus and achieve goals through collaboration rather than direct line authority.
Your Profile: Minimum 4 years' experience in GCP data engineering. Strong data engineering experience using Java or Python programming languages or Spark on Google Cloud. Should have worked on handling big data. Strong communication skills. Experience in Agile methodologies. ETL and ELT skills, data movement skills, and data processing skills. Certification as a Professional Google Cloud Data Engineer will be an added advantage. Proven analytical skills and a problem-solving attitude. Ability to function effectively in a cross-team environment.
Primary Skills: GCP data engineering; Java/Python/Spark on GCP, with programming experience in at least one language (Python, Java, or PySpark); GCS (Cloud Storage), Composer (Airflow) and BigQuery experience; experience building data pipelines using the above skills.
Skills (competencies): Ab Initio Agile (Software Development Framework) Apache Hadoop AWS Airflow AWS Athena AWS Code Pipeline AWS EFS AWS EMR AWS Redshift AWS S3 Azure ADLS Gen2 Azure Data Factory Azure Data Lake Storage Azure Databricks Azure Event Hub Azure Stream Analytics Azure Synapse Bitbucket Change Management Client Centricity Collaboration Continuous Integration and Continuous Delivery (CI/CD) Data Architecture Patterns Data Format Analysis Data Governance Data Modeling Data Validation Data Vault Modeling Database Schema Design Decision-Making DevOps Dimensional Modeling GCP Big Table GCP BigQuery GCP Cloud Storage GCP DataFlow GCP DataProc Git Google Big Table Google Data Proc Greenplum HQL IBM DataStage IBM DB2 Industry Standard Data Modeling (FSLDM) Industry Standard Data Modeling (IBM FSDM) Influencing Informatica IICS Inmon methodology JavaScript Jenkins Kimball Linux - Redhat Negotiation Netezza NewSQL Oracle Exadata Performance Tuning Perl Platform Update Management Project Management PySpark Python R RDD Optimization SantOs SaS Scala Spark Shell Script Snowflake SPARK SPARK Code Optimization SQL Stakeholder Management Sun Solaris Synapse Talend Teradata Time Management Ubuntu Vendor Management

Posted 1 week ago

Apply

2.0 - 7.0 years

6 - 10 Lacs

Bengaluru

Work from Office

About the Position: This is an opportunity for Engineering Managers to join our Data Platform organization, which is passionate about scaling high-volume, low-latency, distributed data-platform services and data products. In this role, you will get to work with engineers throughout the organization to build foundational infrastructure that allows Okta to scale for years to come. As the manager of the Data Foundations team in the Data Platform Group, your team will be responsible for designing, building, and deploying the foundational systems that power our data analytics and ML. Our analytics infrastructure stack sits on top of many modern technologies, including Kinesis, Flink, ElasticSearch, and Snowflake, and we are now looking to adopt GCP. We are seeking an Engineering Manager with a strong technical background and excellent communication skills to join us and partner with senior leadership as a thought leader in our strategic Data & ML projects. Our platform projects have a directive from engineering leadership to make Okta a leader in the use of data and machine learning to improve end-user security and to expand that core competency across the rest of engineering. You will have a sizable impact on the direction, design and implementation of the data solutions to these problems.
What you will be doing: Recruit and mentor a globally distributed and talented group of diverse employees. Collaborate with Product, Design, QA, Documentation, Customer Support, Program Management, TechOps, and other scrum teams. Engage in technical design and discussions and also help drive technical architecture. Ensure the happiness and productivity of the team's software engineers. Communicate the vision of our product to external entities. Help mitigate risk (technical, product, personnel). Utilize professional acumen to improve Okta's technology, product, and engineering. Participate in relevant engineering workgroups and on-call rotations. Foster, enable and promote innovation. Define team metrics and meet the productivity goals of the organization. Track and manage cloud infrastructure costs in partnership with Okta's FinOps team.
What you will bring to the role: A track record of leading or managing high-performing platform teams (minimum 2 years of experience). Experience with end-to-end project delivery, building roadmaps through operational sustainability. Strong facilitation skills (design, requirements gathering, progress and status sessions). Production experience with distributed systems running in AWS; GCP a bonus. Passion for automation and leveraging agile software development methodologies. Prior experience with a data platform. Prior experience in software development, with hands-on experience as an IC using cloud-based distributed computing technologies, including: messaging systems such as Kinesis or Kafka; data processing systems like Flink, Spark, or Beam; storage and compute systems such as Snowflake or Hadoop; coordinators and schedulers like the ones in Kubernetes, Hadoop, or Mesos. Experience developing and tuning highly scalable distributed systems. Experience with reliability engineering, specifically in areas such as data quality, data observability and incident management.
And extra credit if you have experience in any of the following: deep Data & ML experience; multi-cloud experience; federal cloud environments / FedRAMP; contributions to the development of distributed systems, or use of one or more at high volume or criticality, such as Kafka or Hadoop.

Posted 1 week ago

Apply

6.0 - 9.0 years

8 - 11 Lacs

Bengaluru

Work from Office


We are looking for an experienced Salesforce Marketing Cloud Developer. The role requires hands-on expertise in Salesforce Marketing Cloud, especially in MCP and Interaction Studio. Responsibilities include designing scalable, secure, and maintainable end-to-end solutions across customer engagement channels and ensuring seamless data integration with platforms like Snowflake, Data Cloud, Java-based CRM, and Adobe Analytics. The developer will conduct code reviews, enforce development best practices, and provide technical guidance. Coordination with enterprise architecture teams and review boards to ensure alignment with strategic goals is also expected. Additional knowledge of CDP/Data Cloud, Mobile & Web Studio, and Java integrations is desirable.

Posted 1 week ago

Apply

6.0 - 11.0 years

8 - 13 Lacs

Bengaluru

Work from Office


We are looking for an experienced Analytics Engineer to join Okta's enterprise data team. The ideal candidate will have a strong background in SaaS subscription and product analytics, a passion for providing customer usage insights to internal stakeholders, and experience organizing complex data into consumable data assets. In this role, you will focus on subscription analytics and product utilization insights, and will partner with Product, Engineering, Customer Success, and Pricing to implement enhancements and build end-to-end customer subscription insights into new products.

Requirements
• Experience in customer analytics, product analytics, and go-to-market analytics
• Experience in the SaaS business and product domain, as well as Salesforce
• Proficiency in SQL, ETL tools, GitHub, and data integration technologies, including familiarity with data modeling techniques, database design, and query optimization
• Experience with data languages like R and Python; knowledge of data processing frameworks like PySpark is also beneficial
• Experience working with cloud-based data solutions like AWS or Google Cloud Platform and cloud-based data warehousing tools like Snowflake
• Strong analytical and problem-solving skills to understand complex data problems and provide effective solutions
• Experience building reports and visualizations to represent data in Tableau or Looker
• Ability to communicate effectively with stakeholders, work cross-functionally, and communicate with technical and non-technical teams
• Familiarity with the Scrum operating model and tracking work via a tool such as Jira
• 6+ years in data engineering, data warehousing, or business intelligence
• BS in computer science, data science, statistics, mathematics, or a related field

Responsibilities
• Engage with Product and Engineering to implement product definitions into subscription and product analytics, building new insights and updates to existing key data products
• Analyze a variety of data sources, structures, and metadata, and develop mappings, transformation rules, aggregations, and ETL specifications
• Configure scalable and reliable data pipelines to consume, integrate, and analyze large volumes of complex data from different sources to support the growing needs of subscription and product analytics
• Partner with internal stakeholders to understand user needs, implement user feedback, and develop reporting and dashboards focused on subscription analytics
• Work closely with other Analytics team members to optimize data self-service, reusability, and performance, and ensure the validity of the source of truth
• Enhance reusable knowledge of the models and metrics through documentation and use of the data catalog
• Ensure data security and compliance by implementing appropriate data access controls, encryption, and auditing mechanisms
• Take ownership of the successful completion of project activities

Nice to Have
• Experience in data science and AI/ML concepts and techniques

Posted 1 week ago

Apply

3.0 - 8.0 years

13 - 23 Lacs

Gurugram

Work from Office


Job Title: Data Engineer
Location: Gurugram
Experience: 3-8 years
Job Type: Full-time

About the Role
We are looking for a skilled Data Engineer to join our team. The ideal candidate will have hands-on experience designing, building, and maintaining scalable data pipelines using PySpark, SQL, and cloud technologies like AWS and Snowflake. You will work closely with data scientists, analysts, and other stakeholders to deliver reliable, high-performance data solutions.

Key Responsibilities
• Design, develop, and optimize scalable ETL/ELT pipelines using PySpark and SQL for processing large datasets
• Build and maintain data warehouses and data lakes on Snowflake and AWS
• Implement data ingestion, transformation, and integration from diverse sources
• Collaborate with cross-functional teams to understand data requirements and deliver solutions
• Monitor and troubleshoot pipeline performance, ensuring data quality and reliability
• Automate data workflows and optimize data storage for cost and efficiency
• Stay up to date with industry best practices and emerging technologies in data engineering

Required Skills & Qualifications
• Strong experience with PySpark for big data processing and ETL pipeline development
• Proficiency in writing complex SQL queries and optimizing them for performance
• Hands-on experience with AWS services such as S3, Glue, Lambda, EC2, and Redshift
• Expertise in designing and managing data warehousing solutions using Snowflake
• Familiarity with data modeling, schema design, and data governance
• Experience with version control systems (e.g., Git) and CI/CD pipelines
• Strong problem-solving skills and attention to detail
• Excellent communication and collaboration skills

Preferred Qualifications
• Experience with orchestration tools like Apache Airflow
• Knowledge of Python scripting beyond PySpark
• Understanding of data security and compliance standards
• Experience with containerization tools like Docker and Kubernetes

Education
Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field.
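
To make the responsibilities above more concrete, here is a minimal, hypothetical PySpark sketch of the kind of ETL pipeline this listing describes: reading raw CSV data from S3, applying a simple transformation, and writing partitioned Parquet output ready for loading into Snowflake. The bucket names, columns, and paths are illustrative assumptions, not part of the original posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# All object names below (buckets, columns, tables) are hypothetical examples.
spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

# Extract: read raw CSV files landed in S3.
raw = spark.read.csv("s3://example-raw-bucket/orders/", header=True, inferSchema=True)

# Transform: keep completed orders and aggregate revenue per customer per day.
daily_revenue = (
    raw.filter(F.col("status") == "COMPLETED")
       .withColumn("order_date", F.to_date("order_ts"))
       .groupBy("customer_id", "order_date")
       .agg(
           F.sum("amount").alias("total_amount"),
           F.count("order_id").alias("order_count"),
       )
)

# Load: write partitioned Parquet that a downstream COPY INTO can ingest into Snowflake.
(daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders_daily_revenue/"))

spark.stop()
```

In practice, a job like this would typically be parameterized and scheduled by an orchestrator such as Apache Airflow, as mentioned in the preferred qualifications.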

Posted 1 week ago

Apply

8.0 - 13.0 years

20 - 32 Lacs

Bengaluru

Hybrid


Job Title: Senior Data Engineer
Experience: 9+ years
Location: Whitefield, Bangalore
Notice Period: Serving notice period or immediate joiners

Role & Responsibilities:
• Design and implement scalable data pipelines for ingesting, transforming, and loading data from diverse sources and tools
• Develop robust data models to support analytical and reporting requirements
• Automate data engineering processes using appropriate scripting languages and frameworks
• Collaborate with engineers, process managers, and data scientists to gather requirements and deliver effective data solutions
• Serve as a liaison between engineering and business teams on all data-related initiatives
• Automate monitoring and alerting for data pipelines, products, and dashboards; provide support for issue resolution, including on-call responsibilities
• Write optimized and modular SQL queries, including view and table creation as required
• Define and implement best practices for data validation, ensuring alignment with enterprise standards
• Manage QA data environments, including test data creation and maintenance

Qualifications:
• 9+ years of experience in data engineering or a related field
• Proven experience with Agile software development practices
• Strong SQL skills and experience working with both RDBMS and NoSQL databases
• Hands-on experience with cloud-based data warehousing platforms such as Snowflake and Amazon Redshift
• Proficiency with cloud technologies, preferably AWS
• Deep knowledge of data modeling, data warehousing, and data lake concepts
• Practical experience with ETL/ELT tools and frameworks
• 5+ years of experience in application development using Python, SQL, Scala, or Java
• Experience working with real-time data streaming and associated platforms

Note: The candidate should be based in Bangalore, as one technical round has to be conducted face-to-face at the Bellandur, Bangalore office.

Posted 1 week ago

Apply

5.0 - 9.0 years

15 - 25 Lacs

Pune, Chennai, Bengaluru

Hybrid


Key Result Areas and Activities:
• Design, develop, and deploy ETL/ELT solutions on premises or in the cloud
• Transform data with stored procedures
• Develop reports (MicroStrategy/Power BI)
• Create and maintain comprehensive documentation for data pipelines, configurations, and processes
• Ensure data quality and integrity through effective data management practices
• Monitor and optimize data pipeline performance
• Troubleshoot and resolve data-related issues

Technical Experience:
Must Have
• Good experience in Azure Synapse
• Good experience in ADF
• Good experience in Snowflake and stored procedures
• Experience with ETL/ELT processes, data warehousing, and data modelling
• Experience with data quality frameworks, monitoring tools, and job scheduling
• Knowledge of data formats like JSON, XML, CSV, and Parquet
• Fluent English (strong written, verbal, and presentation skills)
• Agile methodology and tools like JIRA
• Good communication and formal skills

Good To Have
• Good experience in MicroStrategy and Power BI
• Experience in scripting languages such as Python, Java, or Shell scripting
• Familiarity with the Azure cloud platform and cloud data services

Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field
• 3+ years of experience in Azure Synapse

Qualities:
• Experience with or knowledge of Agile software development methodologies
• Can influence and implement change; demonstrates confidence, strength of conviction, and sound decision-making
• Believes in dealing with problems head-on; approaches them in a logical and systematic manner; is persistent and patient; can tackle problems independently; is not over-critical of the factors that led to a problem and is practical about them; follows up with developers on related issues
• Able to consult, write, and present persuasively

Posted 1 week ago

Apply

4.0 - 9.0 years

6 - 16 Lacs

Navi Mumbai, Pune, Mumbai (All Areas)

Hybrid


Snowflake administrator with a minimum of 3 years of experience.

Posted 1 week ago

Apply

3.0 - 7.0 years

5 - 9 Lacs

Pune, Gurugram

Work from Office


What will your job look like?
• Work with business teams and data analysts to understand business requirements.
• Design and develop cloud solutions using Databricks Spark or Snowflake to support efficient data analytics models: create derived and business-ready datasets and extracts, and integrate systems with open-source tools.
• Implement Big Data solutions in production and provide production support; investigate and troubleshoot production issues and provide fixes.
• Take ownership of tasks and proactively identify and communicate any potential issues/risks and their impacts.
• Analyze, design, and support various change requests and fast-track requirements; understand and adapt to rapidly changing business requirements.

All you need is...
• Minimum Bachelor's degree in Science/IT/Computing or equivalent.
• 3-7 years of total experience in development, mainly around Scala or Python and all related technologies.
• Proficiency in Spark 2.x applications in Scala or Python.
• Proficiency in writing Hive SQL batch jobs and scripting.
• 3+ years of experience developing applications in Databricks (or Databricks Developer certification), or 3+ years of experience in Snowflake.
• Experience leading design and development for Databricks or cloud projects.
• Strong experience in scripting (Shell or Python).
• Strong SQL-based data analytical skills.
• Relevant experience in cloud projects is a plus.
• Experience with streaming on Kafka is a plus.
• Experience developing applications using Apache Iceberg is a plus.
• Hadoop/Spark/Java/Azure certifications are a plus.
• Excellent written and verbal communication, to communicate with development and project management leadership.
• Excellent collaboration and teamwork skills to work with Amdocs, the client, and other third-party vendors.

Why you will love this job:
• The chance to serve as a specialist in software and technology.
• You will take an active role in technical mentoring within the team.
• We provide stellar benefits, from health to dental to paid time off and parental leave!

Posted 1 week ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Hybrid


About the Team
The Data Platform team is responsible for the foundational data services, systems, and data products for Okta that benefit our users. Today, the Data Platform team solves challenges and enables:
• Streaming analytics
• Interactive end-user reporting
• A data and ML platform for Okta to scale
• Telemetry of our products and data

Our elite team is fast, creative, and flexible. We encourage ownership. We expect great things from our engineers and reward them with stimulating new projects, new technologies, and the chance to have significant equity in a company. Okta is about to change the cloud computing landscape forever.

About the Position
This is an opportunity for experienced Software Engineers to join our fast-growing Data Platform organization, which is passionate about scaling high-volume, low-latency, distributed data-platform services and data products. In this role, you will work with engineers throughout the organization to build foundational infrastructure that allows Okta to scale for years to come. As a member of the Data Platform team, you will be responsible for designing, building, and deploying the systems that power our data analytics and ML. Our analytics infrastructure stack sits on top of many modern technologies, including Kinesis, Flink, ElasticSearch, and Snowflake. We are looking for experienced Software Engineers who can help design and own the building, deployment, and optimization of the streaming infrastructure. This project has a directive from engineering leadership to make Okta a leader in the use of data and machine learning to improve end-user security, and to expand that core competency across the rest of engineering. You will have a sizable impact on the direction, design, and implementation of the solutions to these problems.

Job Duties and Responsibilities:
• Design, implement, and own data-intensive, high-performance, scalable platform components
• Work with engineering teams, architects, and cross-functional partners on the development, design, and implementation of projects
• Conduct and participate in design reviews, code reviews, analysis, and performance tuning
• Coach and mentor engineers to help scale up the engineering organization
• Debug production issues across services and multiple levels of the stack

Required Knowledge, Skills, and Abilities:
• 5+ years of experience in an object-oriented language, preferably Java
• Hands-on experience using cloud-based distributed computing technologies, including messaging systems such as Kinesis and Kafka; data processing systems like Flink, Spark, and Beam; storage and compute systems such as Snowflake and Hadoop; and coordinators and schedulers like those in Kubernetes, Hadoop, and Mesos
• Experience developing and tuning highly scalable distributed systems
• Excellent grasp of software engineering principles
• Solid understanding of multithreading, garbage collection, and memory management
• Experience with reliability engineering, specifically in areas such as data quality, data observability, and incident management

Nice to have:
• Maintained security, encryption, identity management, or authentication infrastructure
• Leveraged major public cloud providers to build mission-critical, high-volume services
• Hands-on experience developing data integration applications for large-scale (petabyte-scale) environments, with experience in both batch and online systems
• Contributed to the development of distributed systems, or used one or more at high volume or criticality, such as Kafka or Hadoop
• Experience developing Kubernetes-based services on the AWS stack

Posted 1 week ago

Apply

Exploring Snowflake Jobs in India

Snowflake has become one of the most sought-after skills in the tech industry, with a growing demand for professionals who are proficient in handling data warehousing and analytics using this cloud-based platform. In India, the job market for Snowflake roles is flourishing, offering numerous opportunities for job seekers with the right skill set.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Mumbai
  5. Chennai

These cities are known for their thriving tech industries and have a high demand for Snowflake professionals.

Average Salary Range

The average salary range for Snowflake professionals in India varies based on experience level:
  • Entry-level: INR 6-8 lakhs per annum
  • Mid-level: INR 10-15 lakhs per annum
  • Experienced: INR 18-25 lakhs per annum

Career Path

A typical career path in Snowflake may include roles such as:
  • Junior Snowflake Developer
  • Snowflake Developer
  • Senior Snowflake Developer
  • Snowflake Architect
  • Snowflake Consultant
  • Snowflake Administrator

Related Skills

In addition to expertise in Snowflake, professionals in this field are often expected to have knowledge of:
  • SQL
  • Data warehousing concepts
  • ETL tools
  • Cloud platforms (AWS, Azure, GCP)
  • Database management

Interview Questions

  • What is Snowflake and how does it differ from traditional data warehousing solutions? (basic)
  • Explain how Snowflake handles data storage and compute resources in the cloud. (medium)
  • How do you optimize query performance in Snowflake? (medium)
  • Can you explain how data sharing works in Snowflake? (medium)
  • What are the different stages in the Snowflake architecture? (advanced)
  • How do you handle data encryption in Snowflake? (medium)
  • Describe a challenging project you worked on using Snowflake and how you overcame obstacles. (advanced)
  • How does Snowflake ensure data security and compliance? (medium)
  • What are the benefits of using Snowflake over traditional data warehouses? (basic)
  • Explain the concept of virtual warehouses in Snowflake. (medium)
  • How do you monitor and troubleshoot performance issues in Snowflake? (medium)
  • Can you discuss your experience with Snowflake's semi-structured data handling capabilities? (advanced)
  • What are Snowflake's data loading options and best practices? (medium)
  • How do you manage access control and permissions in Snowflake? (medium)
  • Describe a scenario where you had to optimize a Snowflake data pipeline for efficiency. (advanced)
  • How do you handle versioning and change management in Snowflake? (medium)
  • What are the limitations of Snowflake and how would you work around them? (advanced)
  • Explain how Snowflake supports semi-structured data formats like JSON and XML. (medium)
  • What are the considerations for scaling Snowflake for large datasets and high concurrency? (advanced)
  • How do you approach data modeling in Snowflake compared to traditional databases? (medium)
  • Discuss your experience with Snowflake's time travel and data retention features. (medium)
  • How would you migrate an on-premise data warehouse to Snowflake in a production environment? (advanced)
  • What are the best practices for data governance and metadata management in Snowflake? (medium)
  • How do you ensure data quality and integrity in Snowflake pipelines? (medium)
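
Several of the questions above (virtual warehouses, query performance, semi-structured data) are easier to discuss with a concrete snippet in hand. The following is a minimal, hypothetical sketch using the snowflake-connector-python package to run an analytical query on a named virtual warehouse; the account, credentials, and table names are placeholders rather than a recommended production setup, which would normally use key-pair authentication or SSO and proper secrets management.

```python
import snowflake.connector

# Hypothetical connection details -- placeholders only, not a real account.
conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",
    user="ANALYST_USER",
    password="********",
    role="ANALYST_ROLE",
    warehouse="REPORTING_WH",   # a virtual warehouse is an independent compute cluster
    database="SALES_DB",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()

    # Virtual warehouses separate compute from storage; sizing them up or down
    # is one of the main levers for query performance and cost.
    cur.execute("ALTER WAREHOUSE REPORTING_WH SET WAREHOUSE_SIZE = 'SMALL'")

    # A typical analytical query against a hypothetical ORDERS table.
    cur.execute("""
        SELECT region,
               DATE_TRUNC('month', order_date) AS month,
               SUM(amount) AS revenue
        FROM orders
        WHERE order_date >= DATEADD('year', -1, CURRENT_DATE())
        GROUP BY region, month
        ORDER BY month, region
    """)
    for region, month, revenue in cur.fetchall():
        print(region, month, revenue)
finally:
    conn.close()
```

Being able to walk through a snippet like this, and to explain what happens at the warehouse and storage layers when it runs, is a practical way to prepare for the medium-level questions listed above.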

Closing Remark

As you explore opportunities in the Snowflake job market in India, remember to showcase your expertise in handling data analytics and warehousing using this powerful platform. Prepare thoroughly for interviews, demonstrate your skills confidently, and keep abreast of the latest developments in Snowflake to stay competitive in the tech industry. Good luck with your job search!
