5 - 7 years
0 - 0 Lacs
Bengaluru
Work from Office
Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing pipelines for ingesting, wrangling, transforming, and joining data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding expertise in Python, PySpark, and SQL. Works independently and has a deep understanding of data warehousing solutions, including Snowflake, BigQuery, Lakehouse, and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions.
Outcomes: Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance, and performance through design patterns and the reuse of proven solutions. Interpret requirements to create optimal architecture and design, developing solutions in accordance with specifications. Document and communicate milestones/stages for end-to-end delivery. Code adhering to best coding standards; debug and test solutions to deliver best-in-class quality. Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency. Validate results with user representatives, integrating the overall solution seamlessly. Develop and manage data storage solutions, including relational databases, NoSQL databases, and data lakes. Stay updated on the latest trends and best practices in data engineering, cloud technologies, and big data tools. Influence and improve customer satisfaction through effective data solutions.
Measures of Outcomes: Adherence to engineering processes and standards; adherence to schedule/timelines; adherence to SLAs where applicable; number of defects post delivery; number of non-compliance issues; reduction in recurrence of known defects; quick turnaround of production bugs; completion of applicable technical/domain certifications; completion of all mandatory training requirements; efficiency improvements in data pipelines (e.g. reduced resource consumption, faster run times); average time to detect, respond to, and resolve pipeline failures or data issues; number of data security incidents or compliance breaches.
Outputs Expected:
Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates, and checklists. Review code for team members and peers.
Documentation: Create and review templates, checklists, guidelines, and standards for design processes and development. Create and review deliverable documents, including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, test cases, and results.
Configuration: Define and govern the configuration management plan. Ensure compliance within the team.
Testing: Review and create unit test cases, scenarios, and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed.
Domain Relevance: Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise.
Project Management: Manage the delivery of modules effectively.
Defect Management: Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality.
Estimation: Create and provide input for effort and size estimation for projects.
Knowledge Management: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.
Release Management: Execute and monitor the release process to ensure smooth transitions.
Design Contribution: Contribute to the creation of high-level design (HLD), low-level design (LLD), and system architecture for applications, business components, and data models.
Customer Interface: Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations.
Team Management: Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives.
Certifications: Obtain relevant domain and technology certifications to stay competitive and informed.
Skill Examples: Proficiency in SQL, Python, or other programming languages used for data manipulation. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery). Conduct tests on data pipelines and evaluate results against data quality and performance specifications. Experience in performance tuning of data processes. Expertise in designing and optimizing data warehouses for cost efficiency. Ability to apply and optimize data models for efficient storage, retrieval, and processing of large datasets. Capacity to clearly explain and communicate design and development aspects to customers. Ability to estimate time and resource requirements for developing and debugging features or components.
Knowledge Examples: Knowledge of various ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, Azure ADF, and ADLF. Proficiency in SQL for analytics, including windowing functions. Understanding of data schemas and models relevant to various business contexts. Familiarity with domain-related data and its implications. Expertise in data warehousing optimization techniques. Knowledge of data security concepts and best practices. Familiarity with design patterns and frameworks in data engineering.
Additional Comments:
Data Analysis and Modeling: - Perform exploratory data analysis (EDA) to uncover insights and inform model development. - Develop, validate, and deploy machine learning models using Python and relevant libraries (e.g., scikit-learn, TensorFlow, PyTorch). - Implement statistical analysis and hypothesis testing to drive data-driven decision-making.
Data Engineering: - Design, build, and maintain scalable data pipelines to process and transform large datasets. - Collaborate with data engineers to ensure data quality and system reliability. - Optimize data storage solutions for efficient querying and analysis.
Software Development: - Write clean, maintainable, and efficient code in Python. - Develop APIs and integrate machine learning models into production systems. - Implement best practices for version control, testing, and continuous integration.
Gen AI: - Utilize generative AI (Gen AI) tools and frameworks to enhance data analysis and model development. - Integrate Gen AI solutions into existing workflows and systems. - Stay updated with the latest advancements in Gen AI and apply them to relevant projects.
Collaboration and Communication: - Work closely with cross-functional teams to understand business needs and translate them into technical requirements. - Communicate findings and insights to both technical and non-technical stakeholders. - Provide mentorship and guidance to junior team members. Required Skills: Python, API, TensorFlow, Gen AI
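To make the pipeline expectations above concrete, here is a minimal PySpark sketch of an ingest-wrangle-join-load flow; the file paths, column names, and output location are illustrative assumptions rather than details from the posting.

```python
# Minimal PySpark sketch: ingest, wrangle, join, and write data.
# All paths and column names below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example_pipeline").getOrCreate()

# Ingest two sources (paths are placeholders).
orders = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")
customers = spark.read.parquet("s3://example-bucket/raw/customers/")

# Wrangle: cast types and drop rows missing key fields.
orders_clean = (
    orders
    .withColumn("order_amount", F.col("order_amount").cast("double"))
    .dropna(subset=["order_id", "customer_id"])
)

# Join and aggregate.
result = (
    orders_clean.join(customers, on="customer_id", how="inner")
    .groupBy("customer_id", "region")
    .agg(F.sum("order_amount").alias("total_amount"),
         F.count("order_id").alias("order_count"))
)

# Load: write partitioned Parquet to a curated zone.
result.write.mode("overwrite").partitionBy("region").parquet(
    "s3://example-bucket/curated/customer_order_summary/"
)
spark.stop()
```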
Posted 2 months ago
1 - 3 years
2 - 4 Lacs
Chennai
Work from Office
Key Responsibilities:
- Design and implement scalable, secure, and high-performance data architectures.
- Define and enforce data modelling standards, best practices, and data governance policies.
- Develop data strategies that align with business objectives and future growth.
- Design, optimize, and maintain relational and NoSQL databases (e.g., PostgreSQL, MySQL, ClickHouse, MongoDB).
- Implement and manage data warehouses and data lakes for analytics and reporting (e.g., Snowflake, BigQuery, Redshift).
- Ensure efficient ETL/ELT processes for data integration and transformation.
- Define and enforce data security policies, access controls, and compliance with regulations (GDPR, HIPAA, etc.).
- Implement data lineage, data cataloging, and metadata management solutions.
- Work closely with data engineers, analysts, and business teams to understand data requirements.
- Provide technical guidance and mentorship to data teams.
- Collaborate with IT and DevOps teams to ensure seamless integration of data solutions.
- Optimize query performance, indexing strategies, and storage solutions.
- Evaluate and integrate emerging technologies such as AI/ML-driven data processing and real-time analytics.
Posted 2 months ago
6 - 10 years
10 - 20 Lacs
Pune, Hyderabad, Kolkata
Hybrid
Snowflake Data Engineer
Overall Experience: 6+ years of experience in Snowflake and Python; knowledge of Power BI is an added advantage. 5+ years of experience in data preparation and BI projects, understanding business requirements in a BI context and working with data models to transform raw data into meaningful data using Snowflake and Python.
- Design and create data models that define the structure and relationships of various data elements within the organization, including conceptual, logical, and physical data models, which help ensure data accuracy, consistency, and integrity.
- Design data integration solutions that allow different systems and applications to share and exchange data seamlessly. This may involve selecting appropriate integration technologies, developing ETL (Extract, Transform, Load) processes, and ensuring data quality during the integration process.
- Create and maintain optimal data pipeline architecture.
- Good knowledge of cloud platforms like AWS/Azure/GCP.
- Good hands-on knowledge of Snowflake is a must, including experience with various data ingestion methods (Snowpipe and others), Time Travel, data sharing, and other Snowflake capabilities.
- Good knowledge of Python/PySpark, including advanced features of Python.
- Support business development efforts (proposals and client presentations).
- Knowledge of EBS modules like Finance, HCM, and Procurement will be an added advantage.
- Ability to thrive in a fast-paced, dynamic, client-facing role where delivering solid work products to exceed high expectations is a measure of success.
- Excellent leadership and interpersonal skills; eager to contribute to a team-oriented environment.
- Strong prioritization and multi-tasking skills with a track record of meeting deadlines.
- Ability to be creative and analytical in a problem-solving environment.
- Effective verbal and written communication skills.
- Adaptable to new environments, people, technologies, and processes; able to manage ambiguity and solve undefined problems.
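As a rough illustration of the Snowflake-plus-Python work described above, the sketch below uses the snowflake-connector-python package to bulk-load staged files and build a reporting table; the account settings, stage, and table names are placeholders, not details from the posting.

```python
# Minimal sketch: load staged files into Snowflake and run a transformation
# using snowflake-connector-python. Stage, table, and warehouse names are
# hypothetical placeholders.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Bulk-load files already uploaded to a named stage.
    cur.execute("""
        COPY INTO STAGING.RAW_SALES
        FROM @RAW_STAGE/sales/
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)
    # Simple transformation into a reporting table.
    cur.execute("""
        CREATE OR REPLACE TABLE REPORTING.DAILY_SALES AS
        SELECT sale_date, region, SUM(amount) AS total_amount
        FROM STAGING.RAW_SALES
        GROUP BY sale_date, region
    """)
finally:
    conn.close()
```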
Posted 2 months ago
6 - 10 years
11 - 21 Lacs
Pune, Hyderabad, Kolkata
Hybrid
Data Warehousing Concepts; Data Transformation; MS SQL Server; Performance Tuning; ETL (SSIS); Snowflake; Reporting/Dashboarding (Power BI, SSRS, Tableau)
Posted 2 months ago
5 - 8 years
15 - 20 Lacs
Delhi NCR, Gurgaon, Noida
Work from Office
Job description Required Skills and Qualifications: 7+ years of experience in Snowflake development, including data modeling, stored procedures, and performance optimization. Strong experience with Azure Cloud Services, including Azure Data Factory (ADF), Azure Storage, and Azure SQL. Proficiency in ETL/ELT processes and data pipeline orchestration. Solid understanding of data warehousing concepts, dimensional modeling, and best practices. Experience with SQL, Python, or Scala for data transformation and scripting. Preferred Qualifications: Snowflake Certification (SnowPro Core or Advanced). Experience with Power BI, Databricks, or Synapse Analytics. Knowledge of streaming data tools (Azure Event Hub, Kafka).
Posted 2 months ago
7 - 12 years
20 - 30 Lacs
Hyderabad
Hybrid
Proficiency in Linux fundamentals and Bash scripting skills. Programming expertise in one or more languages, mainly Python, Go, Scala, C++, or Kotlin. Expertise in Python libraries: Pandas, NumPy, PySpark, Dask. In-depth knowledge of algorithms and data structures. Deep understanding of database systems, e.g., PostgreSQL/MySQL and Microsoft SQL Server. Experience with at least one cloud platform, e.g., AWS, Azure, GCP. Experience with one or more data lakes/data warehouses: Snowflake, Databricks, Redshift, etc. Experience in stream processing: Kafka, Kinesis, etc. Experienced in the implementation of data warehousing solutions. Experienced in the implementation of API solutions and tooling.
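A small pandas/NumPy sketch of the kind of data manipulation this role lists; the input file, column names, and thresholds are assumed purely for illustration.

```python
# Minimal pandas/NumPy data-wrangling sketch. The CSV path and column
# names are hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("events.csv", parse_dates=["event_time"])

# Clean: drop duplicates and fill missing numeric values.
df = df.drop_duplicates(subset=["event_id"])
df["latency_ms"] = df["latency_ms"].fillna(df["latency_ms"].median())

# Derive a feature and aggregate per day.
df["is_slow"] = np.where(df["latency_ms"] > 500, 1, 0)
summary = (
    df.groupby(df["event_time"].dt.date)
      .agg(events=("event_id", "count"),
           p95_latency=("latency_ms", lambda s: s.quantile(0.95)),
           slow_share=("is_slow", "mean"))
)
print(summary.head())
```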
Posted 2 months ago
5 - 10 years
15 - 20 Lacs
Hyderabad
Hybrid
Hello, urgent job opening for Senior Cloud DevOps Engineer @ GlobalData (Hyderabad). The job description is given below; please go through it to understand the requirement. If the requirement matches your profile, share your updated resume at m.salim@globaldata.com with the subject line: Applying for Senior Cloud DevOps Engineer @ GlobalData (Hyd). Please share the following details in the mail: Full Name, Mobile #, Qualification, Company Name, Designation, Total Work Experience (Years), Current CTC, Expected CTC, Notice Period, Current Location/willing to relocate to Hyderabad?
Office Address: 3rd Floor, Jyoti Pinnacle Building, Opp. Prestige IVY League Apartments, Kondapur Road, Hyderabad, Telangana 500081.
Job Description: Senior Cloud DevOps Engineer. We are seeking a highly skilled Senior Cloud DevOps Engineer to join our team and play a pivotal role in the development and maintenance of our robust and scalable infrastructure. You will be responsible for managing our cloud-based infrastructure, especially AWS, ensuring optimal performance, security, and reliability.
Key Tasks and Responsibilities:
- Infrastructure Management: Provision and manage infrastructure on Red Hat OpenShift Service on AWS (ROSA). Configure and manage RabbitMQ, RDS, EC2, Amazon EMR, Apache Solr, and Squid Proxy. Optimize infrastructure performance and resource utilization. Implement automation tools to streamline infrastructure provisioning and management.
- Cloud Platform Expertise: Leverage expertise in AWS, Snowflake, Google Cloud, Azure, and Databricks to architect and implement cloud-native solutions. Migrate and optimize workloads between different cloud platforms. Implement cost-effective strategies for cloud resource utilization.
- Security and Compliance: Implement security best practices to protect infrastructure and data. Monitor and respond to security threats and vulnerabilities. Ensure compliance with industry standards and regulations.
- Monitoring and Troubleshooting: Implement robust monitoring tools to track system performance and identify issues. Troubleshoot and resolve infrastructure and application issues. Optimize system performance and availability.
Required Skills and Experience: Strong proficiency in Red Hat OpenShift Service on AWS (ROSA). Deep understanding of cloud computing concepts and technologies (AWS, Azure, GCP). Expertise in containerization technologies (Docker, Kubernetes). Knowledge of scripting languages (Python, Bash). Experience with configuration management tools. Knowledge of database technologies (MySQL). Experience with message queues (RabbitMQ). Strong problem-solving and troubleshooting skills. Excellent communication and collaboration skills.
Thanks & Regards, Salim (Human Resources)
Posted 2 months ago
7 - 11 years
9 - 13 Lacs
Chennai
Work from Office
Role Overview: We are seeking a highly skilled and motivated Senior Data Engineer to join our growing data team. In this role, you will be responsible for designing, building, and maintaining scalable and robust data pipelines and infrastructure, leveraging Snowflake, DBT, and cloud platforms (AWS, Azure, or GCP). You will play a crucial role in enabling data-driven decision-making by ensuring the availability, quality, and performance of our data assets. Responsibilities: Data Pipeline Development and Management: Design, develop, and maintain efficient and scalable data pipelines for ingesting, transforming, and loading data from various sources into Snowflake. Implement data quality checks and monitoring to ensure data accuracy and reliability. Optimize data pipelines for performance and cost-effectiveness. Automate data pipeline deployments and monitoring using CI/CD principles. Snowflake Expertise: Develop and optimize Snowflake schemas, tables, and views for efficient data storage and retrieval. Implement Snowflake best practices for data warehousing and data lake architectures. Optimize Snowflake queries and performance tuning. Manage Snowflake security and access controls. DBT Implementation: Design and implement data transformations using DBT for data modeling and analytics. Develop and maintain DBT models, tests, and documentation. Implement data quality checks and validation using DBT. Utilize DBT for data lineage and dependency management. Cloud Platform Expertise (AWS, Azure, or GCP): Leverage cloud services for data storage, processing, and infrastructure management. Design and implement cloud-based data solutions using services such as [Specific cloud services relevant to your company, e.g., AWS S3, Azure Data Lake Storage, GCP Cloud Storage, AWS Glue, Azure Data Factory, GCP Dataflow]. Manage cloud infrastructure and resources for data pipelines and warehousing. Implement cloud security best practices. Collaboration and Communication: Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and deliver solutions. Communicate technical concepts and solutions effectively to both technical and non-technical audiences. Participate in code reviews and contribute to team knowledge sharing. Document data pipelines and data models. Performance and Optimization: Monitor pipeline performance and implement optimizations where necessary. Identify and resolve bottlenecks within the data infrastructure. Implement best practices for data storage and retrieval to increase efficiency. Required Skills and Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 5+ years of experience as a Data Engineer. Strong proficiency in SQL and data warehousing concepts. Extensive experience with Snowflake. Strong experience with DBT for data transformation and modeling. Experience with at least one major cloud platform (AWS, Azure, or GCP). Experience with data pipeline development and automation. Proficiency in scripting languages such as Python. Experience with version control systems (e.g., Git). Strong problem-solving and analytical skills. Excellent communication and collaboration skills. Experience with CI/CD pipelines.
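Since the role calls for data quality checks on pipelines, the following is a minimal sketch of how such checks might be expressed in Python with pandas; the column names and thresholds are assumptions, and a production setup would more likely implement them as DBT tests or in a dedicated framework.

```python
# Minimal programmatic data-quality checks against a pandas DataFrame.
# Column names and thresholds below are illustrative assumptions.
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable failures; an empty list means all checks pass."""
    failures = []
    if df.empty:
        failures.append("dataset is empty")
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values found")
    null_rate = df["customer_id"].isna().mean()
    if null_rate > 0.01:
        failures.append(f"customer_id null rate {null_rate:.2%} exceeds 1% threshold")
    if (df["amount"] < 0).any():
        failures.append("negative amounts found")
    return failures

if __name__ == "__main__":
    batch = pd.read_parquet("batch.parquet")  # hypothetical extract
    problems = run_quality_checks(batch)
    if problems:
        raise ValueError("Data quality checks failed: " + "; ".join(problems))
```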
Posted 2 months ago
11 - 14 years
32 - 47 Lacs
Bengaluru
Work from Office
We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (18,000 experts across 38 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That is where you come in! REQUIREMENTS: Total experience of 11+ years. Experience in data engineering and database management. Expert knowledge in PostgreSQL (preferably cloud-hosted on AWS, Azure, or GCP). Experience with Snowflake Data Warehouse and strong SQL programming skills. Deep understanding of stored procedures, performance optimization, and handling large-scale data. Knowledge of ingestion techniques, data cleaning, de-duplication, and partitioning. Strong understanding of index design and performance tuning techniques. Familiarity with SQL security techniques, including data encryption, Transparent Data Encryption (TDE), signed stored procedures, and user permission assignments. Competence in data preparation and ETL tools to build and maintain data pipelines and flows. Experience in data integration by mapping various source platforms into Entity Relationship Models (ERMs). Exposure to source control systems like Git and Azure DevOps. Expertise in Python and Machine Learning (ML) model development. Experience in automated testing and test coverage tools. Hands-on experience with CI/CD automation tools. Programming experience in Golang. Understanding of Agile methodologies (Scrum, Kanban). Ability to collaborate with stakeholders across Executive, Product, Data, and Design teams. RESPONSIBILITIES: Design and maintain an optimal data pipeline architecture. Assemble large, complex data sets to meet functional and non-functional business requirements. Develop pipelines for data extraction, transformation, and loading (ETL) using SQL and cloud database technologies. Prepare and optimize ML models to improve business insights. Support stakeholders by resolving data-related technical issues and enhancing data infrastructure. Ensure data security across multiple data centers and regions, maintaining compliance with national and international data laws. Collaborate with data and analytics teams to enhance data systems functionality. Conduct exploratory data analysis to support database and dashboard development.
Posted 2 months ago
5 - 10 years
10 - 15 Lacs
Andhra Pradesh
Work from Office
Job Summary: Technical Experience: Should have experience working in Data Warehouse/ETL testing, big data testing, and BI testing in Snowflake, Azure ADF, ADLS, Snowpark, and Synapse. Experience in writing complex Spark SQL / Hive QL queries, SQL performance tuning, and validation of the DWH. Experience in data analysis, including profiling, auditing, balancing, and reconciliation. Experience in load testing of ETL pipelines and backend systems, with performance tuning (indexes, partitions, query tuning). Experience in Snowflake, Azure ADF, and Azure Databricks. Experience with test management and defect management tools. Design test cases and test data, and perform ETL test execution and reporting. Experience in Unix and proficiency in the Python programming language and its data manipulation libraries (e.g., pandas, NumPy). Participate in defect triage and track defects to resolution. Work with developers and offshore/onshore support teams on ETL and BI data loading and test activities. Communication Skills: Excellent communication and collaboration skills to work effectively with team members and stakeholders. Must have experience in presenting technical topics. Preferred Skills: Desirable to possess technical experience/knowledge in ETL/DWH and migration testing, and data test automation using Python.
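To illustrate the balancing and reconciliation testing mentioned above, here is a minimal Spark SQL sketch that compares row counts and amount totals between a staging table and a warehouse table; the table and column names are placeholders, not details from the posting.

```python
# Minimal source-to-target reconciliation test using Spark SQL.
# Table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dwh_reconciliation").getOrCreate()

source_count = spark.sql("SELECT COUNT(*) AS c FROM staging.orders").collect()[0]["c"]
target_count = spark.sql("SELECT COUNT(*) AS c FROM dwh.fact_orders").collect()[0]["c"]
assert source_count == target_count, (
    f"Row count mismatch: source={source_count}, target={target_count}"
)

# Column-level balancing: totals should agree between layers.
diff = spark.sql("""
    SELECT ABS(s.total - t.total) AS delta
    FROM (SELECT SUM(amount) AS total FROM staging.orders) s
    CROSS JOIN (SELECT SUM(amount) AS total FROM dwh.fact_orders) t
""").collect()[0]["delta"]
assert diff < 0.01, f"Amount totals differ by {diff}"

spark.stop()
```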
Posted 2 months ago
6 - 11 years
20 - 35 Lacs
Delhi NCR, Bengaluru, Mumbai (All Areas)
Hybrid
Design, code, and develop new features, fix bugs, and add enhancements. Evaluate, install, set up, maintain, and upgrade Data Engineering, Machine Learning, and CI/CD infrastructure tools hosted on cloud (GCP). Required Candidate Profile: Must have Python, BigQuery, PySpark, GCP, and SQL. Experience with GCP services such as BigQuery, Cloud Storage, and Dataflow. Experience in ETL processes and data integration on GCP. Strong SQL/database querying and optimization on GCP.
Posted 2 months ago
5 - 10 years
15 - 25 Lacs
Gurgaon
Remote
Job Role: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Strong candidate with a minimum of 8 years of Power BI experience. Technology: Power BI with exposure to Snowflake database. Understand business requirements in the BI context. Design data models to convert raw data into meaningful insights using Power BI. Create dashboards and visual interactive reports using Power BI. Perform DAX queries and functions in Power BI. Design, develop, and deploy Power BI scripts and perform efficient detailed analysis. Define and design new systems by analyzing the current processes. Make technical changes to existing BI systems in order to enhance their functioning. Strong query writing and performance tuning skills. Strong research and analytical skills. Strong communication skills. Benefits: Competitive salary and benefits package. Opportunity to work on a variety of challenging and rewarding projects. Work in a dynamic and collaborative environment.
Posted 2 months ago
5 - 10 years
16 - 30 Lacs
Bengaluru
Hybrid
ANKO Job Description: 5-7 years of total IT experience in the area of Data Engineering. 3+ years of experience with AWS services such as IAM, API Gateway, EC2, S3, EMR, Lambda, EKS, etc. 2+ years of experience creating and deploying Docker containers on Kubernetes; experience in building CI/CD pipelines with tools such as Jenkins and GitHub Actions. Strong programming skills in Python, Java, and Scala. Strong experience with SQL and NoSQL databases. Knowledge of data modelling and database design principles. Familiarity with data governance and security best practices; knowledge of Agile methodologies. Experience with Git and other version control systems.
Posted 2 months ago
2 - 5 years
4 - 7 Lacs
Mohali
Work from Office
We are seeking an experienced and talented Data Engineer to join our growing team. As a Data Engineer, you will be responsible for designing, implementing, and optimizing data solutions leveraging Snowflake. The ideal candidate should have a deep understanding of data engineering principles, expertise in Snowflake architecture, and a passion for building scalable and efficient data pipelines. Responsibilities : Design and implement scalable data architectures using Snowflake, ensuring optimal performance, reliability, and scalability. Work to define best practices for data modeling, storage, and processing within the Snowflake environment. Develop, implement, and maintain robust ETL processes using Snowflake's native features and tools in collaboration with integration developers using an IPAAS like SnapLogic. Collaborate with data consumers to understand data requirements, ensure the availability of high-quality, well-organized data, and deliver effective solutions. Optimize and tune Snowflake data warehouses to achieve optimal query performance and resource utilization. Design and implement effective data models that align with business requirements and industry best practices. Ensure data models are optimized for Snowflake's unique capabilities, such as automatic clustering and metadata indexing. Implement and enforce data security measures in accordance with industry standards and organizational policies. Develop and maintain automated processes for data pipeline orchestration, monitoring, and alerting. Proactively identify and address performance bottlenecks and data inconsistencies. Create and maintain comprehensive documentation for data engineering processes, workflows, and Snowflake configurations. Qualifications : Bachelor's degree in computer science, Information Technology, or a related field. Proven experience as a Data Engineer with a focus on Snowflake. Strong SQL proficiency and expertise in designing and optimizing complex queries. In-depth knowledge of Snowflake architecture, features, and best practices. Experience with scripting languages (Python or equivalent) for automation and data manipulation. Familiarity with data warehousing concepts, cloud computing, and distributed systems. Preferred Skills : Snowflake certifications (e.g., SnowPro Core, SnowPro Advanced). Experience with other cloud platforms (AWS, Azure, GCP). Familiarity with data versioning and lineage tracking.
Posted 2 months ago
8 - 13 years
15 - 30 Lacs
Pune, Trivandrum
Work from Office
Role & responsibilities
• 8+ years of experience in enterprise data models, data structures, data engineering, or database design; management experience is a plus
• Proficiency in data modelling for data from different domains (Data Vault, 3NF, Snowflake, Star)
• Proficient in SQL and RDBMS (e.g., MySQL, PostgreSQL, Oracle, SQL Server, Synapse)
• Knowledge of Synapse Analytics and large data models
• Strong experience in data warehousing and big data technologies; Data Mesh/Data Product experience a plus
• Familiar with cloud data services and infrastructure (Azure, AWS)
• Knowledgeable in data modeling tools (e.g., ERwin)
• Excellent communication skills for both technical and non-technical audiences
• Bachelor's degree or higher in Computer Science, Engineering, Information Systems, or a related field
Posted 2 months ago
6 - 10 years
10 - 18 Lacs
Panchkula, Delhi NCR, Gurgaon
Work from Office
Hello Candidate, Greetings from Hungry Bird IT Consulting Services Pvt. Ltd.! We're hiring a Senior Data Engineer for our client.
Experience: 6+ years | Location: Panchkula (Haryana) | Qualification: Graduate | Work Mode: Work from Office | Working Days: Monday - Friday (5 days a week) | Reports to: Company COO/Management
Job Summary: We are looking for a highly skilled and experienced Senior Data Engineer with a minimum of 7 years of experience in designing, implementing, and maintaining scalable data pipelines. The ideal candidate will have a strong technical background in building robust data architectures and systems that support large data volumes and complex business needs. As a Senior Data Engineer, you will collaborate closely with engineering, analytics, and business teams to drive data-driven decision-making and ensure the high quality and availability of production data.
Key Responsibilities:
- Design and Maintain Data Pipelines: Build and manage scalable data pipelines, ensuring they can handle increasing data volume and complexity. Design and implement new API integrations to support evolving business needs.
- Collaborate with Analytics & Business Teams: Work closely with analytics and business teams to enhance data models, enabling improved access to data and fostering data-driven decisions across the organization.
- Monitor Data Quality: Implement systems and processes to ensure data quality, ensuring that production data remains accurate and available to key stakeholders and critical business processes.
- Write Tests and Documentation: Develop unit and integration tests, contribute to an engineering wiki, and maintain clear documentation to ensure project continuity and knowledge sharing.
- Troubleshoot Data Issues: Perform data analysis to identify and resolve data-related issues, collaborating with other teams to address challenges efficiently.
- Cross-Team Collaboration: Work collaboratively with frontend and backend engineers, product managers, and analysts to build and enhance data-driven solutions.
- Design Data Integrations & Quality Frameworks: Develop and implement data integration strategies and frameworks for maintaining high standards of data quality.
- Long-Term Data Strategy: Collaborate with business and engineering teams to shape and execute a long-term strategy for data platform architecture, ensuring scalability, reliability, and efficiency.
What You'll Need:
- Minimum of 7 years of experience in designing, implementing, and maintaining scalable data pipelines.
- Expert-level proficiency in Python, PySpark, SQL, and AWS technologies.
- Experience with NoSQL and SQL data platforms such as Snowflake, Vertica, PostgreSQL, and DynamoDB/MongoDB.
- Proficiency in building data pipelines using Apache Kafka messaging platforms.
- Strong understanding of data architecture and integration strategies in large-scale environments.
- Ability to work across cross-functional teams to deliver high-impact data solutions.
- Strong communication skills, with the ability to interact with both technical and non-technical stakeholders.
- Ability to handle complex technical challenges and provide practical solutions in high-traffic, mission-critical environments.
- Experience with Java/Scala is a plus.
- Retail and eCommerce domain expertise is highly desirable.
What We Offer: A dynamic and inclusive work environment. Opportunities for growth and professional development. Hands-on training and mentorship from experienced professionals. Exposure to a variety of projects and industries.
Competitive salary and benefits package.
Perks & Benefits: Mentorship and guidance from experienced marketing professionals. Potential for future career growth and advancement within the organization. Opportunity to work closely with experienced professionals and learn from industry experts. Hands-on experience in various aspects of marketing. Compensatory offs. Leisure trips and entertainment. Dine-outs on project completion (at management discretion). Work-from-home balance allocation.
(Interested candidates can share their CV to aradhana@hungrybird.in or call on 9959417171.) Please furnish the below-mentioned details to help us expedite the process. PLEASE MENTION THE RELEVANT POSITION IN THE SUBJECT LINE OF THE EMAIL. Example: KRISHNA, HR MANAGER, 7 YEARS, 20 DAYS NOTICE. Details required: Name, Position applying for, Total experience, Notice period, Current Salary, Expected Salary.
Thanks and Regards, Aradhana, +91 9959417171
Posted 2 months ago
5 - 8 years
7 - 10 Lacs
Hyderabad
Work from Office
Overview: This role will work primarily as a Data Engineer cum BI Developer with the ROI Engine Insights Team, focused on driving market-level priorities, understanding existing capability, enhancing and automating business reporting and existing data pipelines, and creating new data pipelines to fuel stronger, faster, and better performance insights for the ROI Engine Team.
Responsibilities: Connect multiple data sources through curated metrics and develop calculated metrics to focus on the key outcome and diagnostic measures. Execute market, portfolio, and brand level reporting of Marketing KPI performance (utilizing dashboards, templated decks, and reporting tools). Work with our agency partner to streamline and automate data collection and DQ checks, and transform the data for data modelling intake. Support processes for output adherence and delivery to agreed scope, in line with the agreed timelines and aligned global templates. Design, build, and maintain scalable data pipelines for collecting, storing, and processing large datasets from various sources (databases, APIs, etc.). Work closely with data analysts, data scientists, and business stakeholders to gather and integrate data from multiple sources for analysis. Develop and implement effective data models, data warehouses, and data marts to enable high-quality reporting and analytics. Develop and manage ETL (Extract, Transform, Load) processes to ensure the timely availability of clean, processed data for analytics and business intelligence. Monitor and enforce data quality standards, ensuring the integrity, consistency, and reliability of data pipelines. Maintain proper documentation for data pipelines, architecture, and data flows to facilitate knowledge sharing and maintainability.
Qualifications:
- Analytics professional with 5+ years' experience; exposure to the CPG industry is preferred.
- Education: Master's/Graduate in Engineering/Mathematics/Statistics or Marketing.
- Proficient with PowerPoint, Excel, Python, and MySQL, including the ability to write complex logic.
- Proficiency in Power BI dashboard creation is a must.
- Proficient in Snowflake for Azure-based data warehousing.
- Deep understanding of CPG industry business performance outputs and causal measures, their relationships, and how to bring business performance insights to life visually is a plus.
Posted 2 months ago
5 - 10 years
0 - 1 Lacs
Hyderabad
Remote
Position: Snowflake Architect (Data Engineering Solutions) Experience: 5+ Years Location: Remote Work Type: Contract Overview: They are seeking an experienced Architect to join their Data Engineering team. In this role, this person will play a pivotal role in driving the design, implementation, and optimization of data solutions. The ideal candidate will have strong expertise in data pipeline tools such as DBT, DataVault (AutomateDV), Snowflake, Airflow, and GitLabs, with a proven ability to guide and collaborate with cross-functional teams. Additional experience in Microservices, Netsuite, or Salesforce will be a plus. Responsibilities: Lead and drive the design and architecture of data solutions, ensuring alignment with business goals and technical requirements. Oversee the implementation and automation of data pipelines and workflows, using tools such as DBT, DataVault (AutomateDV), Snowflake, Airflow, and GitLabs. Collaborate with engineering and business teams to gather requirements and ensure solutions meet functional and non-functional expectations. Provide mentorship and guidance to senior and junior team members on best practices, architecture, and development techniques. Evaluate and recommend new technologies, tools, and frameworks to enhance the team's capabilities. Conduct code reviews, implement robust testing strategies, and ensure high standards of quality and performance. Manage and optimize data storage and processing within Snowflake to ensure scalability, reliability, and cost-effectiveness. Work with DevOps and Infrastructure teams to automate and streamline data workflows and deployment processes. Required Skills and Experience: 5+ years of experience in data engineering or related fields, with 3+ years in an architecture or leadership role. Extensive experience with DBT, DataVault (AutomateDV), Snowflake, Airflow, and GitLabs. Strong experience in designing and building scalable, reliable, and efficient data pipelines and data storage solutions. Expertise in cloud platforms (especially Snowflake) and big data technologies. Strong communication skills and ability to collaborate with stakeholders at various levels. Ability to mentor and lead teams in building high-performance, maintainable data solutions. Preferred Skills (a plus, but not mandatory): Experience with Microservices architecture and development. Familiarity with enterprise applications like Netsuite and Salesforce. Strong background in agile methodologies and DevOps practices.
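As an illustration of how DBT runs might be orchestrated with Airflow in a setup like the one described, here is a minimal Airflow 2.x DAG sketch; the project path, target, and schedule are assumptions for illustration only.

```python
# Minimal Airflow DAG sketch orchestrating dbt build steps.
# Paths, schedule, and project/target names are hypothetical (Airflow 2.x style).
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_daily_build",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/analytics_project && dbt run --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/analytics_project && dbt test --target prod",
    )
    dbt_run >> dbt_test  # run models first, then data tests
```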
Posted 2 months ago
1 - 5 years
1 - 5 Lacs
Bengaluru
Work from Office
Data Support Engineer | Location: Bangalore | Experience: 6+ yrs | Rate: 30 LPA | AMVikas POC: Swati Patil
Key Responsibilities:
Database Development & Support: Write and optimize SQL queries, stored procedures, and views for data retrieval and transformation. Develop and maintain data pipelines to support business intelligence and analytics requirements. Support SQL Server and Amazon Redshift environments for data storage, transformation, and analytics. Ensure data integrity, security, and quality across all database solutions.
Operational Support: Monitor ETL logs and troubleshoot data pipeline issues to minimize downtime. Perform data validation and reconciliation to ensure data accuracy. Maintain Excel reports and updates as part of regular operational tasks.
Development & Automation: Utilize Python for automation, data processing, and workflow enhancements. Work with AWS services (e.g., S3, Redshift, Glue) to implement cloud-based data solutions. Assist in maintaining and optimizing legacy PHP code for database interactions (preferred).
Experience & Qualifications: Minimum 2 years of experience in database development, support, or data engineering roles. Strong SQL skills with experience in query optimization, stored procedures, and data provisioning. Hands-on experience with relational databases (SQL Server) and cloud data warehouses (Redshift). Python programming skills for automation and data transformation. AWS expertise in services like S3, Redshift, and Glue (preferred). Knowledge of Databricks and big data processing is a plus. Experience with data validation and reconciliation processes. Exposure to CI/CD, version control, and data governance best practices. Knowledge of PHP for database-related development and maintenance (preferred but not mandatory).
Preferred Skills: Experience in business intelligence and analytics environments. Ability to analyze data and provide insights and recommendations. Understanding of ETL processes and data pipeline monitoring. Strong troubleshooting skills for database and ETL issues.
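The operational-support duties above (monitoring ETL jobs and validating that data landed) could be scripted roughly as in the boto3 sketch below; the Glue job name, bucket, and prefix are placeholders, not details from the posting.

```python
# Minimal boto3 sketch: confirm the latest AWS Glue job run succeeded and that
# today's extract files landed in S3. Job, bucket, and prefix names are hypothetical.
from datetime import date

import boto3

glue = boto3.client("glue")
s3 = boto3.client("s3")

# Check the most recent run of a (hypothetical) Glue ETL job.
runs = glue.get_job_runs(JobName="daily_sales_load", MaxResults=1)["JobRuns"]
if not runs or runs[0]["JobRunState"] != "SUCCEEDED":
    state = runs[0]["JobRunState"] if runs else "NO RUNS"
    print(f"ALERT: daily_sales_load latest state = {state}")

# Confirm today's files arrived in the raw bucket.
prefix = f"raw/sales/{date.today():%Y/%m/%d}/"
resp = s3.list_objects_v2(Bucket="example-data-bucket", Prefix=prefix)
if resp.get("KeyCount", 0) == 0:
    print(f"ALERT: no files found under s3://example-data-bucket/{prefix}")
```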
Posted 2 months ago
2 - 5 years
3 - 7 Lacs
Karnataka
Work from Office
EXP: 4 to 6 yrs | Location: Any PSL Location | Rate: below 14$ | JD: DBT/AWS Glue/Python/PySpark
Hands-on experience in data engineering, with expertise in DBT, AWS Glue, Python, and PySpark. Strong knowledge of data engineering concepts, data pipelines, ETL/ELT processes, and cloud data environments (AWS). Technology: DBT, AWS Glue, Athena, SQL, Spark, PySpark. Good understanding of Spark internals and how Spark works. Good skills in PySpark. Good understanding of DBT; in particular, should understand DBT's limitations and when it will end up in model explosion. Good hands-on experience in AWS Glue. AWS expertise: should know the different services, how to configure them, and have infrastructure-as-code experience. Basic understanding of different open data formats: Delta, Iceberg, Hudi. Ability to engage in technical conversations and suggest enhancements to the current architecture and design.
Posted 2 months ago
1 - 3 years
1 - 5 Lacs
Uttar Pradesh
Work from Office
* Working experience with Snowflake & ADF. * Experience in cloud-based ETL implementations. * Good experience working with an ETL tool and very good knowledge of SQL. * Creation of ETL mappings to load data from files (and other feeds) and SQL Server databases to a Data Warehouse on cloud. * Create & execute SQL tasks. * Deploy ETL packages to SSIS production servers. * Understand SSIS packages, analyse queries in SSIS packages, and build models using DBT. * Good communication. * Ability to work individually. Primary Skills: Snowflake, DBT, ADF, Azure DevOps, and good with SQL queries. Secondary Skills: Any ETL experience (SSIS is preferred)
Posted 2 months ago
3 - 6 years
4 - 8 Lacs
Uttar Pradesh
Work from Office
Working experience with Snowflake & ADF. * Experience in cloud-based ETL implementations. * Good experience working with an ETL tool and very good knowledge of SQL. * Creation of ETL mappings to load data from files (and other feeds) and SQL Server databases to a Data Warehouse on cloud. * Create & execute SQL tasks. * Deploy ETL packages to SSIS production servers. * Understand SSIS packages, analyse queries in SSIS packages, and build models using DBT. * Good communication. * Ability to work individually.
Posted 2 months ago
6 - 9 years
8 - 12 Lacs
Maharashtra
Work from Office
Description: This is to support Apps 2cloud operations.
1. DB2, Snowflake, Sybase, and AWS RDS Installation, Configuration, and Maintenance: Install, configure, and maintain DB2, Snowflake, Sybase, and AWS RDS databases across on-premises and cloud environments (AWS, Azure). Ensure efficient database configurations, regular backups, and performance monitoring for all systems. Ensure all databases are encrypted with HashiCorp Vault.
2. Snowpipe Data Ingestion Management: Administer and monitor Snowpipe for automated, real-time data ingestion into Snowflake from external sources (AWS S3, Azure Blob). Troubleshoot and optimize Snowpipe for continuous and efficient data loading, ensuring the timely availability of data.
3. Iceberg and External Table Management: Manage Iceberg tables for large-scale, high-performance data analytics within Snowflake, ensuring data is organized and query performance is optimized. Administer external tables in Snowflake to access and query data stored externally in cloud storage systems (e.g., AWS S3, Azure Blob Storage). Design and optimize external table access for cost-effective querying of large datasets.
4. AWS Data Migration Service (DMS) Management: Utilize AWS DMS to manage data migrations from DB2 and Sybase into Snowflake or AWS RDS, ensuring smooth data transfers with minimal downtime. Monitor and troubleshoot AWS DMS replication tasks for high availability and data integrity during migrations.
5. Cloud Performance Optimization (AWS, Azure): Continuously monitor and optimize the performance of Snowflake, DB2, Sybase, and AWS RDS databases in AWS and Azure using cloud-native tools (AWS CloudWatch, Azure Monitor). Leverage features like Snowflake's auto-scaling and AWS RDS read replicas to ensure databases can scale efficiently as workloads increase.
6. Backup and Recovery Solutions: Implement and manage comprehensive backup and disaster recovery strategies for DB2, Snowflake, Sybase, and AWS RDS environments. Test recovery processes regularly to ensure business continuity and minimize downtime in case of system failures.
7. Performance Tuning and Query Optimization: Monitor and analyze the performance of DB2, Snowflake, Sybase, and AWS RDS databases, addressing slow queries, indexing issues, and resource bottlenecks. Leverage Snowflake's clustering and partitioning features, as well as query optimization techniques, to improve performance on large datasets, including Iceberg tables.
8. Database Security and Compliance: Ensure database security for DB2, Snowflake, Sybase, and AWS RDS, implementing encryption, identity management, and access controls. Maintain compliance with data privacy regulations (e.g., GDPR, HIPAA) and ensure audit trails for all database activity.
9. Automation and Scripting: Automate routine administrative tasks for DB2, Sybase, Snowflake, and AWS RDS using scripting (Python, Bash, PowerShell) and cloud automation tools (Terraform, AWS Lambda, Azure Automation). Automate Snowpipe data ingestion workflows and external table access in Snowflake for efficient, real-time data management. Utilize automation tools like Ansible, Terraform, or database-specific APIs to streamline database management processes.
10. Cloud Monitoring and Alerts: Set up proactive monitoring and alerting for Snowflake, DB2, Sybase, and AWS RDS environments using AWS CloudWatch, Azure Monitor, and Snowflake's native monitoring tools. Track real-time metrics such as CPU, memory, disk usage, Snowpipe data ingestion rates, and external table query performance.
11. Capacity Planning and Scalability.
Posted 2 months ago
2 - 6 years
3 - 6 Lacs
Telangana, Maharashtra
Work from Office
Python Developer | Exp: 5-9 yrs | Max salary: 25-28 LPA | Pune and Hyderabad (Hybrid, WFO) | Fixed rate, FTE
Python Data Integration Engineer: Are you a seasoned data engineer with a passion for hands-on technical work? Do you thrive in an environment that values innovation, collaboration, and cutting-edge technologies? We are seeking an experienced Data Integration Engineer to spearhead our data integration strategies and initiatives. The ideal candidate will possess deep technical expertise in Python programming, Snowflake data warehousing, AWS cloud services, Kubernetes (EKS), CI/CD methodologies, Apache Airflow, dbt, Kafka for real-time data streaming, and API development. This role is pivotal in driving the architecture, development, and maintenance of scalable and efficient data pipelines and integrations to support our analytics and business intelligence platforms.
Role and Responsibilities: As the Data Integration Engineer, you will play a pivotal role in shaping the future of our data integration engineering initiatives while remaining actively involved in the technical aspects of projects. Your responsibilities will include:
- Hands-On Contribution: Continue to be hands-on with data integration engineering tasks, including data pipeline development, EL processes, and data integration. Be the go-to expert for complex technical challenges.
- Integrations Architecture: Design and implement scalable and efficient data integration architectures that meet business requirements. Ensure data integrity, quality, scalability, and security throughout the pipeline.
- Tool Proficiency: Leverage your expertise in Snowflake, SQL, Apache Airflow, AWS, API, CI/CD, dbt, and Python to architect, develop, and optimize data solutions. Stay current with emerging technologies and industry best practices.
- Data Quality: Monitor data quality and integrity, implementing data governance policies as needed.
- Cross-Functional Collaboration: Collaborate with data science, data warehousing, analytics, and other cross-functional teams to understand data requirements and deliver actionable insights.
- Performance Optimization: Identify and address performance bottlenecks within the data infrastructure. Optimize data pipelines for speed, reliability, and efficiency.
- Project Management: Oversee end-to-end project delivery, from requirements gathering to implementation. Ensure projects are delivered on time and within scope.
Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field; an advanced degree is a plus. 4 years of hands-on experience in Python programming. 3 years of experience in data engineering with experience in SQL.
Preferred Skills: Familiarity with cloud platforms such as AWS or Azure. Demonstrated experience in designing and developing RESTful APIs. Preferred experience with Snowflake, AWS, Kubernetes (EKS), CI/CD practices, Apache Airflow, and dbt. Good experience in full-stack development. Excellent analytical, problem-solving, and decision-making abilities. Strong communication skills, with the ability to articulate technical concepts to non-technical stakeholders. A collaborative mindset, with a focus on team success.
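For the Kafka-based real-time streaming this role mentions, a minimal kafka-python sketch of producing and consuming a JSON event might look like the following; the broker address and topic are placeholders, and the actual stack may use a different Kafka client.

```python
# Minimal kafka-python sketch: publish a JSON event and read it back.
# Broker address and topic name are hypothetical.
import json

from kafka import KafkaConsumer, KafkaProducer

BROKER = "localhost:9092"
TOPIC = "orders.events"

# Producer side: publish a small JSON event.
producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"order_id": 123, "amount": 49.99, "status": "created"})
producer.flush()

# Consumer side: read events and hand them to downstream processing.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    group_id="integration-demo",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print("received:", message.value)
    break  # demo only: stop after the first event
```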
Posted 2 months ago
8 - 12 years
9 - 19 Lacs
Bengaluru
Hybrid
Role & Job Description
• Experienced data management specialist responsible for developing, overseeing, organizing, storing, and analyzing data and data systems
• Participate in all aspects of the software development lifecycle for Snowflake solutions, including planning, requirements, development, testing, and quality assurance
• Work in tandem with our engineering team to identify and implement the most optimal solutions
• Ensure platform performance, uptime, and scale, maintaining high standards for code quality and thoughtful design
• Troubleshoot incidents, identify root causes, fix and document problems, and implement preventive measures
• Able to manage deliverables in fast-paced environments
Areas of Expertise
• At least 8-10 years of experience in the design and development of data solutions in an enterprise environment
• At least 5+ years of experience on the Snowflake platform
• Strong hands-on SQL and Python development
• Experience with designing and developing data warehouses in Snowflake
• A minimum of three years' experience in developing production-ready data ingestion and processing pipelines using Spark and Scala
• Strong hands-on experience with orchestration tools, e.g. Airflow, Informatica, Automic
• Good understanding of metadata and data lineage
• Hands-on knowledge of SQL analytical functions
• Strong knowledge and hands-on experience in Shell scripting and Java scripting
• Able to demonstrate experience with software engineering practices including CI/CD, automated testing, and performance engineering
• Good understanding of and exposure to Git, Confluence, and Jira
• Good problem-solving and troubleshooting skills
• Team player, collaborative approach, and excellent communication skills
Posted 2 months ago
Snowflake has become one of the most sought-after skills in the tech industry, with a growing demand for professionals who are proficient in handling data warehousing and analytics using this cloud-based platform. In India, the job market for Snowflake roles is flourishing, offering numerous opportunities for job seekers with the right skill set.
Cities such as Bengaluru, Hyderabad, Pune, Chennai, and the Delhi NCR region appear frequently in these listings; they are known for their thriving tech industries and have a high demand for Snowflake professionals.
The average salary range for Snowflake professionals in India varies based on experience level:
- Entry-level: INR 6-8 lakhs per annum
- Mid-level: INR 10-15 lakhs per annum
- Experienced: INR 18-25 lakhs per annum
A typical career path in Snowflake may include roles such as:
- Junior Snowflake Developer
- Snowflake Developer
- Senior Snowflake Developer
- Snowflake Architect
- Snowflake Consultant
- Snowflake Administrator
In addition to expertise in Snowflake, professionals in this field are often expected to have knowledge of:
- SQL
- Data warehousing concepts
- ETL tools
- Cloud platforms (AWS, Azure, GCP)
- Database management
As you explore opportunities in the Snowflake job market in India, remember to showcase your expertise in handling data analytics and warehousing using this powerful platform. Prepare thoroughly for interviews, demonstrate your skills confidently, and keep abreast of the latest developments in Snowflake to stay competitive in the tech industry. Good luck with your job search!