5 - 10 years
0 - 3 Lacs
Pune, Coimbatore, Mumbai (All Areas)
Hybrid
We are hosting an Open Walk-in Drive in Bangalore on 29th March (Saturday) 2025.

Details of the Walk-in Drive:
Date: 29th March (Saturday) 2025
Experience: 5 to 10 years
Time: 9.30 AM to 4:00 PM
Point of Contact: Aishwarya G / aishwaryag5@hexaware.com
Venue: Hexaware Technologies Ltd, Shantiniketan, 11th Floor, Crescent - 2 Prestige, Whitefield Main Rd, Mahadevapura, Bengaluru, Karnataka 560048

Key Skills and Experience:
• Must have 5 - 10 years of experience in data warehouse, ETL, and BI projects
• Must have at least 4 years of experience in Snowflake
• Expertise in Snowflake architecture is a must
• Must have at least 3 years of experience and a strong hold on Python/PySpark
• Must have experience implementing complex stored procedures and standard DWH and ETL concepts
• Proficient in Oracle database, complex PL/SQL, Unix shell scripting, performance tuning, and troubleshooting
• Good to have experience with AWS services and creating DevOps templates for various AWS services
• Experience in using GitHub and Jenkins
• Good communication and analytical skills
• Snowflake certification is desirable

What to Bring:
• Updated resume
• Photo ID and a passport-size photo
• Mention "Aishwarya G" at the top of your resume

How to Register:
To express your interest and confirm your participation, please reply to this email with your updated resume attached. Walk-ins are also welcome on the day of the event.

This is an excellent opportunity to showcase your skills, network with industry professionals, and explore the exciting possibilities that await you at Hexaware Technologies. If you have any questions or require further information, please feel free to reach out to me at aishwaryag5@hexaware.com.

We look forward to meeting you and exploring the potential of having you as a valuable member of our team.

Note: Candidates with less than 4 years of total experience will not be shortlisted for the interview.
Posted 2 months ago
8 - 10 years
16 - 30 Lacs
Delhi NCR, Gurgaon, Noida
Hybrid
WHAT YOU'LL DO
• Design, document, and implement data pipelines to feed data models for subsequent consumption in Snowflake using dbt and Airflow.
• Ensure correctness and completeness of the data being transformed via engineering pipelines for end consumption in analytical dashboards.
• Actively monitor and triage technical challenges in critical situations that require immediate resolution.
• Evaluate viable technical solutions and share MVPs or PoCs in support of the research.
• Develop relationships with external stakeholders to maintain awareness of data and security issues and trends.
• Review work from other tech team members and provide feedback for growth.
• Implement data performance and data security policies that align with governance objectives and regulatory requirements.
• Effectively mentor and develop your team members.

YOU'RE GOOD AT
You have experience in data warehousing, data modeling, and building data engineering pipelines. You are well versed in data engineering methods, such as ETL and ELT techniques, through scripting and/or tooling. You are good at analyzing performance bottlenecks and providing enhancement recommendations; you have a passion for customer service and a desire to learn and grow as a professional and a technologist.
• Strong analytical skills related to working with structured, semi-structured, and unstructured datasets.
• Collaborating with product owners to identify requirements, define desired outcomes, and deliver trusted results.
• Building processes supporting data transformation, data structures, metadata, dependency, and workload management.
• This role is heavily focused on SQL; an ideal candidate must have hands-on experience with SQL database design, plus Python.
• Demonstrably deep understanding of SQL (advanced level) and analytical data warehouses (Snowflake preferred).
• Demonstrated ability to write new code that is well documented and stored in a version control system (we use GitHub & Bitbucket).
• Extremely talented in applying SCD, CDC, and DQ/DV frameworks.
• Familiar with JIRA & Confluence.
• Must have exposure to technologies such as dbt, Apache Airflow, and Snowflake.
• Desire to continually keep up with advancements in data engineering practices.
• Knowledge of AWS cloud and Python is a plus.

YOU BRING (EXPERIENCE & QUALIFICATIONS)
Essential Education
• Bachelor's degree or equivalent combination of education and experience.
• Bachelor's degree in information science, data management, computer science, or a related field preferred.
Essential Experience & Job Requirements
• 7+ years of IT experience with a major focus on data warehouse/database-related projects.
• Must have exposure to technologies such as dbt, Apache Airflow, and Snowflake.
• Experience in other data platforms: Oracle, SQL Server, MDM, etc.
• Expertise in writing SQL and database objects: stored procedures, functions, and views. Hands-on experience in ETL/ELT, data security, SQL performance optimization, and job orchestration tools and technologies, e.g., dbt, APIs, Apache Airflow, etc.
• Experience in data modeling and relational database design.
• Well versed in applying SCD, CDC, and DQ/DV frameworks.
• Demonstrated ability to write new code that is well documented and stored in a version control system (we use GitHub & Bitbucket).
• Good to have experience with cloud platforms such as AWS, Azure, GCP, and Snowflake.
• Good to have strong programming/scripting skills (Python, PowerShell, etc.).
• Experience working with agile methodologies (Scrum, Kanban) and Meta Scrum with cross-functional teams (Product Owners, Scrum Masters, Architects, and data SMEs).
• Excellent written and oral communication and presentation skills to present architecture, features, and solution recommendations.
Posted 2 months ago
7 - 11 years
12 - 19 Lacs
Pune
Work from Office
Snowflake Developer

Job Description:
• 5+ years of ETL development and understanding of data engineering concepts.
• 3+ years using Snowflake as part of the ELT deployment.
• Develop and maintain ETL processes to load data from various sources into Snowflake.
• Proficiency in MS SQL and experience with data modelling and query optimization.
• Monitor and optimize Snowflake performance, including query tuning, clustering, and partitioning strategies.
• Excellent communication skills, with the ability to work collaboratively with cross-functional teams.
• Knowledge of dbt (data build tool).

Nice to Have:
• Experience leading and managing database migration projects.
• Experience working in the financial services industry.
• Experience with data integration tools such as SSIS and SSAS is a plus.
• Familiarity with CI/CD processes and DevOps practices for data pipelines.
• Snowflake certification (e.g., SnowPro Core) is preferred.
• AWS/cloud experience is a plus.
Posted 2 months ago
5 - 10 years
17 - 32 Lacs
Bengaluru
Work from Office
Job Description: Snowflake Developer with DBT
Experience: 5-11 years

Please share the below details along with your updated resume ASAP to himabindhu.g@jobworld.jobs:
Total Experience:
Relevant Experience:
Current Company: (please mention payroll if on contract)
Notice Period: (mention LWD if serving notice period)
Current CTC: (please mention fixed + variable)
Expected CTC: (please mention fixed + variable)
Holding Any Offers: (please mention fixed + variable, and DOJ)
Holding Offer CTC:
Educational Qualification:
Expected DOJ:
Date of Birth (as per document):
Current Location:
Preferred Location:

Regards,
Himabindu
Posted 2 months ago
5 - 9 years
8 - 12 Lacs
Panchkula, Bengaluru, Gurgaon
Work from Office
We are seeking an experienced Senior Data Ops Engineer with 5-7 years of hands-on expertise in managing and optimizing data infrastructure, processes, and tools. This role is critical for driving efficiency, scalability, and reliability across our data ecosystems. As a seasoned professional, you will lead initiatives to enhance data pipelines, ensure data quality, and support advanced analytics and operational needs.

Key Responsibilities:
• Lead Database Administration: Oversee and optimize relational databases, including SQL Server (preferred), Oracle, and PostgreSQL, ensuring performance, security, and scalability.
• Cloud Infrastructure Management: Architect and manage multi-cloud environments, with a focus on AWS (preferred), Azure, and GCP, to meet business needs.
• Orchestrate Data Workflows: Design and implement robust workflows using tools like Airflow, Prefect, or Dagster to handle complex data pipelines.
• Data Ingestion and Transformation: Implement and manage tools like dbt and Fivetran to streamline data extraction, transformation, and loading processes.
• Ensure Data Quality: Develop and enforce data quality standards using tools such as Soda, Great Expectations, or Atlan.
• Cloud Data Warehousing Expertise: Build and maintain scalable, secure, and performant data warehouses, especially with Snowflake (preferred) and Databricks.
• Infrastructure as Code (IaC): Implement and manage IaC using Terraform (preferred), CloudFormation, or Cloud Foundry for seamless infrastructure provisioning.
• Enhance Observability: Leverage monitoring tools like Datadog, Prometheus, or Grafana for real-time data observability and infrastructure monitoring.
• Support DevOps Operations: Utilize tools like Ansible, Chef, ServiceNow, and PagerDuty to enhance deployment, monitoring, and incident management processes.
• Mentorship and Collaboration: Guide junior engineers and collaborate across teams to establish and maintain best practices in data operations.
• Optimize Processes: Continuously evaluate and improve operational efficiency and pipeline performance using Python and SQL.
Posted 2 months ago
8 - 13 years
10 - 20 Lacs
Hyderabad
Work from Office
Job Title: AWS Data Engineer

Job Description:
As an AWS Data Engineer at Niha Technologies, you will play a crucial role in the development, maintenance, and optimization of our data infrastructure. You will be responsible for designing, building, and maintaining data pipelines and ETL processes on the AWS platform. Your work will directly impact our ability to analyze, visualize, and gain insights from our data.

Key Responsibilities:
• Develop and maintain data pipelines using AWS services, Python, DBT, and Snowflake.
• Collaborate with data analysts and data scientists to understand data requirements and implement solutions.
• Optimize data pipelines for performance, scalability, and cost-efficiency.
• Ensure data security, compliance, and data governance best practices are followed.
• Monitor and troubleshoot data pipeline issues, ensuring data reliability and accuracy.
• Collaborate with cross-functional teams to integrate data into various applications and services.
• Stay updated with the latest AWS technologies and best practices to recommend and implement improvements.

Requirements:
• Bachelor's degree in computer science, data engineering, or a related field (or equivalent work experience).
• Proven experience as a Data Engineer with a strong focus on AWS technologies.
• Proficiency in Python for scripting and data manipulation.
• Hands-on experience with AWS services, particularly AWS Glue, AWS Lambda, AWS S3, AWS RDS Aurora, AWS CloudWatch, AWS ECS, AWS SNS, AWS SQS, AWS Athena, AWS DynamoDB, AWS Step Functions, and AWS DMS.
• Experience with the Snowflake data warehouse and AWS Database Migration Service (DMS).
• Strong SQL skills for data transformation and manipulation.
• Understanding of data modelling, ETL processes, and data warehousing concepts.
• Understanding of Git, GitLab CI/CD, Docker, and Terraform.
• Knowledge of data security, compliance, and governance best practices.
• Excellent problem-solving and communication skills.
• Strong team player with the ability to collaborate effectively across teams.
• AWS certifications are an added advantage.
Posted 2 months ago
9 - 14 years
25 - 30 Lacs
Gurgaon
Remote
We are looking for a Data Architect & Business Intelligence Expert who will be responsible for designing and implementing enterprise-level data architecture solutions. The ideal candidate will have extensive experience in data warehousing, data modeling, and BI frameworks, with a strong focus on Salesforce, Informatica, DBT, IICS, and Snowflake.

Key Responsibilities:
• Design and implement scalable and efficient data architecture solutions for enterprise applications.
• Develop and maintain robust data models that support business intelligence and analytics.
• Build data warehouses to support structured and unstructured data storage needs.
• Optimize data pipelines, ETL processes, and real-time data processing.
• Work with business stakeholders to define data strategies that support analytics and reporting.
• Ensure seamless integration of Salesforce, Informatica, DBT, IICS, and Snowflake into the data ecosystem.
• Establish and enforce data governance, security policies, and best practices.
• Conduct performance tuning and optimization for large-scale databases and data processing systems.
• Provide technical leadership and mentorship to development teams.

Key Skills & Requirements:
• Strong experience in data architecture, data warehousing, and data modeling.
• Hands-on expertise with Salesforce, Informatica, DBT, IICS, and Snowflake.
• Deep understanding of ETL pipelines, real-time data streaming, and cloud-based data solutions.
• Experience in designing scalable, high-performance, and secure data environments.
• Ability to work with big data frameworks and BI tools for reporting and visualization.
• Strong analytical, problem-solving, and communication skills.
Posted 2 months ago
8 - 11 years
10 - 20 Lacs
Bengaluru
Work from Office
• Minimum 6 years of experience within Data Engineering, with a minimum of 2 years in a similar role
• Experience with Databricks, DBT, Python, and PySpark is preferable but not mandatory
• Comfortable working with conceptual, logical, and physical data models following Data Vault 2.0 and Kimball data modelling methodologies
• Strong experience working on Data Warehousing and Data Analytics projects, including data acquisition, data transformation, and data migration projects
• Strong experience in SQL and Redshift
• Experience in developing and optimising data pipelines using Control-M and SQL
• Experience in data migration and in building, enhancing, and maintaining cloud data warehousing
• Experience in performance tuning and optimising databases and SQL queries for analysis and reporting
• Ability to rapidly acquire an understanding of complex business problems/requirements to develop solutions and designs, regardless of existing areas of expertise or specialisation
• Strong skills and experience in cloud environments, especially AWS
• Experience supporting high-availability production systems is desirable
Posted 3 months ago
5 - 8 years
7 - 10 Lacs
Bengaluru
Work from Office
• Minimum 6 years of experience within Data Engineering, with a minimum of 2 years in a similar role
• Experience with Databricks, DBT, Python, and PySpark is preferable but not mandatory
• Comfortable working with conceptual, logical, and physical data models following Data Vault 2.0 and Kimball data modelling methodologies
• Strong experience working on Data Warehousing and Data Analytics projects, including data acquisition, data transformation, and data migration projects
• Strong experience in SQL and Redshift
• Experience in developing and optimising data pipelines using Control-M and SQL
• Experience in data migration and in building, enhancing, and maintaining cloud data warehousing
• Experience in performance tuning and optimising databases and SQL queries for analysis and reporting
• Ability to rapidly acquire an understanding of complex business problems/requirements to develop solutions and designs, regardless of existing areas of expertise or specialisation
• Strong skills and experience in cloud environments, especially AWS
• Experience supporting high-availability production systems is desirable
Posted 3 months ago
5 - 8 years
7 - 10 Lacs
Bengaluru
Work from Office
• Build data models, ETLs, Tableau dashboards & reports, and standard data sources using the Tableau Desktop tool
• Tableau access management & administration

Required Experience:
• At least 5 years of experience in DW & BI reporting using various tools & technologies
• Strong reporting skills with Tableau Desktop and Tableau Server
• Strong SQL skills

Band: U3
Competency: Data & Analytics
Posted 3 months ago
5 - 8 years
7 - 10 Lacs
Bengaluru
Work from Office
• Minimum 4 years of experience in data engineering or related roles
• Strong expertise in data warehousing concepts and best practices; experience with Snowflake data warehousing
• Strong proficiency in SQL programming, including the ability to create and debug stored procedures, functions, and views
• Self-motivated and eager to take ownership of projects
• Strong problem-solving skills and attention to detail
• Excellent communication skills for effective collaboration
• Experience with Python for scripting and automation tasks is a strong plus

Band: U3
Competency: Data & Analytics
Posted 3 months ago
8 - 11 years
27 - 32 Lacs
Pune
Work from Office
Tester & Test Lead - ETL + Data
• Data testing experience is mandatory.
• SQL proficiency in BigQuery is mandatory.
• Experience in GCP Dataform, Great Expectations, and DBT is preferable.
• Experience in testing data validation scenarios and data ingestion, pipeline, and transformation processes.
• Demonstrated hands-on experience with issue and defect management.
• Experience with Agile software development & testing methods for deployment in cloud environments.
• 4 to 6 years of testing experience for the Test Engineer profile and 6+ years of testing experience for the Test Lead profile.

Band: U4
Competency: Data & Analytics
Posted 3 months ago
8 - 11 years
30 - 35 Lacs
Hyderabad
Work from Office
• Minimum 6 years of experience within Data Engineering, with a minimum of 2 years in a similar role
• Experience with Databricks, DBT, Python, and PySpark is preferable but not mandatory
• Comfortable working with conceptual, logical, and physical data models following Data Vault 2.0 and Kimball data modelling methodologies
• Strong experience working on Data Warehousing and Data Analytics projects, including data acquisition, data transformation, and data migration projects
• Strong experience in SQL and Redshift
• Experience in developing and optimising data pipelines using Control-M and SQL
• Experience in data migration and in building, enhancing, and maintaining cloud data warehousing
• Experience in performance tuning and optimising databases and SQL queries for analysis and reporting
• Ability to rapidly acquire an understanding of complex business problems/requirements to develop solutions and designs, regardless of existing areas of expertise or specialisation
• Strong skills and experience in cloud environments, especially AWS
• Experience supporting high-availability production systems is desirable

What you'll do
• Implement and maintain a data warehouse platform and data pipelines suitable for Data Warehousing and reporting use cases
• Build data assets that are aligned to the IET data strategy and architectural roadmaps
• Promote and champion Data Platform and Engineering ways of working via continuous build, continuous integration, and continuous delivery
• Support the detailed solution design, data modelling, estimation, and specification process
• Promote test-driven development and automated unit test frameworks within the teams
• Promote and champion DevOps (CI/CD) practices using Bitbucket and Jenkins
• Coordinate deployment and release activities for the team
• Develop data engineering standards, implementation patterns, and how-to guides
• Challenge the development team on their delivery of quality and unit test coverage, and assist in the creation of development artefacts
• Contribute to risk analysis and utilise that information to plan development activities
• Identify technical and process improvements that will benefit the business
• Optimise code and ensure we are getting the best out of the Data Platform
• Foster a safe, innovative, and results-driven culture
Posted 3 months ago
5 - 8 years
8 - 10 Lacs
Hyderabad
Work from Office
• Minimum 6 years of experience within Data Engineering, with a minimum of 2 years in a similar role
• Experience with Databricks, DBT, Python, and PySpark is preferable but not mandatory
• Comfortable working with conceptual, logical, and physical data models following Data Vault 2.0 and Kimball data modelling methodologies
• Strong experience working on Data Warehousing and Data Analytics projects, including data acquisition, data transformation, and data migration projects
• Strong experience in SQL and Redshift
• Experience in developing and optimising data pipelines using Control-M and SQL
• Experience in data migration and in building, enhancing, and maintaining cloud data warehousing
• Experience in performance tuning and optimising databases and SQL queries for analysis and reporting
• Ability to rapidly acquire an understanding of complex business problems/requirements to develop solutions and designs, regardless of existing areas of expertise or specialisation
• Strong skills and experience in cloud environments, especially AWS
• Experience supporting high-availability production systems is desirable

What you'll do
• Implement and maintain a data warehouse platform and data pipelines suitable for Data Warehousing and reporting use cases
• Build data assets that are aligned to the IET data strategy and architectural roadmaps
• Promote and champion Data Platform and Engineering ways of working via continuous build, continuous integration, and continuous delivery
• Support the detailed solution design, data modelling, estimation, and specification process
• Promote test-driven development and automated unit test frameworks within the teams
• Promote and champion DevOps (CI/CD) practices using Bitbucket and Jenkins
• Coordinate deployment and release activities for the team
• Develop data engineering standards, implementation patterns, and how-to guides
• Challenge the development team on their delivery of quality and unit test coverage, and assist in the creation of development artefacts
• Contribute to risk analysis and utilise that information to plan development activities
• Identify technical and process improvements that will benefit the business
• Optimise code and ensure we are getting the best out of the Data Platform
• Foster a safe, innovative, and results-driven culture
Posted 3 months ago
9 - 14 years
30 - 35 Lacs
Chennai, Pune, Bengaluru
Hybrid
Position: Integration Architect (DBT + Snowflake)
Location: Pune/Chennai/Nagpur/Bengaluru

Purpose of the Position:
As a Senior Data Integration Developer/Architect (DBT), this role seeks candidates passionate about specialized skills in Snowflake technology and features. You will be instrumental in assisting our clients by developing models that facilitate their advancement in utilizing Snowflake effectively.

Key Result Areas and Activities:
1. Expertise and Knowledge Sharing: Develop and share expertise in DBT & Snowflake data modelling and development. Actively mine and disseminate organizational experience and expertise across teams and clients.
2. Support and Collaboration: Support Cloud and Data Engineering COE initiatives. Collaborate with management to understand and align with company objectives.
3. Real-Time Data and Performance: Ensure DBT solutions are correctly built for collecting real-time data. Perform and deliver effectively in large and complex environments.
4. Pipeline and Architecture Design: Design, build, test, and maintain Snowflake architectures and data pipelines.
5. Compliance and Security: Ensure compliance with data governance and security policies.

Must have:
• Expertise in Snowflake architecture, understanding of models, and integration of cloud platforms with Snowflake
• ETL development, SQL scripting, and working knowledge of stored procedures
• Proficiency in designing and maintaining data warehouses and data marts
• Strong skills in ETL processes and tools (e.g., Informatica, Talend, SnapLogic)
• Strong problem-solving skills and the ability to work effectively in a collaborative team environment
• Experience working on data warehouse/ETL projects
• 4+ years of Snowflake ETL experience and 3+ years of DBT experience, or equivalent
• Experience with cloud data platforms (e.g., AWS, Azure)

Qualifications:
• Overall work experience of more than 8 years, with a minimum of 3-4 years of experience in Snowflake development and 2 years in DBT
• Degree in Computer Science, Software Engineering, or a related field
Posted 3 months ago
5 - 10 years
13 - 23 Lacs
Bengaluru
Remote
Data Engineer with Snowflake, Tableau, and DBT.
Posted 3 months ago
6 - 11 years
25 - 35 Lacs
Mumbai Suburbs, Thane, Mumbai (All Areas)
Work from Office
Experience: 6 to 12 years. Data build tool (DBT) experience is mandatory. Experience with ETL/ELT processes, data modelling, change data capture (CDC), SQL, Airbyte, Snowflake, and cloud computing is required.
Posted 3 months ago
3 - 8 years
10 - 20 Lacs
Pune
Work from Office
Location: Pune
Experience: 3-9 years

We are looking to hire a Data Engineer with strong hands-on experience in SQL, ETL, PySpark, and GCP.

Required Past Experience:
• 3+ years of experience building ETL pipelines, along with GCP cloud and PySpark experience
• Worked on at least one development project from an ETL perspective
• File processing using Shell/Python scripting
• Hands-on experience writing business logic in SQL or PL/SQL
• ETL testing and troubleshooting
• Good to have: experience building a cloud ETL pipeline
• Hands-on experience with code versioning tools like Git and SVN
• Good knowledge of the code deployment process and documentation

Required Skills and Abilities:
• Mandatory skills: hands-on and deep experience working in ETL, cloud (AWS/Azure/GCP), and PySpark
• Secondary skills: strong SQL query and shell scripting skills
• Good communication skills to understand business requirements from SMEs
• Basic knowledge of data modeling
• Good understanding of end-to-end data pipelines and code optimization
• Hands-on experience developing ETL pipelines for heterogeneous sources
• Good to have: experience building a cloud ETL pipeline
Posted 3 months ago
3 - 7 years
6 - 13 Lacs
Chennai, Pune, Bengaluru
Work from Office
We are delighted to inform you about a Data Engineer opening with a reputed MNC in Bangalore/Pune/Chennai/Gurgaon. If interested, kindly share your resume along with the below details to anitha.m@aligngroup.in and we will reach out ASAP.

Total Experience:
Relevant Experience:
Current CTC:
Expected CTC:
Notice Period (LWD if serving notice):
Any offers:
PAN number or PAN copy:

Job Description / Role & Responsibilities:
• MS SQL and Azure Databricks experience
• Implement and manage data models in DBT
• Scripting in Python (plus)
Posted 3 months ago
3 - 8 years
14 - 24 Lacs
Pune, Bengaluru, Hyderabad
Hybrid
Job Title: Snowflake Data Engineer (Senior Developer/Technical Lead)
Location: Bangalore, Hyderabad, Pune, Noida, Kolkata
Work Experience: 3 to 12 years

Role & Required Skills:
• Proven experience in Snowflake.
• Good experience in SQL and Python.
• Experience in data warehousing.
• Experience in data migration from SQL to Snowflake.
• AWS experience is nice to have.
• Good communication skills.

Responsibilities:
• Implement business and IT data requirements through new data strategies and designs across all data platforms (relational, dimensional, and NoSQL) and data tools (reporting, visualization, analytics).
• Work with business and application/solution teams to implement data strategies, build data flows, and develop conceptual/logical/physical data models.
• Define and govern data modeling and design standards, tools, best practices, and related development for enterprise data models.
• Identify the architecture, infrastructure, and interfaces to data sources, tools supporting automated data loads, security concerns, analytic models, and data visualization.
• Hands-on modeling, design, configuration, installation, performance tuning, and sandbox POC.
• Work proactively and independently to address project requirements and articulate issues/challenges to reduce project delivery risks.
• Analyze and translate business needs into long-term solution data models.
• Evaluate existing data systems.
• Work with the development team to create conceptual data models and data flows.
• Develop best practices for data coding to ensure consistency within the system.
• Review modifications of existing systems for cross-compatibility.
• Implement data strategies and develop physical data models.
• Update and optimize local and metadata models.
• Evaluate implemented data systems for variances, discrepancies, and efficiency.
• Troubleshoot and optimize data systems.
Posted 3 months ago
8 - 20 years
25 - 45 Lacs
Pune, Delhi NCR, Hyderabad
Work from Office
Principal accountabilities:
• Ensure the solutions developed and deployed by you and your engineering team are fit for purpose and GDPR compliant, meet the business requirements, adhere to quality standards, and deliver the intended value.
• Ensure all developments delivered by you and your engineering team follow the agreed standards and release management processes. Be responsible for evolving these standards in an environment of continuous change, facilitating team sessions that seek to improve them and keep them current and compliant.
• Identify issues and risks with the solutions that you create and those built by your team. Lead their resolution, providing support to colleagues in your team as required.
• Provide input into team and project planning activities, and work within an agile delivery framework as part of a Scrum team.
• Facilitate the refinement of tasks in the Product Backlog, and guide and support your team as you break deliveries down into technical data engineering components. Convert user stories into technical, quality, testing, and documentation tasks in the chosen workflow management tool.
• Liaise with end customers, the Architect, and the Product Owner to translate business goals into compliant specifications that facilitate the delivery of the technical solution and can be used by any engineer.
• Participate in high-level Epic refinement sessions with the Scrum team to ensure that Epics are understood and achievable by the engineering team.
• Provide mentorship and support to junior and mid-level data engineers in your team, and be a role model to colleagues across the Data Ops function.
• Treating Customers Fairly: ensure that our application estate has core values embedded around providing value fairly to all of our customers.
• Manage the team in accordance with the Company's policies and procedures, including the Partnership Agreement, so that the team's business objectives are achieved consistently.

Qualifications, knowledge and skills:

Qualifications:
• Relevant professional qualifications in Data Engineering or a related specialism

Knowledge:
• Highly proficient across the domains of data engineering, including ELT/ETL, metadata management, data integration, data management in transit and at rest, and data streaming.
• Uses Kimball modelling techniques to design databases and data warehousing solutions that conform to the standards adopted by DataOps, and shows others how to do this.
• Fully competent in the cloud data warehousing paradigms specific to our platform, with a good awareness of alternatives.
• Uses modern project delivery methodologies to plan, lead, and deliver solutions (Agile, Waterfall, etc.), and manages team resources in partnership with the Product Owners.
• Expert in one of SQL or Python and proficient in the other.
• Has expert and detailed working knowledge of the tools and platforms used by the Engineering Team: dbt Cloud, Snowflake, Azure.

Experience:
• Demonstrates logical and lateral thinking, considering all angles in their decision making, and supports others to do this.
• Uses technology to solve business problems and has a proven track record of delivering data engineering solutions alone and as part of a team.
• Uses Kimball dimensional data warehouse techniques to design and build data solutions, and demonstrates to others how to do this.
• Identifies opportunities for innovation and continuous improvement in existing services by understanding current market offerings in the data engineering space.
• 4+ years of coal-face data engineering experience using the tools adopted by DataOps: Snowflake, Azure Synapse/Data Factory, dbt Cloud, Azure DevOps.
• 10+ years using SQL.

Leadership:
• Ability to communicate with all levels of stakeholders, explaining complex data engineering issues to non-technical stakeholders and technical colleagues alike.
• Creates an inspiring vision and clear direction that nurtures an open, honest, and collaborative team ethos.
• Recognises and appreciates the efforts of others.
• A mentor for engineers, demonstrating the facilitation of team activities, supporting technical issues, offering support, development guidance, and coaching to less experienced engineers, and conducting regular touch-points with your team.
Posted 3 months ago
4 - 7 years
10 - 32 Lacs
Pune, Delhi NCR, Hyderabad
Work from Office
Roles and Responsibilities:
• Skilled Azure Data Engineer with expertise in DBT, Snowflake, ETL processes, and data modeling; the practitioner should have a strong background in data engineering, cloud platforms, and data transformation.
• Hands-on experience implementing best practices for data storage, security, and governance.
• Design, develop, and maintain scalable data pipelines using Azure Data Factory (ADF) and DBT.
• Implement ETL processes to extract, transform, and load data from various sources into Snowflake.
• Python experience is good to have.
Posted 3 months ago
4 - 7 years
6 - 9 Lacs
Bengaluru
Work from Office
Job Responsibilities:
• Assist in the design and implementation of a Snowflake-based analytics solution (data lake and data warehouse) on AWS
• Requirements definition, source data analysis and profiling, the logical and physical design of the data lake and data warehouse, as well as the design of data integration and publication pipelines
• Develop Snowflake deployment and usage best practices
• Help educate the rest of the team members on the capabilities and limitations of Snowflake
• Build and maintain data pipelines adhering to suggested enterprise architecture principles and guidelines
• Design, build, test, and maintain data management systems
• Work in sync with internal and external team members such as data architects, data scientists, and data analysts to handle all sorts of technical issues
• Act as a technical leader within the team
• Work in an Agile/Lean model
• Deliver quality deliverables on time
• Translate complex functional requirements into technical solutions

EXPERTISE AND QUALIFICATIONS
Essential Skills, Education and Experience:
• Should have a B.E., B.Tech., MCA, or equivalent degree along with 4-7 years of experience in Data Engineering
• Strong experience in DBT concepts such as model building and configurations, incremental load strategies, macros, and DBT tests
• Strong experience in SQL
• Strong experience in AWS
• Creation and maintenance of optimum data pipeline architecture for ingestion and processing of data
• Creation of necessary infrastructure for ETL jobs from a wide range of data sources using Talend, DBT, S3, and Snowflake
• Experience in data storage technologies like Amazon S3, SQL, and NoSQL
• Data modeling technical awareness
• Experience in working with stakeholders in different time zones
• Good to have: AWS data services development experience
• Working knowledge of big data technologies
• Experience collaborating with data quality and data governance teams
• Exposure to reporting tools like Tableau
• Apache Airflow, Apache Kafka (nice to have)
• Payments domain knowledge: in-depth understanding of CRM, Accounting, etc.
• Regulatory reporting exposure

Other skills:
• Good communication skills
• Team player
• Problem solver
• Willing to learn new technologies, share your ideas, and assist other team members as needed
• Strong analytical and problem-solving skills; ability to define problems, collect data, establish facts, and draw conclusions
Posted 3 months ago
12 - 18 years
35 - 40 Lacs
Chennai, Bengaluru, Trivandrum
Work from Office
Job Role: Sr. Data Architect - Snowflake
Experience: 12+ years
Notice period: Immediate
Location: Trivandrum/Bangalore/Chennai

We are looking for candidates with 12+ years of experience for this role.

Job Description:
A minimum of 12+ years of experience in data engineering, encompassing the development and scaling of data warehouse and data lake platforms.
Working hours: 8 hours, 12 PM to 9 PM.

Responsibilities include:
• Lead the design and architecture of data solutions leveraging Snowflake, ensuring scalability, performance, and reliability.
• Collaborate with stakeholders to understand business requirements and translate them into technical specifications and data models.
• Develop and maintain data architecture standards, guidelines, and best practices, including data governance principles and DataOps methodologies.
• Oversee the implementation of data pipelines, ETL processes, and data governance frameworks within Snowflake environments.
• Provide technical guidance and mentorship to data engineering teams, fostering skill development and knowledge sharing.
• Conduct performance tuning and optimization of Snowflake databases and queries.
• Stay updated on emerging trends and advancements in Snowflake, cloud data technologies, data governance, and DataOps practices.

Certifications:
• Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
• Snowflake certifications (e.g., SnowPro Core, SnowPro Advanced Architect, SnowPro Advanced Data Engineer) are desirable but not mandatory.

Primary Skills:
• Snowflake experience, data architecture experience, ETL process experience, and large data migration solutioning experience
• Extensive experience in designing and implementing data solutions using Snowflake and DBT
• Proficiency in data modeling, schema design, and optimization within Snowflake environments
• Strong understanding of cloud data warehousing concepts and best practices, particularly with Snowflake
• Expertise in Python/Java/Scala, SQL, ETL processes, and data integration techniques, with a focus on Snowflake
• Familiarity with AWS, Azure, and GCP
• Demonstrated experience in implementing data governance frameworks and DataOps practices
• Working experience in SAP environments
• Familiarity with real-time streaming technologies and Change Data Capture (CDC) mechanisms
• Knowledge of data governance principles and DataOps methodologies
• Proven track record of architecting and delivering complex data solutions on cloud platforms/Snowflake

Secondary Skills:
• Experience with data visualization tools (e.g., Tableau, Power BI) is a plus
• Knowledge of data security and compliance standards
• Excellent communication and presentation skills, with the ability to convey complex technical concepts to junior and non-technical stakeholders
• Strong problem-solving and analytical skills
• Ability to work effectively in a collaborative team environment and lead cross-functional initiatives
Posted 3 months ago
6 - 8 years
30 - 32 Lacs
Bengaluru
Work from Office
We are seeking an experienced Amazon Redshift Developer / Data Engineer to design, develop, and optimize cloud-based data warehousing solutions. The ideal candidate should have expertise in Amazon Redshift, ETL processes, SQL optimization, and cloud-based data lake architectures. This role involves working with large-scale datasets, performance tuning, and building scalable data pipelines.

Key Responsibilities:
• Design, develop, and maintain data models, schemas, and stored procedures in Amazon Redshift.
• Optimize Redshift performance using distribution styles, sort keys, and compression techniques.
• Build and maintain ETL/ELT data pipelines using AWS Glue, AWS Lambda, Apache Airflow, and dbt.
• Develop complex SQL queries, stored procedures, and materialized views for data transformations.
• Integrate Redshift with AWS services such as S3, Athena, Glue, Kinesis, and DynamoDB.
• Implement data partitioning, clustering, and query tuning strategies for optimal performance.
• Ensure data security, governance, and compliance (GDPR, HIPAA, CCPA, etc.).
• Work with data scientists and analysts to support BI tools like QuickSight, Tableau, and Power BI.
• Monitor Redshift clusters, troubleshoot performance issues, and implement cost-saving strategies.
• Automate data ingestion, transformations, and warehouse maintenance tasks.

Required Skills & Qualifications:
• 6+ years of experience in data warehousing, ETL, and data engineering.
• Strong hands-on experience with Amazon Redshift and AWS data services.
• Expertise in SQL performance tuning, indexing, and query optimization.
• Experience with ETL/ELT tools like AWS Glue, Apache Airflow, dbt, or Talend.
• Knowledge of big data processing frameworks (Spark, EMR, Presto, Athena).
• Familiarity with data lake architectures and the modern data stack.
• Proficiency in Python, shell scripting, or PySpark for automation.
• Experience working in Agile/DevOps environments with CI/CD pipelines.
Posted 3 months ago