0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We have an exciting opportunity to join the Macquarie team as a Data Engineer and implement the group's data strategy, leveraging cutting-edge technology and cloud services. If you are keen to work in the private markets space for one of Macquarie's most successful global divisions, then this role could be for you.

At Macquarie, our advantage is bringing together diverse people and empowering them to shape all kinds of possibilities. We are a global financial services group operating in 31 markets with 56 years of unbroken profitability. You'll be part of a friendly and supportive team where everyone, no matter their role, contributes ideas and drives outcomes.

What role will you play?
In this role, you will design and manage data pipelines using Python, SQL, and tools like Airflow and DBT Cloud, while collaborating with business teams to develop prototypes based on business requirements and to create and maintain data products. You will gain hands-on experience with technologies such as Google Cloud Platform (GCP) services, including BigQuery, to deliver scalable and robust solutions. As a key team member, your strong communication skills and self-motivation will support engagement with stakeholders at all levels.

What You Offer
- Strong proficiency in data technology platforms, including DBT Cloud, GCP BigQuery, and Airflow
- Solid experience in SQL is mandatory; Python, with a good understanding of APIs, is advantageous
- Domain knowledge of the asset management and private markets industry, including relevant technologies and business processes
- Familiarity with cloud platforms such as AWS, GCP, and Azure, along with related services
- Excellent verbal and written communication skills to effectively engage stakeholders and simplify complex technical concepts for non-technical audiences

We love hearing from anyone inspired to build a better future with us. If you're excited about the role or working at Macquarie, we encourage you to apply.

What We Offer
Macquarie employees can access a wide range of benefits which, depending on eligibility criteria, include:
- Hybrid and flexible working arrangements
- One wellbeing leave day per year
- Up to 20 weeks paid parental leave, as well as benefits to support you as you transition to life as a working parent
- Paid volunteer leave and donation matching
- Other benefits to support your physical, mental and financial wellbeing
- Access to a wide range of learning and development opportunities

About Technology
Technology enables every aspect of Macquarie, for our people, our customers and our communities. We're a global team that is passionate about accelerating the digital enterprise, connecting people and data, building platforms and applications, and designing tomorrow's technology solutions.

Our commitment to diversity, equity and inclusion
We are committed to fostering a diverse, equitable and inclusive workplace. We encourage people from all backgrounds to apply and welcome all identities, including race, ethnicity, cultural identity, nationality, gender (including gender identity or expression), age, sexual orientation, marital or partnership status, parental, caregiving or family status, neurodiversity, religion or belief, disability, or socio-economic background. We welcome further discussions on how you can feel included and belong at Macquarie as you progress through our recruitment process. Our aim is to provide reasonable adjustments to individuals who may need support during the recruitment process and through working arrangements.
If you require additional assistance, please let us know during the application process.
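The posting above centres on orchestrating Python and SQL pipelines with Airflow and DBT Cloud. Purely as an illustrative aside rather than anything from the listing, a minimal Airflow DAG that shells out to the dbt CLI could look like the sketch below; the DAG id, schedule, and project path are assumed placeholders.

```python
# Hedged sketch: orchestrating dbt runs and tests from Airflow.
# The DAG name, schedule, and /opt/dbt/analytics path are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_dbt_refresh",          # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Run the dbt models, then run dbt tests against the warehouse profile.
    run_models = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",
    )
    test_models = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics",
    )
    run_models >> test_models
```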
Posted 2 days ago
6.0 - 10.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Lead Data Engineer specializing in Snowflake Migration at Anblicks, you will be a key player in our Data Modernization Center of Excellence (COE). You will be at the forefront of transforming traditional data platforms by utilizing Snowflake, cloud-native tools, and intelligent automation to help enterprises unlock the power of the cloud.

Your primary responsibility will be to lead the migration of legacy data warehouses such as Teradata, Netezza, Oracle, or SQL Server to Snowflake. You will re-engineer and modernize ETL pipelines using cloud-native tools and frameworks like DBT, Snowflake Tasks, Streams, and Snowpark. Additionally, you will design robust ELT pipelines on Snowflake that ensure high performance, scalability, and cost optimization, while integrating Snowflake with AWS, Azure, or GCP.

In this role, you will also focus on implementing secure and compliant architectures with RBAC, masking policies, Unity Catalog, and SSO. Automating repeatable tasks, ensuring data quality and parity between source and target systems, and mentoring junior engineers will be essential aspects of your responsibilities. Collaboration with client stakeholders, architects, and delivery teams to define migration strategies, as well as presenting solutions and roadmaps to technical and business leaders, will also be part of your role.

To qualify for this position, you should have at least 6 years of experience in Data Engineering or Data Warehousing, with a minimum of 3 years of hands-on experience in Snowflake design and development. Strong expertise in migrating ETL pipelines from Talend and/or Informatica to cloud-native alternatives, along with proficiency in SQL, data modeling, ELT design, and pipeline performance tuning, are prerequisites. Familiarity with tools like DBT Cloud, Airflow, Snowflake Tasks, or similar orchestrators, as well as a solid understanding of cloud data architecture, security frameworks, and data governance, is also required.

Preferred qualifications include Snowflake certifications (SnowPro Core and/or SnowPro Advanced Architect), experience with custom migration tools, metadata-driven pipelines, or LLM-based code conversion, familiarity with domain-specific architectures in Retail, Healthcare, or Manufacturing, and prior experience in a COE or modernization-focused consulting environment.

By joining Anblicks as a Lead Data Engineer, you will have the opportunity to lead enterprise-wide data modernization programs, tackle complex real-world challenges, and work alongside certified Snowflake architects, cloud engineers, and innovation teams. You will also have the chance to build reusable IP that scales across clients and industries, while experiencing accelerated career growth in the dynamic Data & AI landscape.
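The role above calls out ensuring data quality and parity between source and target systems during migrations. Purely as an illustrative sketch, and not anything specified in the posting, a simple row-count parity check between a legacy warehouse and Snowflake might look like this; the driver choice, connection details, and table names are assumptions.

```python
# Hedged sketch: row-count parity check between a legacy source table and its
# Snowflake target after migration. All connection details are placeholders.
import pyodbc                    # assumed driver for a legacy SQL Server source
import snowflake.connector

legacy = pyodbc.connect("DSN=legacy_dw")             # hypothetical DSN
snow = snowflake.connector.connect(
    account="my_account", user="svc_migration",      # placeholder credentials
    password="***", warehouse="MIGRATION_WH", database="ANALYTICS",
)

def row_count(cursor, table: str) -> int:
    """Return COUNT(*) for the given fully qualified table name."""
    cursor.execute(f"SELECT COUNT(*) FROM {table}")
    return cursor.fetchone()[0]

src = row_count(legacy.cursor(), "dbo.orders")       # hypothetical source table
tgt = row_count(snow.cursor(), "RAW.ORDERS")         # hypothetical target table
print(f"source={src} target={tgt} parity={'OK' if src == tgt else 'MISMATCH'}")
```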
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
We have an exciting opportunity to join the Macquarie team as a Data Engineer and implement the group's data strategy, leveraging cutting-edge technology and cloud services. If you are keen to work in the private markets space for one of Macquarie's most successful global divisions, then this role could be for you.

At Macquarie, our advantage is bringing together diverse people and empowering them to shape all kinds of possibilities. We are a global financial services group operating in 31 markets with 56 years of unbroken profitability. You'll be part of a friendly and supportive team where everyone, no matter their role, contributes ideas and drives outcomes.

In this role, you will design and manage data pipelines using Python, SQL, and tools like Airflow and DBT Cloud, while collaborating with business teams to develop prototypes and maintain data products. You will gain hands-on experience with technologies such as Google Cloud Platform (GCP) services, including BigQuery, to deliver scalable and robust solutions. As a key team member, your strong communication skills and self-motivation will support engagement with stakeholders at all levels.

You should have strong proficiency in data technology platforms, including DBT Cloud, GCP BigQuery, and Airflow. Domain knowledge of the asset management and private markets industry, including relevant technologies and business processes, is essential. Familiarity with cloud platforms such as AWS, GCP, and Azure, along with related services, is preferred. Excellent verbal and written communication skills are required to effectively engage stakeholders and simplify complex technical concepts for non-technical audiences. Solid experience in SQL and Python, with a good understanding of APIs, is advantageous.

We love hearing from anyone inspired to build a better future with us. If you're excited about the role or working at Macquarie, we encourage you to apply.

Technology enables every aspect of Macquarie, for our people, our customers, and our communities. We're a global team that is passionate about accelerating the digital enterprise, connecting people and data, building platforms and applications, and designing tomorrow's technology solutions.

Our aim is to provide reasonable adjustments to individuals who may need support during the recruitment process and through working arrangements. If you require additional assistance, please let us know during the application process.
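The posting above mentions delivering solutions on GCP BigQuery from Python and SQL. As an illustrative aside rather than part of the listing, a minimal query against BigQuery with the google-cloud-bigquery client might look like the sketch below; the dataset, table, and column names are hypothetical.

```python
# Hedged sketch: running a BigQuery query from Python.
# The table and columns are illustrative placeholders, not a real schema.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

sql = """
    SELECT fund_id, SUM(commitment_amount) AS total_commitment
    FROM `analytics.private_markets.commitments`   -- hypothetical table
    GROUP BY fund_id
"""
for row in client.query(sql).result():
    print(row.fund_id, row.total_commitment)
```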
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
As a Data Engineer at Macquarie, you have an exciting opportunity to implement the group's data strategy by leveraging cutting-edge technology and cloud services. If you are enthusiastic about working in the private markets space within one of Macquarie's most successful global divisions, then this role could be the perfect fit for you.

At Macquarie, we believe in the power of diversity and empowerment, bringing together a team of people from various backgrounds and enabling them to explore endless possibilities. With our global presence in 31 markets and 56 years of continuous profitability, you will join a supportive team where everyone's ideas are valued, and collective efforts drive impactful outcomes.

In this role, your responsibilities will include designing and managing data pipelines using Python, SQL, and tools like Airflow and DBT Cloud. You will collaborate closely with business teams to develop prototypes and maintain data products. Additionally, you will have the opportunity to work with cutting-edge technologies such as Google Cloud Platform (GCP) services, specifically BigQuery, to deliver scalable and robust solutions. Your effective communication skills and self-motivation will be crucial in engaging stakeholders at all levels.

To excel in this role, you should have a strong proficiency in data technology platforms, including DBT Cloud, GCP BigQuery, and Airflow. Domain knowledge of asset management and the private markets industry, including relevant technologies and business processes, is highly desirable. Familiarity with cloud platforms such as AWS, GCP, and Azure, along with related services, will be beneficial. Excellent verbal and written communication skills are essential for effectively engaging stakeholders and simplifying complex technical concepts for non-technical audiences. Solid experience in SQL and Python, with a good understanding of APIs, will be advantageous.

If you are inspired to contribute to building a better future with us and are excited about the role or working at Macquarie, we encourage you to apply and join our team.

About Technology:
Technology plays a vital role in every aspect of Macquarie, empowering our people, customers, and communities. We are a global team passionate about accelerating the digital enterprise, connecting people and data, building platforms and applications, and designing tomorrow's technology solutions.

Our Commitment to Diversity, Equity, and Inclusion:
We are committed to providing reasonable adjustments to individuals who may need support during the recruitment process and through working arrangements. If you require additional assistance, please let us know during the application process.
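The role above highlights prototyping with business teams, a good understanding of APIs, and BigQuery. The following is an illustrative sketch only, not part of the listing: it pulls records from a hypothetical REST endpoint into a DataFrame and loads them into a hypothetical BigQuery table (the load path also assumes pyarrow is installed).

```python
# Hedged sketch: a small prototype that ingests API data into BigQuery.
# The endpoint URL, response shape, and destination table are placeholders.
import requests
import pandas as pd
from google.cloud import bigquery

resp = requests.get("https://api.example.com/v1/holdings", timeout=30)  # hypothetical API
resp.raise_for_status()
df = pd.DataFrame(resp.json()["results"])   # assumes a {"results": [...]} payload

client = bigquery.Client()
job = client.load_table_from_dataframe(df, "my-project.staging.holdings")  # placeholder table
job.result()  # wait for the load job to finish
print(f"Loaded {job.output_rows} rows")
```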
Posted 2 weeks ago
3.0 - 5.0 years
8 - 15 Lacs
Hyderabad
Work from Office
We are looking for an experienced and results-driven Data Engineer to join our growing Data Engineering team. The ideal candidate will be proficient in building scalable, high-performance data transformation pipelines using Snowflake and dbt, and be able to work effectively in a consulting setup. In this role, you will be instrumental in ingesting, transforming, and delivering high-quality data to enable data-driven decision-making across the client's organization.

Key Responsibilities
1. Design and implement scalable ELT pipelines using dbt on Snowflake, following industry-accepted best practices.
2. Build ingestion pipelines from various sources, including relational databases, APIs, cloud storage, and flat files, into Snowflake.
3. Implement data modelling and transformation logic to support a layered architecture (e.g., staging, intermediate, and mart layers, or medallion architecture) to enable reliable and reusable data assets.
4. Leverage orchestration tools (e.g., Airflow, dbt Cloud, or Azure Data Factory) to schedule and monitor data workflows.
5. Apply dbt best practices: modular SQL development, testing, documentation, and version control.
6. Perform performance optimizations in dbt/Snowflake through clustering, query profiling, materialization, partitioning, and efficient SQL design.
7. Apply CI/CD and Git-based workflows for version-controlled deployments.
8. Contribute to a growing internal knowledge base of dbt macros, conventions, and testing frameworks.
9. Collaborate with multiple stakeholders, such as data analysts, data scientists, and data architects, to understand requirements and deliver clean, validated datasets.
10. Write well-documented, maintainable code using Git for version control and CI/CD processes.
11. Participate in Agile ceremonies, including sprint planning, stand-ups, and retrospectives.
12. Support consulting engagements through clear documentation, demos, and delivery of client-ready solutions.

Required Qualifications
- 3 to 5 years of experience in data engineering roles, with 2+ years of hands-on experience in Snowflake and dbt.
- Experience building and deploying dbt models in a production environment.
- Expert-level SQL and a strong understanding of ELT principles.
- Strong understanding of ELT patterns and data modelling (Kimball/dimensional preferred).
- Familiarity with data quality and validation techniques: dbt tests, dbt docs, etc.
- Experience with Git, CI/CD, and deployment workflows in a team setting.
- Familiarity with orchestrating workflows using tools like dbt Cloud, Airflow, or Azure Data Factory.

Core Competencies
- Data Engineering and ELT Development: Building robust and modular data pipelines using dbt. Writing efficient SQL for data transformation and performance tuning in Snowflake. Managing environments, sources, and deployment pipelines in dbt.
- Cloud Data Platform Expertise: Strong proficiency with Snowflake, including warehouse sizing, query profiling, data loading, and performance optimization. Experience working with cloud storage (Azure Data Lake, AWS S3, or GCS) for ingestion and external stages.

Technical Toolset
- Languages & Frameworks: Python for data transformation, notebook development, and automation; SQL with a strong grasp of querying and performance tuning.

Best Practices and Standards
- Knowledge of modern data architecture concepts, including layered architecture (e.g., staging, intermediate, and mart layers, or medallion architecture).
- Familiarity with data quality, unit testing (dbt tests), and documentation (dbt docs).

Security & Governance
- Access and Permissions: Understanding of access control within Snowflake (RBAC), role hierarchies, and secure data handling.
- Familiarity with data privacy policies (GDPR basics) and encryption at rest/in transit.

Deployment & Monitoring
- DevOps and Automation: Version control using Git and experience with CI/CD practices in a data context.
- Monitoring and logging of pipeline executions, with alerting on failures.

Soft Skills
- Communication & Collaboration: Ability to present solutions and handle client demos/discussions.
- Work closely with onshore and offshore teams of analysts, data scientists, and architects.
- Ability to document pipelines and transformations clearly.
- Basic Agile/Scrum familiarity: working in sprints and logging tasks.
- Comfort with ambiguity, competing priorities, and a fast-changing client environment.

Education
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Certifications such as Snowflake SnowPro or dbt Certified Developer are a plus.
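The responsibilities above include applying CI/CD and Git-based workflows together with dbt testing best practices. As a hedged sketch only, here is one way a CI step might invoke dbt programmatically using the Python entry point available in dbt-core 1.5+; the project path is a hypothetical placeholder and this is not the hiring company's actual pipeline.

```python
# Hedged sketch: running `dbt build` from a CI/CD step via dbt-core's
# programmatic API (dbt-core >= 1.5). The project path is a placeholder.
from dbt.cli.main import dbtRunner, dbtRunnerResult

runner = dbtRunner()

# Build all models and run their tests; fail the CI job if anything breaks.
res: dbtRunnerResult = runner.invoke(
    ["build", "--project-dir", "/repo/analytics"]
)
if not res.success:
    raise SystemExit("dbt build failed - blocking the deployment")
```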
Posted 2 months ago
6.0 - 10.0 years
16 - 25 Lacs
Hyderabad
Work from Office
Key Responsibilities
- Architect and implement modular, test-driven ELT pipelines using dbt on Snowflake.
- Design layered data models (e.g., staging, intermediate, and mart layers, or medallion architecture) aligned with dbt best practices.
- Lead ingestion of structured and semi-structured data from APIs, flat files, cloud storage (Azure Data Lake, AWS S3), and databases into Snowflake.
- Optimize Snowflake for performance and cost: warehouse sizing, clustering, materializations, query profiling, and credit monitoring.
- Apply advanced dbt capabilities, including macros, packages, custom tests, sources, exposures, and documentation using dbt docs.
- Orchestrate workflows using dbt Cloud, Airflow, or Azure Data Factory, integrated with CI/CD pipelines.
- Define and enforce data governance and compliance practices using Snowflake RBAC, secure data sharing, and encryption strategies.
- Collaborate with analysts, data scientists, architects, and business stakeholders to deliver validated, business-ready data assets.
- Mentor junior engineers, lead architectural/code reviews, and help establish reusable frameworks and standards.
- Engage with clients to gather requirements, present solutions, and manage end-to-end project delivery in a consulting setup.

Required Qualifications
- 5 to 8 years of experience in data engineering roles, with 3+ years of hands-on experience working with Snowflake and dbt in production environments.

Technical Skills
- Cloud Data Warehouse & Transformation Stack: Expert-level knowledge of SQL and Snowflake, including performance optimization, storage layers, query profiling, clustering, and cost management. Experience in dbt development: modular model design, macros, tests, documentation, and version control using Git.
- Orchestration and Integration: Proficiency in orchestrating workflows using dbt Cloud, Airflow, or Azure Data Factory. Comfortable working with data ingestion from cloud storage (e.g., Azure Data Lake, AWS S3) and APIs.
- Data Modelling and Architecture: Dimensional modelling (star/snowflake schemas) and slowly changing dimensions. Knowledge of modern data warehousing principles. Experience implementing medallion architecture (Bronze/Silver/Gold layers). Experience working with Parquet, JSON, CSV, or other data formats.
- Programming Languages: Python for data transformation, notebook development, and automation; SQL with a strong grasp of querying and performance tuning; Jinja (nice to have) for advanced dbt development.
- Data Engineering & Analytical Skills: ETL/ELT pipeline design and optimization. Exposure to AI/ML data pipelines, feature stores, or MLflow for model tracking (good to have). Exposure to data quality and validation frameworks.
- Security & Governance: Experience implementing data quality checks using dbt tests. Data encryption, secure key management, and security best practices for Snowflake and dbt.

Soft Skills & Leadership
- Ability to thrive in client-facing roles with competing/changing priorities and fast-paced delivery cycles.
- Stakeholder Communication: Collaborate with business stakeholders to understand objectives and convert them into actionable data engineering designs.
- Project Ownership: End-to-end delivery, including design, implementation, and monitoring.
- Mentorship: Guide junior engineers, establish best practices, and build new skills in the team.
- Agile Practices: Work in sprints and participate in scrum ceremonies and story estimation.

Education
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Certifications such as Snowflake SnowPro Advanced or dbt Certified Developer are a plus.
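The lead role above covers enforcing governance with Snowflake RBAC. Purely as an illustration, and not the hiring company's or any client's actual setup, a small grant script run through the Snowflake Python connector might look like this; the role, warehouse, database, and credential values are placeholders.

```python
# Hedged sketch: applying a minimal RBAC hierarchy in Snowflake from Python.
# All object names and credentials below are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="svc_admin", password="***",  # placeholder credentials
    role="SECURITYADMIN",
)

statements = [
    "CREATE ROLE IF NOT EXISTS ANALYTICS_READER",
    "GRANT USAGE ON WAREHOUSE REPORTING_WH TO ROLE ANALYTICS_READER",
    "GRANT USAGE ON DATABASE ANALYTICS TO ROLE ANALYTICS_READER",
    "GRANT SELECT ON ALL TABLES IN SCHEMA ANALYTICS.MART TO ROLE ANALYTICS_READER",
    "GRANT ROLE ANALYTICS_READER TO ROLE SYSADMIN",  # keep custom roles rolled up to SYSADMIN
]

cur = conn.cursor()
for stmt in statements:
    cur.execute(stmt)  # apply each grant in order
```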
Posted 2 months ago