
398 dbt Jobs - Page 4

Set up a Job Alert
JobPe aggregates these listings for easy access, but you apply directly on the original job portal.

6.0 - 10.0 years

16 - 25 Lacs

Coimbatore

Work from Office

Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms.

Key Responsibilities:
- Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT
- Integrate and manage data from various sources using SAP Data Services
- Develop and maintain scalable data models, data marts, and data warehouses
- Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs
- Implement best practices in data governance, data lineage, and metadata management
- Monitor data quality, troubleshoot issues, and ensure data integrity
- Optimize Snowflake data warehouse performance (partitioning, caching, query tuning)
- Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins)
- Document architecture, data flows, and technical specifications
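
For context on the Snowflake-plus-dbt workflow this listing describes, the sketch below shows a minimal dbt Python model running on Snowflake (dbt SQL models are the more common form). This is only an illustrative sketch: the model, table, and column names are hypothetical placeholders, and it assumes dbt 1.3+ with the Snowflake adapter, where dbt passes a Snowpark session into the model function.

    # models/marts/fct_daily_orders.py -- hypothetical dbt Python model for Snowflake
    from snowflake.snowpark.functions import col, sum as sum_

    def model(dbt, session):
        # Materialize the result as a table in the target schema
        dbt.config(materialized="table")

        orders = dbt.ref("stg_orders")  # upstream staging model (placeholder name)

        # Aggregate order amounts per day; the returned DataFrame becomes the model's table
        return (
            orders.group_by(col("ORDER_DATE"))
                  .agg(sum_(col("AMOUNT")).alias("TOTAL_AMOUNT"))
        )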

Posted 1 week ago

Apply

6.0 - 10.0 years

16 - 25 Lacs

Chandigarh

Work from Office

Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms.

Key Responsibilities:
- Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT
- Integrate and manage data from various sources using SAP Data Services
- Develop and maintain scalable data models, data marts, and data warehouses
- Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs
- Implement best practices in data governance, data lineage, and metadata management
- Monitor data quality, troubleshoot issues, and ensure data integrity
- Optimize Snowflake data warehouse performance (partitioning, caching, query tuning)
- Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins)
- Document architecture, data flows, and technical specifications

Posted 1 week ago

Apply

8.0 - 12.0 years

15 - 25 Lacs

Noida, Hyderabad, Gurugram

Hybrid

Job Title: Senior Data Engineer - Snowflake, Python, DBT
Experience: 8 to 12 Years Only
Location: Hyderabad/Gurgaon/Noida
Employment Type: Full-Time
Notice Period: 0-30 only

Job Summary: We are hiring a highly skilled Data Engineer with strong hands-on experience in Snowflake, Python, DBT, and modern data architecture. You will be responsible for designing and maintaining robust data pipelines and scalable warehouse solutions to power analytics and business insights.

Key Responsibilities:
- Design and implement scalable ETL/ELT data pipelines
- Build and optimize Snowflake data warehouse architecture
- Develop DBT models for transformation, lineage, and documentation
- Write Python scripts for data processing and workflow automation
- Ensure data quality, integrity, and governance
- Collaborate with analysts, scientists, and business teams
- Implement CI/CD, version control, and performance tuning
- Monitor and troubleshoot data pipeline issues

Required Skills:
- Mandatory: Snowflake (incl. Snowpark), Python, DBT, SQL, Data Modeling
- Good to Have: Airflow/Prefect, AWS/Azure/GCP, Container Services
- Strong understanding of star/snowflake schema and data warehouse design
- Experience with large-scale data sets and orchestration tools
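
As a rough illustration of the orchestration and automation this role mentions (Airflow is listed as good to have), the sketch below shows a minimal Airflow DAG that runs and then tests a dbt project once a day. The DAG id, project path, and target name are assumptions, and the schedule argument assumes Airflow 2.4 or newer.

    # Hypothetical Airflow DAG that runs dbt models on a daily schedule.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="dbt_snowflake_daily",      # assumed name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        dbt_run = BashOperator(
            task_id="dbt_run",
            bash_command="cd /opt/dbt_project && dbt run --target prod",   # assumed path/target
        )
        dbt_test = BashOperator(
            task_id="dbt_test",
            bash_command="cd /opt/dbt_project && dbt test --target prod",
        )
        dbt_run >> dbt_test  # run models first, then run the tests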

Posted 1 week ago

Apply

8.0 - 12.0 years

10 - 20 Lacs

Noida, Hyderabad, Delhi / NCR

Hybrid

Experience: 8-12 years
Notice period: Preference will be given to candidates who can join immediately or are available on short notice.

We are looking for a highly skilled Data Engineer with hands-on experience in Snowflake, Python, DBT, and modern data architecture. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and data warehouse solutions that support analytics and business intelligence initiatives.

Posted 1 week ago

Apply

6.0 - 10.0 years

16 - 25 Lacs

Patna

Work from Office

Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms.

Key Responsibilities:
- Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT
- Integrate and manage data from various sources using SAP Data Services
- Develop and maintain scalable data models, data marts, and data warehouses
- Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs
- Implement best practices in data governance, data lineage, and metadata management
- Monitor data quality, troubleshoot issues, and ensure data integrity
- Optimize Snowflake data warehouse performance (partitioning, caching, query tuning)
- Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins)
- Document architecture, data flows, and technical specifications

Posted 1 week ago

Apply

6.0 - 10.0 years

16 - 25 Lacs

Guwahati

Work from Office

Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms.

Key Responsibilities:
- Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT
- Integrate and manage data from various sources using SAP Data Services
- Develop and maintain scalable data models, data marts, and data warehouses
- Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs
- Implement best practices in data governance, data lineage, and metadata management
- Monitor data quality, troubleshoot issues, and ensure data integrity
- Optimize Snowflake data warehouse performance (partitioning, caching, query tuning)
- Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins)
- Document architecture, data flows, and technical specifications

Posted 1 week ago

Apply

10.0 - 12.0 years

10 - 12 Lacs

Hyderabad, Telangana, India

On-site

Design, develop, and maintain data pipelines and ETL processes using AWS and Snowflake. Implement data transformation workflows using DBT (Data Build Tool). Write efficient, reusable, and reliable code in Python. Optimize and tune data solutions for performance and scalability. Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions. Ensure data quality and integrity through rigorous testing and validation. Stay updated with the latest industry trends and technologies in data engineering. Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Proven experience as a Data Engineer or similar role. Strong proficiency in AWS and Snowflake. Expertise in DBT and Python programming. Experience with data modeling, ETL processes, and data warehousing. Familiarity with cloud platforms and services. Excellent problem-solving skills and attention to detail. Strong communication and teamwork abilities.
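
To make the AWS-to-Snowflake loading step concrete, here is a minimal sketch using the Snowflake Python connector to load staged files and verify the row count. The account, credentials, stage, and table names are placeholders, not details from this listing.

    # Minimal sketch: load files from an S3 stage into Snowflake and check the result.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="your_account",       # placeholder
        user="etl_user",
        password="********",
        warehouse="ETL_WH",
        database="ANALYTICS",
        schema="RAW",
    )
    try:
        cur = conn.cursor()
        # COPY INTO pulls files already uploaded to the named external stage
        cur.execute(
            "COPY INTO raw_orders FROM @s3_stage/orders/ "
            "FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)"
        )
        cur.execute("SELECT COUNT(*) FROM raw_orders")
        print("rows loaded:", cur.fetchone()[0])
    finally:
        conn.close()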

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

We are currently seeking a full-time, office-based Data Engineer to join the Information Technology team supporting our rapidly growing corporate activities. In this role, you will work collaboratively with a team on tasks and projects crucial to the company's success. If you are looking for a dynamic career opportunity to utilize your expertise and further develop and enhance your skills, this position is the perfect fit for you.

Your responsibilities will include utilizing your skills in data warehousing, business intelligence, and databases such as Snowflake, ANSI SQL, SQL Server, and T-SQL. You will support programming/software development using ETL and ELT tools like dbt, Azure Data Factory, and SSIS. Designing, developing, enhancing, and supporting business intelligence systems, primarily using Microsoft Power BI, will be a key part of your role. Additionally, you will be responsible for collecting, analyzing, and documenting user requirements, participating in software validation processes, creating software applications following the software development lifecycle, and providing end-user support for applications.

To qualify for this position, you should have a Bachelor's Degree in Computer Science, Data Science, or a related field, along with at least 3 years of experience in Data Engineering. Knowledge of developing dimensional data models, understanding of Star Schema and Snowflake schema designs, solid ETL development and reporting knowledge, and familiarity with Snowflake cloud data warehouse, Fivetran data integration, and dbt transformations are preferred. Proficiency in Python, REST APIs, and SQL Server databases is desired; knowledge of C# and Azure development is a bonus. Excellent analytical, written, and oral communication skills are essential for this role.

Medpace is a full-service clinical contract research organization (CRO) dedicated to providing Phase I-IV clinical development services to the biotechnology, pharmaceutical, and medical device industries. With a mission to accelerate the global development of safe and effective medical therapeutics, Medpace leverages local regulatory and therapeutic expertise across various major areas. Headquartered in Cincinnati, Ohio, Medpace employs over 5,000 individuals across 40+ countries.

At Medpace, you will be part of a team that makes a positive impact on the lives of patients and families facing various diseases. The work you do today will contribute to improving the lives of individuals living with illness and disease in the future. Medpace offers a flexible work environment, a competitive compensation and benefits package, structured career paths for professional growth, company-sponsored employee appreciation events, and employee health and wellness initiatives. Medpace has been recognized by Forbes as one of America's Most Successful Midsize Companies and has received CRO Leadership Awards for expertise, quality, capabilities, reliability, and compatibility.

If your qualifications align with the requirements of the position, a Medpace team member will review your profile and, if interested, contact you with further details on the next steps. Join us at Medpace and be a part of a team driven by People, Purpose, and Passion to make a difference tomorrow.

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

Vijayawada, Andhra Pradesh

On-site

As a Lead Data Engineer based in Vijayawada, Andhra Pradesh, you will be responsible for leveraging your extensive experience in data engineering and data architecture to design and develop end-to-end data solutions, data pipelines, and ETL processes. With a Bachelor's or Master's degree in Computer Science, Information Systems, or a related field, along with over 10 years of relevant experience, you will play a crucial role in ensuring the success of data projects. You will demonstrate your strong knowledge in data technologies such as Snowflake, Databricks, Apache Spark, Hadoop, Dbt, Fivetran, and Azure Data Factory. Your expertise in Python and SQL will be essential in tackling complex data challenges. Furthermore, your understanding of data governance, data quality, and data security principles will guide you in maintaining high standards of data management. In this role, your excellent problem-solving and analytical skills will be put to the test as you work both independently and collaboratively in an Agile environment. Your strong communication and leadership skills will be instrumental in managing projects, teams, and engaging in pre-sales activities. You will have the opportunity to showcase your technical leadership abilities by delivering solutions within defined timeframes and building strong client relationships. Moreover, your experience in complete project life cycle activities, agile methodologies, and working with globally distributed teams will be valuable assets in this position. Your proven track record of success in managing complex consulting projects and your ability to effectively communicate with technical and non-technical staff will contribute to the overall success of the team. If you are looking for a challenging role that combines technical expertise, leadership skills, and client engagement, this Lead Data Engineer position offers a dynamic opportunity to excel in a fast-paced and collaborative environment.,

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be joining a fast-growing data-analytics consultancy focused on Life Sciences / Pharmaceutical commercial analytics. Our team specializes in building cloud-native data platforms to provide sales, marketing, and patient-centric insights for top global pharma brands, ensuring compliant and high-impact solutions on an enterprise scale. As a Data Engineer in this role, you will be responsible for architecting, constructing, and optimizing Snowflake data warehouses and ELT pipelines using SQL, Streams, Tasks, UDFs, and Stored Procedures to cater to complex commercial-analytics workloads. You will also work on integrating various pharma data sources such as Veeva, Salesforce, IQVIA, Symphony, RWD, and patient-services feeds through Fivetran, ADF, or Python-based frameworks to ensure end-to-end data quality. Your duties will involve establishing robust data models (star, snowflake, Data Vault) that are tailored for sales reporting, market-share analytics, and AI/ML use-cases. You will drive governance and compliance efforts (HIPAA, GDPR, GxP) by implementing fine-grained access controls, masking, lineage, and metadata management. Additionally, you will lead code reviews, mentor engineers, optimize performance, and ensure cost-efficient compute usage. Collaboration with business stakeholders to translate commercial objectives into scalable data solutions and actionable insights will be a key aspect of your role. You will need to have at least 7 years of data-engineering / warehousing experience, including a minimum of 4 years of hands-on Snowflake design and development experience. Expertise in SQL, data modeling (Dimensional, Data Vault), ETL/ELT optimization, and proficiency in Python (or similar) for automation, API integrations, and orchestration are essential qualifications. Strong governance/security acumen within regulated industries (HIPAA, GDPR, PII), a Bachelor's degree in Computer Science, Engineering, or Information Systems (Masters preferred), and excellent client-facing communication and problem-solving skills in fast-paced, agile environments are required. Direct experience with pharma commercial datasets, cloud-platform depth (AWS, Azure, or GCP), familiarity with tools like Matillion/DBT/Airflow, Git, Snowflake certifications (SnowPro Core / Advanced), and knowledge of Tableau, Power BI, or Qlik connectivity are preferred qualifications. This is a full-time position that requires in-person work. If you are interested in this opportunity, please speak with the employer at +91 9008078505.,

Posted 1 week ago

Apply

5.0 - 14.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Senior Data Analytics and Quality Engineer with 7 to 14 years of experience, you will play a crucial role in ensuring the quality of analytics products within our organization. Your responsibilities will include designing and documenting testing scenarios, creating test plans, and reviewing quality specifications and technical designs for both existing and new analytics products. You will collaborate closely with the Data & Analytics team to drive data quality programs and implement automated test frameworks within an agile team structure. Your expertise in QA processes, mentoring, ETL testing, data validation, data quality, and knowledge of RCM or US Healthcare will be essential in this role. Proficiency in programming languages such as SQL (T-SQL or PL/SQL) is a must, while knowledge of Python is a plus. Hands-on experience with tools like SSMS, Toad, BI tools (Tableau, Power BI), SSIS, ADF, and Snowflake will be beneficial. Familiarity with data testing tools like Great Expectations, Deequ, dbt, and Pytest for data scripts is desirable. Your educational background should include a Bachelor's degree in computer science, Information Technology, Data Science, Math, Finance, or a related field, along with a minimum of 5 years of experience as a quality assurance engineer or data analyst with a strong focus on data quality. Preferred qualifications include QA-related certifications and a strong understanding of US healthcare revenue cycle and billing. In this role, you will be responsible for test execution for healthcare analytics, creation of detailed test plans and test cases, and ensuring that production system defects are documented and resolved promptly. Your ability to design testing procedures, write testing scripts, and monitor testing results according to best practices will be crucial in ensuring that our analytics meet established quality standards. Your knowledge of test case management tools, Agile development tools, data quality frameworks, and automated testing tools will be valuable assets in this position. Additionally, your proficiency in SQL, ability to test data systems for performance and scalability, and strong analytical skills will contribute to the success of our analytics products. Strong communication skills, process improvement abilities, and time management skills are also essential for this role. If you are looking to join a growing and innovative organization where you can work with new technology in both manual and automation testing environments, this Senior Quality Assurance Engineer position is an ideal opportunity for you.,
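
As an illustration of the automated data-quality checks this listing asks for (Pytest-style tests over warehouse tables), below is a small sketch. The connection details and the claims table and column names are hypothetical, chosen only to mirror the RCM/healthcare context.

    # Sketch of automated data-quality tests with pytest against Snowflake.
    import pytest
    import snowflake.connector

    @pytest.fixture(scope="module")
    def conn():
        c = snowflake.connector.connect(
            account="your_account", user="qa_user", password="********",   # placeholders
            warehouse="QA_WH", database="ANALYTICS", schema="CLAIMS",
        )
        yield c
        c.close()

    def test_claim_ids_are_unique(conn):
        cur = conn.cursor()
        cur.execute("SELECT COUNT(*) - COUNT(DISTINCT claim_id) FROM fact_claims")
        assert cur.fetchone()[0] == 0, "duplicate claim_id values found"

    def test_billed_amount_not_negative(conn):
        cur = conn.cursor()
        cur.execute("SELECT COUNT(*) FROM fact_claims WHERE billed_amount < 0")
        assert cur.fetchone()[0] == 0, "negative billed_amount values found"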

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

You are an experienced Senior dbt Engineer with a strong background in Snowflake and Azure cloud platforms. Your primary responsibility will be to lead the design and development of scalable, governed, and efficient data transformation pipelines using dbt. You will collaborate across functions to deliver business-ready data solutions. With at least 8 years of experience in data engineering, analytics engineering, or similar roles, you have proven expertise in dbt (Data Build Tool) and modern data transformation practices. Your advanced proficiency in SQL and deep understanding of dimensional modeling, medallion architecture, and ELT principles will be crucial for success in this role. You must have strong hands-on experience with Snowflake, including query optimization, and be proficient with Azure cloud services such as Azure Data Factory and Blob Storage. Your communication and collaboration skills should be exemplary, and you should also have familiarity with data governance, metadata management, and data quality frameworks. As a Senior dbt Engineer, your key responsibilities will include leading the design, development, and maintenance of dbt models and transformation layers. You will define and enforce data modeling standards, best practices, and development guidelines while driving the end-to-end ELT process to ensure reliability and data quality across all layers. Collaboration with data product owners, analysts, and stakeholders to translate complex business needs into clean, reusable data assets is essential. You will utilize best practices on Snowflake to build scalable and robust dbt models and integrate dbt workflows with orchestration tools like Azure Data Factory, Apache Airflow, or dbt Cloud for robust monitoring and alerting. Supporting CI/CD implementation for dbt deployments using tools like GitHub Actions, Azure DevOps, or similar will also be part of your responsibilities. If you are looking for a challenging opportunity to leverage your expertise in dbt, Snowflake, and Azure cloud platforms to drive digital transformation and deliver impactful data solutions, this role is perfect for you.,
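
One common way to implement the dbt CI/CD step mentioned above is a "slim CI" job that builds only the models changed relative to the last production run. The sketch below is an illustration driven from Python (for example inside a GitHub Actions or Azure DevOps step); the artifact directory and target name are assumptions.

    # Sketch of a "slim CI" step: build only dbt models modified since the prod run.
    import subprocess

    def run(cmd: list[str]) -> None:
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        run(["dbt", "deps"])
        # Compare against manifest.json from the last production run (assumed path)
        run(["dbt", "build",
             "--select", "state:modified+",
             "--state", "prod_artifacts/",
             "--target", "ci"])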

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Haryana

On-site

Genpact is a global professional services and solutions firm dedicated to delivering outcomes that shape the future. With a workforce of over 125,000 professionals spanning more than 30 countries, we are fueled by our innate curiosity, entrepreneurial agility, and commitment to creating lasting value for our clients. Our purpose, the relentless pursuit of a world that works better for people, drives us to serve and transform leading enterprises, including the Fortune Global 500, leveraging our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

We are currently seeking applications for the position of Principal Consultant - Databricks Lead Developer. As a Databricks Developer in this role, you will be tasked with solving cutting-edge real-world problems to meet both functional and non-functional requirements.

Responsibilities:
- Keep abreast of new and emerging technologies and assess their potential application for service offerings and products.
- Collaborate with architects and lead engineers to devise solutions that meet functional and non-functional requirements.
- Demonstrate proficiency in understanding relevant industry trends and standards.
- Showcase strong analytical and technical problem-solving skills.
- Possess experience in the Data Engineering domain.

Qualifications we are looking for:

Minimum qualifications:
- Bachelor's Degree or equivalency in CS, CE, CIS, IS, MIS, or an engineering discipline, or equivalent work experience.
- <<>> years of experience in IT.
- Familiarity with new and emerging technologies and their possible applications for service offerings and products.
- Collaboration with architects and lead engineers to develop solutions meeting functional and non-functional requirements.
- Understanding of industry trends and standards.
- Strong analytical and technical problem-solving abilities.
- Proficiency in either Python or Scala, preferably Python.
- Experience in the Data Engineering domain.

Preferred qualifications:
- Knowledge of Unity Catalog and basic governance.
- Understanding of Databricks SQL Endpoint.
- Experience with CI/CD for building Databricks job pipelines.
- Exposure to migration projects for building unified data platforms.
- Familiarity with DBT, Docker, and Kubernetes.

If you are a proactive individual with a passion for innovation and a strong commitment to continuous learning and upskilling, we invite you to apply for this exciting opportunity to join our team at Genpact.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a DevOps Engineer in Europe, you will be utilizing your expertise in Informatica PowerCenter and PowerExchange, Data Vault modelling, and Snowflake to ensure seamless ETL development. With a minimum of 3 years of experience in ETL development, especially with Informatica PowerCenter and Data Vault modelling, you will play a crucial role in the successful execution of projects. Your proficiency in DevOps and SAFe will further enhance your ability to streamline processes and deliver high-quality results. Additionally, your hands-on experience in a Scrum team, preferably as a Scrum Master or with the ambition to take on the Scrum Master role, will be instrumental in driving collaboration and efficiency within the team. If you are passionate about leveraging your skills in a dynamic environment and contributing to the success of projects, we encourage you to apply for this exciting opportunity.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As an experienced Software/Data Engineer with a passion for creating meaningful solutions, you will be joining a global team of innovators at a Siemens company. In this role, you will be responsible for developing data integration solutions using Java, Scala, and/or Python, with a focus on data and Business Intelligence (BI). Your primary responsibilities will include building data pipelines, data transformation, and data modeling to support various integration methods and information delivery techniques. To excel in this position, you should have a Bachelor's degree in an Engineering or Science discipline or equivalent experience, along with at least 5 years of software/data engineering experience. Additionally, you should have a minimum of 3 years of experience in a data and BI focused role. Proficiency in data integration development using languages such as Python, PySpark, and SparkSQL, as well as experience with relational databases and SQL optimization, are essential for this role. Experience with AWS-based data services technologies (e.g., Glue, RDS, Athena) and Snowflake CDW, along with familiarity with BI tools like PowerBI, will be beneficial. Your willingness to experiment with new technologies and adapt to agile development practices will be key to your success in this role. Join us in creating a brighter future where smarter infrastructure protects the environment and connects us all. Our culture is built on collaboration, support, and a commitment to helping each other grow both personally and professionally. If you are looking to make a positive impact and contribute to a more sustainable world, we invite you to explore how far your passion can take you with us.,
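
To give a flavour of the Python/PySpark data-integration work this role describes, here is a minimal sketch that reads raw files, aggregates them, and writes curated output. The S3 paths and column names are placeholders, not details from this listing.

    # Minimal PySpark sketch: read raw orders, roll them up by day, write curated output.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_daily_rollup").getOrCreate()

    orders = spark.read.parquet("s3://example-bucket/raw/orders/")   # assumed path

    daily = (orders
             .withColumn("order_date", F.to_date("order_ts"))
             .groupBy("order_date", "region")
             .agg(F.sum("amount").alias("total_amount"),
                  F.countDistinct("customer_id").alias("unique_customers")))

    daily.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://example-bucket/curated/orders_daily/")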

Posted 1 week ago

Apply

6.0 - 10.0 years

16 - 25 Lacs

Kochi

Work from Office

Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms.

Key Responsibilities:
- Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT
- Integrate and manage data from various sources using SAP Data Services
- Develop and maintain scalable data models, data marts, and data warehouses
- Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs
- Implement best practices in data governance, data lineage, and metadata management
- Monitor data quality, troubleshoot issues, and ensure data integrity
- Optimize Snowflake data warehouse performance (partitioning, caching, query tuning)
- Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins)
- Document architecture, data flows, and technical specifications

Posted 1 week ago

Apply

6.0 - 10.0 years

16 - 25 Lacs

Mysuru

Work from Office

Job Summary: We are looking for a highly skilled Senior Data Engineer with expertise in Snowflake, DBT (Data Build Tool), and SAP Data Services (SAP DS). The ideal candidate will be responsible for building scalable data pipelines, designing robust data models, and ensuring high data quality across enterprise platforms.

Key Responsibilities:
- Design, build, and optimize data pipelines and ETL/ELT workflows using Snowflake and DBT
- Integrate and manage data from various sources using SAP Data Services
- Develop and maintain scalable data models, data marts, and data warehouses
- Work closely with data analysts, business stakeholders, and BI teams to support reporting and analytics needs
- Implement best practices in data governance, data lineage, and metadata management
- Monitor data quality, troubleshoot issues, and ensure data integrity
- Optimize Snowflake data warehouse performance (partitioning, caching, query tuning)
- Automate data workflows and deploy DBT models with CI/CD tools (e.g., Git, Jenkins)
- Document architecture, data flows, and technical specifications

Posted 1 week ago

Apply

5.0 - 10.0 years

14 - 19 Lacs

Bengaluru

Remote

Role: Azure Specialist - DBT
Location: Bangalore
Mode: Remote

Education and Work Experience Requirements:
• Overall 5 to 9 years of experience in the IT industry, with a minimum of 6 years working on Data Engineering
• Translate complex business requirements into analytical SQL views using DBT
• Support data ingestion pipelines using Airflow and Data Factory
• Develop DBT macros to enable scalable and reusable code automation

Mandatory Skills:
• Strong experience with DBT (Data Build Tool), or strong SQL / relational DWH knowledge - must have
• Proficiency in SQL and a strong understanding of relational data warehouse concepts
• Hands-on experience with Databricks (primarily Databricks SQL) - good to have
• Familiarity with Apache Airflow and Azure Data Factory - nice to have
• Experience working with Snowflake - nice to have

Additional Information:
• Qualifications: BE, MS, M.Tech, or MCA
• Certifications: Azure Big Data, Databricks Certified Associate

Posted 1 week ago

Apply

10.0 - 16.0 years

15 - 30 Lacs

Chennai, Coimbatore, Bengaluru

Hybrid

Experience: 10+ Years
Work Location: Chennai, Bangalore, Coimbatore, Pune
Company: MNC
Notice Period: Immediate - 15 days
Role: Data Engineer

Job Description: Bachelor's degree in computer science, engineering, or a similar quantitative field. We are looking for a skilled Data Engineer to join our growing data team. The ideal candidate will be responsible for designing, building, and maintaining data pipelines and infrastructure to support data analytics and business intelligence needs. A strong foundation in cloud data platforms, data transformation tools, and programming is essential.

Key Responsibilities:
• Design and implement scalable data pipelines using Azure Data Lake and dbt.
• Ingest and transform data from various sources including databases, APIs, flat files, JSON, and XML.
• Work with Snowflake to optimize storage, processing, and query performance.
• Collaborate with analysts and business users to understand data requirements.
• Integrate workflows using Apache Airflow for orchestration.
• Support and contribute to the development of reporting and visualization tools using Power BI (good to have).
• Write clean, efficient, and maintainable code using Python.
• Ensure data quality, integrity, and governance across pipelines and platforms.

Must-Have Skills:
• Strong experience with Azure Data Lake.
• Proficient in dbt for data transformation and modeling.
• Excellent programming skills in Python.
• Experience working with data from diverse sources: databases, APIs, flat files, JSON, and XML.

Should-Have Skills:
• Good working knowledge of Snowflake, beyond basic querying.
• Familiarity with Apache Airflow for job orchestration and scheduling.

Interested candidates kindly share their resume at piyush.kumar@axiomsoftwaresolutions.com
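
For the "ingest from APIs, flat files, JSON, and XML" requirement above, a minimal Python sketch using only the standard library is shown below; the URL, file name, and field names are placeholders.

    # Sketch: flatten JSON from an API and XML from a file into records for loading.
    import json
    import urllib.request
    import xml.etree.ElementTree as ET

    def fetch_orders_from_api(url: str) -> list[dict]:
        with urllib.request.urlopen(url) as resp:
            payload = json.load(resp)
        # Keep only the fields downstream models need
        return [{"order_id": o["id"], "amount": o["amount"], "status": o["status"]}
                for o in payload.get("orders", [])]

    def parse_orders_from_xml(path: str) -> list[dict]:
        root = ET.parse(path).getroot()
        return [{"order_id": el.findtext("Id"), "amount": float(el.findtext("Amount") or 0)}
                for el in root.findall(".//Order")]

    if __name__ == "__main__":
        records = fetch_orders_from_api("https://api.example.com/orders")   # placeholder URL
        records += parse_orders_from_xml("orders.xml")                      # placeholder file
        print(f"prepared {len(records)} records for loading")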

Posted 1 week ago

Apply

1.0 - 7.0 years

0 Lacs

Hyderabad, Telangana

On-site

You have a fantastic opportunity to join as a Snowflake Data Engineering Technical Lead with a strong background in Snowflake, DBT, SQL, Python, and Data Warehousing. As a Technical Lead, you should have at least 7 years of experience in data engineering or related fields. Your expertise should include expert-level proficiency in SQL, along with a solid understanding of data modeling principles, including star and snowflake schemas.

Your role will require a minimum of 3 years of hands-on experience specifically with Snowflake, focusing on performance tuning, security, and warehouse management. Additionally, you should possess at least 1 year of experience in building modular and maintainable data transformations using DBT. Proficiency in Python for scripting, automation, and data manipulation is essential for this position. It would be beneficial to have familiarity with cloud platforms such as AWS, Azure, or GCP, as well as experience with orchestration tools like Airflow and DBT Cloud. A good understanding of data warehousing concepts is also necessary for this role.

Your responsibilities will include monitoring and enhancing data pipeline performance, cost, and reliability. You will be expected to provide mentorship and technical leadership to junior data engineers and analysts, ensuring data quality through rigorous testing, validation, and documentation practices. Additionally, you will play a key role in establishing data engineering standards and contributing to the overall data strategy.

While not mandatory, experience in Airflow, Informatica PowerCenter, MS SQL, and Oracle would be advantageous. A solid understanding of the Software Development Life Cycle (SDLC) and Agile methodologies is preferred. Effective communication with customers and the ability to produce daily status reports are vital aspects of this role. To excel in this position, you must possess excellent oral and written communication skills, work well within a team environment, and demonstrate proactive and adaptive behavior.

This role offers a unique opportunity to showcase your expertise in Snowflake data engineering and contribute to the advancement of data-driven initiatives within the organization.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Maharashtra

On-site

Blenheim Chalcot is a prominent venture builder with a track record of over 20 years in creating disruptive businesses across various sectors such as FinTech, EdTech, GovTech, Media, Sport, Charity, and more. The ventures developed by Blenheim Chalcot are all GenAI enabled, positioning them as some of the most innovative companies in the UK and globally.

The team at Blenheim Chalcot India plays a vital role in the growth and success of the organization. Since its establishment in 2009, Blenheim Chalcot India has been a launchpad for individuals looking to drive innovation and entrepreneurship. Driven by a mission to empower visionaries, Blenheim Chalcot India focuses on enabling individuals to lead, innovate, and create disruptive solutions. The organization offers a wide range of services to support new businesses, including technology, growth (marketing and sales), talent, HR, finance, legal, and tax services.

Fospha, a MarTech venture under Blenheim Chalcot, is experiencing rapid growth and is seeking energetic and motivated individuals to join their team. Fospha is a marketing measurement platform catering to eCommerce brands, having achieved product/market fit and garnered recognition as a market leader with significant growth and accolades.

**Key Responsibilities:**
- Lead and mentor a team of data engineers, fostering a collaborative culture focused on continuous improvement.
- Plan and execute data projects in alignment with business objectives and timelines.
- Provide technical guidance to the team, emphasizing best practices in data engineering.
- Implement and maintain ELT processes using scalable data pipelines and architecture.
- Collaborate with cross-functional teams to understand data requirements and deliver effective solutions.
- Ensure data integrity and quality across diverse data sources.
- Support data-driven decision-making by delivering clean, reliable, and timely data.
- Define high-quality data standards for Data Science and Analytics use-cases and contribute to shaping the data roadmap.
- Design, develop, and maintain data models for ML Engineers, Data Analysts, and Data Scientists.
- Conduct exploratory data analysis to identify patterns and trends.
- Identify opportunities for process enhancement and drive continuous improvement in data operations.
- Stay informed about industry trends, technologies, and best practices in data engineering.

**About You:**
The ideal candidate will have a proven track record of delivering results in a fast-paced environment, demonstrating comfort with change and uncertainty.

**Required:**
- Prior experience in leading a team with final tech sign-off responsibilities.
- Proficiency in PostgreSQL, SQL technologies, and Python programming.
- Understanding of data architecture, pipelines, ELT flows, and agile methodologies.

**Preferred:**
- Experience with dbt (Data Build Tool) and pipeline technologies within AWS.
- Knowledge of data modeling, statistics, and related tools.

**Education Qualifications:**
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

**What We Offer:**
- Opportunity to be part of the World's Leading Digital Venture Builder.
- Exposure to diverse talent within BC and opportunities for continuous learning.
- Work 4 days a week from office.
- Engagement with challenges in a culture that supports learning and development.
- Fun and open atmosphere, enriched with a passion for cricket.
- Generous annual leave, maternity and paternity leaves, and private medical benefits.

At Blenheim Chalcot, we champion diversity, meritocracy, and a culture of inclusion where individual capabilities and potential are highly valued. Our commitment to recruiting, developing, and advancing individuals based on their skills and talent underscores our belief in the diversity, agility, generosity, and curiosity of our people as the driving force behind our organization's success.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

You should have proficiency in SQL, Python, and/or other relevant programming languages. Experience with DBT and similar data transformation platforms is required, along with experience with Airflow or similar data orchestration tools. Familiarity with data warehouse solutions such as Snowflake and Redshift is essential. You must demonstrate a proven ability to work autonomously and manage your workload effectively, as well as proven experience working with cross-functional teams. Familiarity with iPaaS solutions like Workato, Celigo, and MuleSoft is a plus, as is experience with enterprise business applications such as Salesforce, NetSuite, SuiteProjects Pro, and Jira. Knowledge of cloud platforms like AWS, GCP, and Azure and related services is advantageous. The ideal candidate should have 8+ years of overall experience, with 3-5 years specifically in data engineering or a related field.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

As a Product Owner at PitchBook, you will collaborate with key stakeholders and teams to deliver the department's product roadmap. Your role will involve aligning engineering activities with product objectives for new product capabilities, data, and scaling improvements, focusing on AI/ML data extraction, collection, and enrichment capabilities. Working within the Data Technology team, you will develop solutions to support and accelerate data operations processes, impacting core workflows of data capture, ingestion, and hygiene across private and public capital markets datasets. You will work on AI/ML Collections Data Extraction & Enrichment teams, closely integrated with Engineering and Product Management to ensure alignment with the Product Roadmap. Your responsibilities will include being a domain expert for your product area(s), defining backlog priorities, managing feature delivery according to the Product Roadmap, validating requirements, creating user stories and acceptance criteria, communicating with stakeholders, defining metrics for team performance, managing risks or blockers, and supporting AI/ML collections work. To be successful in this role, you should have a Bachelor's degree in Information Systems, Engineering, Data Science, Business Administration, or a related field, along with 3+ years of experience as a Product Manager or Product Owner within AI/ML or enterprise SaaS domains. You should have a proven track record of shipping high-impact data pipeline or data collection-related tools and services, familiarity with AI/ML workflows, experience collaborating with globally distributed teams, excellent communication skills, a bias for action, and attention to detail. Preferred qualifications include direct experience with applied AI/ML Engineering services, a background in fintech, experience with data quality measurements and ML model evaluation, exposure to cloud-based ML infrastructure and data pipeline orchestration tools, and certifications related to Agile Product Ownership / Product Management. This position offers a standard office setting with the use of a PC and phone throughout the day, collaboration with stakeholders in Seattle and New York, and limited corporate travel may be required. Morningstar's hybrid work environment allows for remote work and in-person collaboration, with benefits to enhance flexibility. Join us at PitchBook to engage meaningfully with global colleagues and contribute to our values and vision.,

Posted 2 weeks ago

Apply

3.0 - 9.0 years

0 Lacs

Kolhapur, Maharashtra

On-site

As a Senior Snowflake Data Engineer, you will be responsible for leveraging your expertise in Snowflake, SQL/PLSQL, DBT (or another ETL tool), Python, and DevOps to design, develop, and maintain efficient data pipelines and processes. Your role will involve working with large datasets, optimizing queries, and ensuring data quality and integrity. You will be expected to collaborate with cross-functional teams to understand business requirements, translate them into technical solutions, and implement data models that support analytical and reporting needs. Additionally, your experience in DevOps will be valuable in ensuring the scalability, reliability, and performance of data systems.

The ideal candidate will have 3-9 years of experience in data engineering, with a strong foundation in Snowflake and proficiency in SQL/PLSQL. Knowledge of ETL tools like DBT, programming skills in Python, and an understanding of DevOps principles are essential for success in this position.

This position is based in Kolhapur and offers the opportunity to work in a dynamic environment where you can contribute to the organization's data-driven decision-making processes. If you are passionate about data engineering and enjoy working with cutting-edge technologies, we welcome you to apply for this exciting opportunity.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

You are an experienced Data Engineer who will be responsible for leading the end-to-end migration of the data analytics and reporting environment to Looker at Frequence. Your role will involve designing scalable data models, translating business logic into LookML, and empowering teams across the organization with self-service analytics and actionable insights. You will collaborate closely with stakeholders from data, engineering, and business teams to ensure a smooth transition to Looker and establish best practices for data modeling, governance, and dashboard development.

Your responsibilities will include:
- Leading the migration of existing BI tools, dashboards, and reporting infrastructure to Looker
- Designing, developing, and maintaining scalable LookML data models, dimensions, measures, and explores
- Creating intuitive, actionable, and visually compelling Looker dashboards and reports
- Collaborating with data engineers and analysts to ensure consistency across data sources
- Translating business requirements into technical specifications and LookML implementations
- Optimizing SQL queries and LookML models for performance and scalability
- Implementing and managing Looker's security settings, permissions, and user roles in alignment with data governance standards
- Troubleshooting issues and supporting end users in their Looker adoption
- Maintaining version control of LookML projects using Git
- Advocating for best practices in BI development, testing, and documentation

You should have:
- Proven experience with Looker and deep expertise in LookML syntax and functionality
- Hands-on experience building and maintaining LookML data models, explores, dimensions, and measures
- Strong SQL skills, including complex joins, aggregations, and performance tuning
- Experience working with semantic layers and data modeling for analytics
- Solid understanding of data analysis and visualization best practices
- Ability to create clear, concise, and impactful dashboards and visualizations
- Strong problem-solving skills and attention to detail in debugging Looker models and queries
- Familiarity with Looker's security features and data governance principles
- Experience using version control systems, preferably Git
- Excellent communication skills and the ability to work cross-functionally
- Familiarity with modern data warehousing platforms (e.g., Snowflake, BigQuery, Redshift)
- Experience migrating from legacy BI tools (e.g., Tableau, Power BI, etc.) to Looker
- Experience working in agile data teams and managing BI projects
- Familiarity with dbt or other data transformation frameworks

At Frequence, you will be part of a dynamic, diverse, innovative, and friendly work environment that values creativity and collaboration. The company embraces differences and believes they drive creativity and innovation. The team consists of individuals from varied backgrounds who are all trail-blazing team players, thinking big and aiming to make a significant impact.

Please note that third-party recruiting agencies will not be involved in this search.

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
