
37 ETL/ELT Pipelines Jobs - Page 2

JobPe aggregates these listings for easy access; applications are submitted directly on the original job portal.

5.0 - 9.0 years

0 Lacs

Maharashtra

On-site

About Forma.ai:
Forma.ai is a Series B startup that is revolutionizing how sales compensation is designed, managed, and optimized. Handling billions in annual managed commissions for market leaders like Edmentum, Stryker, and Autodesk, our growth is fueled by a deep passion for fundamentally changing and shaping how companies use sales intelligence to drive business strategy. We are looking for equally driven individuals who are excited about creating something big!

What You'll Be Doing:
Reporting to the VP of Analytics & Data Science, the Analytics Manager will be involved in new customer implementations and will help turn business requirements into code. Working with diverse data from customers across various industries, you will build and optimize code for big data pipelines, architectures, and data sets. Collaborating with stakeholders within the business to identify areas where additional value can be created, while managing competing priorities, will be a key responsibility. Additionally, you will manage a team of Analytics Engineers and guide their personal development.

What We're Looking For:
- 5+ years of experience in Python development, SQL, and Databricks/PySpark. API experience is a bonus.
- Strong background in building automated data ingestion and ETL/ELT pipelines.
- Excellent communication skills across multiple disciplines and teams.
- Deep understanding of building end-to-end customer-facing products with a strong sense of customer empathy.
- Ability to thrive in a detail-oriented, collaborative environment with Product, Engineering, and Analytics teams.
- Eagerness to enhance existing skills and acquire new ones, with a genuine passion for data.
- Excitement to experiment with new technologies that improve the efficiency of current data ingestion processes.

Technologies we use:
- Backend: Python, Django, Postgres
- Infrastructure: AWS, Databricks, GitHub Actions

Our Values:
- Work well, together: We are real individuals with personal responsibilities, and we believe in working together as a cohesive team.
- Be precise. Be relentless: Complacency is not in our vocabulary. We continuously strive for improvement and push each other to set new goals.
- Love our tech. Love our customers: Our platform addresses a complex problem in an underserved market. While not everyone is customer-facing, we are all dedicated to our customers.

Our Commitment To You:
We understand that applying for a new role takes effort, and we encourage you to apply even if your experience does not precisely match the job description. There are diverse paths to a successful career, and we look forward to hearing about yours. We appreciate all applicants for their interest.
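
For context only, here is a minimal sketch of the kind of automated ingestion and ETL/ELT step this posting describes, written with PySpark on Databricks. The source path, column names, and target table are hypothetical illustrations, not details taken from the role.

```python
# Illustrative ELT step on Databricks/PySpark: ingest a raw CSV drop,
# apply light cleanup, and persist to a Delta table for analytics.
# All paths, columns, and table names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("commission-ingest").getOrCreate()

raw = (
    spark.read.option("header", True)
    .csv("/mnt/raw/commissions/2024/")  # hypothetical landing path
)

cleaned = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("ingested_at", F.current_timestamp())
       .dropDuplicates(["transaction_id"])  # hypothetical business key
)

# Writing to Delta keeps the table queryable and incrementally updatable.
cleaned.write.format("delta").mode("append").saveAsTable("analytics.commissions_bronze")
```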

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Lead Data Engineer specializing in Databricks, you will play a crucial role in designing, developing, and optimizing our next-generation data platform. Your responsibilities will include leading a team of data engineers, offering technical guidance and mentorship, and ensuring the scalability and high performance of data solutions.

You will lead the design, development, and implementation of scalable and reliable data pipelines using Databricks, Spark, and other relevant technologies. You will also define and enforce data engineering best practices, coding standards, and architectural patterns, provide technical guidance and mentorship to junior and mid-level data engineers, conduct code reviews, and ensure the quality, performance, and maintainability of data solutions.

Your Databricks expertise will be essential as you architect and implement data solutions on the Databricks platform, including Databricks Lakehouse, Delta Lake, and Unity Catalog. Day-to-day work includes optimizing Spark workloads for performance and cost efficiency, developing and managing Databricks notebooks, jobs, and workflows, and making proficient use of features such as Delta Live Tables (DLT), Photon, and SQL Analytics.

On the pipeline development and operations side, you will develop, test, and deploy robust ETL/ELT pipelines for data ingestion, transformation, and loading from sources such as relational databases, APIs, and streaming data. You will also implement monitoring, alerting, and logging for data pipelines to ensure operational excellence, and troubleshoot and resolve complex data-related issues.

Collaboration and communication are crucial in this role: you will work closely with cross-functional teams, including product managers, data scientists, and software engineers, and communicate complex technical concepts clearly to both technical and non-technical stakeholders. Staying up to date with industry trends and emerging technologies in data engineering and Databricks is also expected.

Key skills required for this role include extensive hands-on experience with the Databricks platform (Databricks Workspace, Spark on Databricks, Delta Lake, and Unity Catalog), strong proficiency in optimizing Spark jobs and a sound understanding of Spark architecture, experience with Delta Live Tables (DLT), Photon, and Databricks SQL Analytics, and a deep understanding of data warehousing concepts, dimensional modeling, and data lake architectures.
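
For a rough sense of the Delta Live Tables (DLT) feature named above, a declarative pipeline step in Python might look like the sketch below. The dataset names, storage path, and expectation rule are hypothetical, and the `dlt` module is only available when the code runs inside a Databricks DLT pipeline.

```python
# Illustrative Delta Live Tables definitions: a bronze ingest table plus a
# silver table with an expectation. Runs only inside a Databricks DLT pipeline,
# where the `dlt` module and the `spark` session are provided.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders landed from cloud storage (hypothetical source).")
def orders_bronze():
    return (
        spark.readStream.format("cloudFiles")      # Databricks Auto Loader
        .option("cloudFiles.format", "json")
        .load("/mnt/raw/orders/")                  # hypothetical path
    )

@dlt.table(comment="Cleaned orders with a basic data-quality expectation.")
@dlt.expect_or_drop("valid_amount", "amount > 0")
def orders_silver():
    return (
        dlt.read_stream("orders_bronze")
        .withColumn("order_date", F.to_date("order_ts"))
    )
```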

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be joining our data engineering team as an experienced Python + Databricks Developer. Your role will involve designing, developing, and maintaining scalable data pipelines using Databricks and Apache Spark, and writing efficient Python code for data transformation, cleansing, and analytics. Collaborating with data scientists, analysts, and engineers to understand data needs and deliver high-performance solutions is a key part of this role. Additionally, you will optimize and tune data pipelines for performance and cost efficiency, implement data validation, quality checks, and monitoring, and work with cloud platforms, preferably Azure or AWS, to manage data workflows. Ensuring best practices in code quality, version control, and documentation is essential.

To be successful in this position, you should have at least 5 years of professional experience in Python development and 3 years of hands-on experience with Databricks, including notebooks, clusters, Delta Lake, and job orchestration. Strong experience with Spark, particularly PySpark, is required, along with proficiency in large-scale data processing and ETL/ELT pipelines and a solid understanding of data warehousing concepts and SQL. Experience with Azure Data Factory, AWS Glue, or other data orchestration tools would be advantageous, as would familiarity with version control tools like Git. Excellent problem-solving and communication skills are important for this role.
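
As an illustration of the data validation and quality checks mentioned above, a simple PySpark quality gate could be sketched roughly as follows; the table, columns, and thresholds are hypothetical examples.

```python
# Illustrative data-quality gate for a pipeline step: fail fast when the
# table is empty, null rates are too high, or keys are duplicated.
# Table, column names, and thresholds are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.table("analytics.customers_silver")  # hypothetical table

total = df.count()
if total == 0:
    raise ValueError("customers_silver is empty; the upstream load may have failed")

null_emails = df.filter(F.col("email").isNull()).count()
dup_keys = total - df.dropDuplicates(["customer_id"]).count()

if null_emails / total > 0.01:
    raise ValueError(f"Null email rate too high: {null_emails}/{total}")
if dup_keys > 0:
    raise ValueError(f"Found {dup_keys} duplicate customer_id values")

print("Quality checks passed.")
```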

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a talented and driven Backend Engineer with a solid understanding of data engineering workflows, you will be responsible for designing robust backend services and contributing to the development and maintenance of high-performance data pipelines. Your expertise in Python (preferably with FastAPI) and working knowledge of both SQL and NoSQL databases will be crucial in this role. You will have the unique opportunity to work at the intersection of API development and data systems, helping to build the infrastructure that powers our data-driven applications.

Key Responsibilities:
- Design, develop, and maintain backend services using Python and FastAPI
- Build and consume RESTful APIs for internal tools and external integrations
- Work with SQL and NoSQL databases for efficient data storage and modeling
- Develop and manage ETL/ELT data pipelines to handle structured and unstructured data
- Collaborate with cross-functional teams to integrate with third-party APIs and data sources
- Ensure the scalability, performance, and reliability of backend systems
- Participate in code reviews, architectural discussions, and technical design

Required Skills:
- Proficiency in Python, with experience in FastAPI or similar frameworks
- Strong understanding of REST API design and best practices
- Experience working with relational (PostgreSQL, MySQL) and non-relational (MongoDB, Redis) databases
- Hands-on experience in designing and managing ETL/ELT pipelines
- Familiarity with data engineering concepts such as data modeling, transformations, and data integration
- Solid understanding of software engineering principles and version control (Git)

Preferred Qualifications:
- Exposure to cloud platforms like AWS, GCP, or Azure
- Familiarity with containerization tools (Docker, Kubernetes)
- Experience working in agile teams and CI/CD environments

If you are passionate about building scalable systems and enabling data-driven applications, we would love to hear from you.
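
A minimal sketch of the kind of FastAPI backend service described above might look like this; the endpoint paths, model fields, and in-memory store are hypothetical stand-ins for a real API backed by SQL or NoSQL storage.

```python
# Minimal illustrative FastAPI service with one write and one read endpoint.
# Run with: uvicorn main:app --reload   (module name is hypothetical)
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Example data service")

class Record(BaseModel):
    record_id: int
    name: str
    value: float

# In-memory stand-in for a real SQL/NoSQL data store.
_store: dict[int, Record] = {}

@app.post("/records", status_code=201)
def create_record(record: Record) -> Record:
    _store[record.record_id] = record
    return record

@app.get("/records/{record_id}")
def read_record(record_id: int) -> Record:
    if record_id not in _store:
        raise HTTPException(status_code=404, detail="Record not found")
    return _store[record_id]
```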

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be responsible for designing and implementing scalable Snowflake data warehouse architectures, including schema modeling and data partitioning. You will lead or support data migration projects from on-premise or legacy cloud platforms to Snowflake, and develop ETL/ELT pipelines and data integrations using tools such as DBT, Fivetran, Informatica, and Airflow. Defining and implementing best practices for data modeling, query optimization, and storage efficiency in Snowflake is part of the role, as is collaborating with cross-functional teams, including data engineers, analysts, BI developers, and stakeholders, to align architectural solutions.

You will ensure data governance, compliance, and security by implementing RBAC, masking policies, and access control within Snowflake, and work with DevOps teams to enable CI/CD pipelines, monitoring, and infrastructure as code for Snowflake environments. Optimizing resource utilization, monitoring workloads, and managing the cost-effectiveness of the platform will also be under your purview, along with staying updated on Snowflake features, cloud vendor offerings, and best practices.

Qualifications & Skills:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- X years of experience in data engineering, data warehousing, or analytics architecture.
- 3+ years of hands-on experience in Snowflake architecture, development, and administration.
- Strong knowledge of cloud platforms (AWS, Azure, or GCP).
- Solid understanding of SQL, data modeling, and data transformation principles.
- Experience with ETL/ELT tools, orchestration frameworks, and data integration.
- Familiarity with data privacy regulations (GDPR, HIPAA, etc.) and compliance.

Additional Qualifications:
- Snowflake certification (SnowPro Core / Advanced).
- Experience in building data lakes, data mesh architectures, or streaming data platforms.
- Familiarity with tools like Power BI, Tableau, or Looker for downstream analytics.
- Experience with Agile delivery models and CI/CD workflows.
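
To illustrate the RBAC and masking-policy work mentioned above, a sketch using the Snowflake Python connector might look like the following. The role, warehouse, database, table, and column names are hypothetical placeholders, and masking policies assume a Snowflake edition that supports them (Enterprise or higher).

```python
# Illustrative Snowflake governance setup: a read-only role plus a masking
# policy on an email column. All identifiers and credentials are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="***",
    warehouse="ANALYTICS_WH", database="ANALYTICS", schema="PUBLIC",
)

statements = [
    "CREATE ROLE IF NOT EXISTS ANALYST_READER",
    "GRANT USAGE ON DATABASE ANALYTICS TO ROLE ANALYST_READER",
    "GRANT USAGE ON SCHEMA ANALYTICS.PUBLIC TO ROLE ANALYST_READER",
    "GRANT SELECT ON ALL TABLES IN SCHEMA ANALYTICS.PUBLIC TO ROLE ANALYST_READER",
    # Masking policies require a Snowflake edition that supports them.
    """
    CREATE MASKING POLICY IF NOT EXISTS email_mask AS (val STRING)
    RETURNS STRING ->
      CASE WHEN CURRENT_ROLE() IN ('COMPLIANCE_FULL') THEN val
           ELSE '*** MASKED ***' END
    """,
    "ALTER TABLE CUSTOMERS MODIFY COLUMN EMAIL SET MASKING POLICY email_mask",
]

with conn.cursor() as cur:
    for stmt in statements:
        cur.execute(stmt)
conn.close()
```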

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Bhubaneswar

On-site

The Apache Superset Data Engineer plays a key role in designing, developing, and maintaining scalable data pipelines and analytics infrastructure, with a primary emphasis on data visualization and dashboarding using Apache Superset. This role sits at the intersection of data engineering and business intelligence, enabling stakeholders to access accurate, actionable insights through intuitive dashboards and reports.

Core Responsibilities:
- Create, customize, and maintain interactive dashboards in Apache Superset to support KPIs, experimentation, and business insights.
- Work closely with analysts, BI teams, and business users to gather requirements and deliver effective Superset-based visualizations.
- Perform data validation, feature engineering, and exploratory data analysis to ensure data accuracy and integrity.
- Analyze A/B test results and deliver insights that inform business strategies.
- Establish and maintain standards for statistical testing, data validation, and analytical workflows.
- Integrate Superset with various database systems (e.g., MySQL, PostgreSQL) and manage the associated drivers and connections.
- Ensure Superset deployments are secure, scalable, and high-performing.
- Clearly communicate findings and recommendations to both technical and non-technical stakeholders.

Required Skills:
- Proven expertise in building dashboards and visualizations using Apache Superset.
- Strong command of SQL and experience working with relational databases like MySQL or PostgreSQL.
- Proficiency in Python (or Java) for data manipulation and workflow automation.
- Solid understanding of data modeling, ETL/ELT pipelines, and data warehousing principles.
- Excellent problem-solving skills and a keen eye for data quality and detail.
- Strong communication skills, with the ability to simplify complex technical concepts for non-technical audiences.
- Nice to have: familiarity with cloud platforms (AWS, ECS).

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3+ years of relevant experience.
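
As a small illustration of the database-integration side of this role: Superset connects to sources such as PostgreSQL through a SQLAlchemy URI, and a connection can be sanity-checked outside Superset with a sketch like the one below. The URI, credentials, and table name are hypothetical placeholders, and a PostgreSQL driver such as psycopg2 must be installed.

```python
# Illustrative connectivity check for a PostgreSQL source using the same kind
# of SQLAlchemy URI that a Superset database connection is configured with.
# URI, credentials, and table name are placeholders; requires psycopg2.
from sqlalchemy import create_engine, text

SQLALCHEMY_URI = "postgresql://superset_ro:***@db.example.com:5432/analytics"

engine = create_engine(SQLALCHEMY_URI)
with engine.connect() as conn:
    row_count = conn.execute(text("SELECT COUNT(*) FROM orders")).scalar()
    print(f"Rows visible to the dashboard user in 'orders': {row_count}")
```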

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Bhubaneswar

On-site

As an Apache Superset Data Engineer, you will play a crucial role in the design, development, and maintenance of scalable data pipelines and analytics infrastructure. Your primary focus will be on data visualization and dashboarding using Apache Superset, bridging the gap between data engineering and business intelligence. By creating intuitive dashboards and reports, you will empower stakeholders to access accurate and actionable insights efficiently.

Your responsibilities will include creating, customizing, and maintaining interactive dashboards in Apache Superset to support key performance indicators (KPIs), experimentation, and business insights. Collaboration with analysts, BI teams, and business users to gather requirements and deliver effective visualizations will be essential. Additionally, you will conduct data validation, feature engineering, and exploratory data analysis to ensure data accuracy and integrity. Analyzing A/B test results and providing insights to inform business strategies will be part of your role.

You will be responsible for establishing and maintaining standards for statistical testing, data validation, and analytical workflows. Integrating Superset with various database systems such as MySQL or PostgreSQL and managing the associated drivers and connections will be crucial. Ensuring secure, scalable, and high-performing Superset deployments is also a key aspect of this position. Communication is vital in this role, as you will need to clearly convey findings and recommendations to both technical and non-technical stakeholders.

Required skills include proven expertise in building dashboards and visualizations using Apache Superset, a strong command of SQL, experience with relational databases, proficiency in Python (or Java) for data manipulation, and a solid understanding of data modeling, ETL/ELT pipelines, and data warehousing principles. Problem-solving skills, attention to data quality and detail, and the ability to simplify complex technical concepts for non-technical audiences are essential. Nice-to-have qualifications include familiarity with cloud platforms like AWS and ECS. To qualify for this position, you should hold a Bachelor's degree in Computer Science, Engineering, or a related field, along with a minimum of 3 years of relevant experience.

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a talented and driven Backend Engineer with a solid understanding of data engineering workflows, you will play a crucial role in our team. Your expertise in Python, preferably with FastAPI, along with knowledge of SQL and NoSQL databases, will be instrumental in designing robust backend services and contributing to the development of high-performance data pipelines. You will have the unique opportunity to work at the intersection of API development and data systems, where you will help build the infrastructure supporting our data-driven applications.

Your responsibilities will include designing, developing, and maintaining backend services, building and consuming RESTful APIs, and working with both SQL and NoSQL databases for efficient data storage and modeling. Additionally, you will be involved in developing and managing ETL/ELT data pipelines, collaborating with cross-functional teams to integrate third-party APIs and data sources, and ensuring the scalability, performance, and reliability of backend systems. Your participation in code reviews, architectural discussions, and technical design will be invaluable to the team.

To excel in this role, you should possess proficiency in Python, experience in FastAPI or similar frameworks, a strong understanding of REST API design and best practices, and hands-on experience with relational and non-relational databases. Familiarity with data engineering concepts, software engineering principles, and version control is also essential. Preferred qualifications include exposure to cloud platforms like AWS, GCP, or Azure, familiarity with containerization tools such as Docker and Kubernetes, and experience working in agile teams and CI/CD environments. If you are passionate about building scalable systems and enabling data-driven applications, we are excited to hear from you.

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Data Engineer specializing in Snowflake architecture, you will be responsible for designing and implementing scalable data warehouse architectures, including schema modeling and data partitioning. Your role will involve leading or supporting data migration projects to Snowflake from on-premise or legacy cloud platforms. You will be developing ETL/ELT pipelines and integrating data using various tools such as DBT, Fivetran, Informatica, and Airflow. It will be essential to define and implement best practices for data modeling, query optimization, and storage efficiency within Snowflake.

Collaboration with cross-functional teams, including data engineers, analysts, BI developers, and stakeholders, will be crucial to align architectural solutions effectively. Ensuring data governance, compliance, and security by implementing RBAC, masking policies, and access control within Snowflake will also be part of your responsibilities. Working closely with DevOps teams to enable CI/CD pipelines, monitoring, and infrastructure as code for Snowflake environments is essential. Your role will involve optimizing resource utilization, monitoring workloads, and managing the cost-effectiveness of the platform. Staying updated with Snowflake features, cloud vendor offerings, and best practices will be necessary to drive continuous improvement in data architecture.

Qualifications & Skills:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- 5+ years of experience in data engineering, data warehousing, or analytics architecture.
- 3+ years of hands-on experience in Snowflake architecture, development, and administration.
- Strong knowledge of cloud platforms such as AWS, Azure, or GCP.
- Solid understanding of SQL, data modeling, and data transformation principles.
- Experience with ETL/ELT tools, orchestration frameworks, and data integration.
- Familiarity with data privacy regulations (GDPR, HIPAA, etc.) and compliance.

Additional Qualifications:
- Snowflake certification (SnowPro Core / Advanced).
- Experience in building data lakes, data mesh architectures, or streaming data platforms.
- Familiarity with tools like Power BI, Tableau, or Looker for downstream analytics.
- Experience with Agile delivery models and CI/CD workflows.

This role offers an exciting opportunity to work on cutting-edge data architecture projects and collaborate with diverse teams to drive impactful business outcomes.

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

You are an experienced Python + Databricks Developer who will be a valuable addition to our data engineering team. Your expertise in Python programming, data processing, and hands-on experience with Databricks will be instrumental in building and optimizing data pipelines.

Your key responsibilities will include designing, developing, and maintaining scalable data pipelines using Databricks and Apache Spark. You will be expected to write efficient Python code for data transformation, cleansing, and analytics. Collaboration with data scientists, analysts, and engineers is essential to understand data needs and deliver high-performance solutions. Optimizing and tuning data pipelines for performance and cost efficiency, implementing data validation, quality checks, and monitoring, as well as working with cloud platforms (preferably Azure or AWS) to manage data workflows are crucial aspects of the role. Ensuring best practices in code quality, version control, and documentation will also be part of your responsibilities.

To be successful in this role, you should have 5+ years of professional experience in Python development and at least 3 years of hands-on experience with Databricks, including notebooks, clusters, Delta Lake, and job orchestration. Strong experience with Spark, especially PySpark, is required. Proficiency in working with large-scale data processing and ETL/ELT pipelines, a solid understanding of data warehousing concepts and SQL, as well as experience with Azure Data Factory, AWS Glue, or other data orchestration tools will be beneficial. Familiarity with version control tools like Git and excellent problem-solving and communication skills are also essential.

If you are looking to leverage your Python and Databricks expertise to contribute to building robust data pipelines and optimizing data workflows, this role is a great fit for you.

Posted 1 month ago

Apply

3.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

We are seeking a highly skilled and experienced Snowflake Architect to take charge of designing, developing, and deploying enterprise-grade cloud data solutions. The ideal candidate has a robust background in data architecture, cloud data platforms, and Snowflake implementation; hands-on experience with end-to-end data pipeline and data warehouse design is essential for this role.

Your responsibilities will include leading the architecture, design, and implementation of scalable Snowflake-based data warehousing solutions, and defining data modeling standards, best practices, and governance frameworks. Designing and optimizing ETL/ELT pipelines using tools such as Snowpipe, Azure Data Factory, Informatica, or DBT will be a key aspect of your role, as will collaborating with stakeholders to understand data requirements and translating them into robust architectural solutions. You will also implement data security, privacy, and role-based access controls within Snowflake, guide development teams on performance tuning, query optimization, and cost management, ensure high availability, fault tolerance, and compliance across data platforms, and mentor developers and junior architects on Snowflake capabilities.

In terms of skills and experience, we are looking for candidates with 8+ years of overall experience in data engineering, BI, or data architecture, and a minimum of 3 years of hands-on Snowflake experience. Expertise in Snowflake architecture, data sharing, virtual warehouses, clustering, and performance optimization is highly desirable, along with strong proficiency in SQL, Python, and cloud data services (e.g., AWS, Azure, or GCP). Hands-on experience with ETL/ELT tools like ADF, Informatica, Talend, DBT, or Matillion is also necessary, and a good understanding of data lakes, data mesh, and modern data stack principles is preferred. Experience with CI/CD for data pipelines, DevOps, and data quality frameworks is a plus, and solid knowledge of data governance, metadata management, and cataloging is beneficial.

Preferred qualifications include a Snowflake certification (e.g., SnowPro Core/Advanced Architect), familiarity with Apache Airflow, Kafka, or event-driven data ingestion, knowledge of data visualization tools such as Power BI, Tableau, or Looker, and experience in healthcare, BFSI, or retail domain projects. If you meet these requirements and are ready to take on a challenging and rewarding role as a Snowflake Architect, we encourage you to apply.
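
For a rough sense of the Snowpipe-based ingestion mentioned above, a continuous-loading setup is typically declared in SQL along the following lines, shown here executed through the Snowflake Python connector. The stage, pipe, table, bucket, and connection values are hypothetical placeholders; a production setup would also configure a storage integration and cloud event notifications for AUTO_INGEST.

```python
# Illustrative Snowpipe setup for continuous ingestion from cloud storage.
# Stage, pipe, table, bucket, and connection values are hypothetical; a real
# setup would also define a storage integration and event notifications.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="***",
    database="RAW", schema="EVENTS", warehouse="LOAD_WH",
)

conn.execute_string("""
    CREATE TABLE IF NOT EXISTS raw_events (payload VARIANT, loaded_at TIMESTAMP_NTZ);

    CREATE STAGE IF NOT EXISTS events_stage
      URL = 's3://example-bucket/events/'
      FILE_FORMAT = (TYPE = 'JSON');

    CREATE PIPE IF NOT EXISTS events_pipe AUTO_INGEST = TRUE AS
      COPY INTO raw_events (payload, loaded_at)
      FROM (SELECT $1, CURRENT_TIMESTAMP() FROM @events_stage);
""")
conn.close()
```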

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Microsoft Azure Engineer in Bangalore (Hybrid) with 5+ years of experience, you will be responsible for building and optimizing cloud solutions on Microsoft Azure. Your expertise in Azure Synapse, Azure Data Factory, and related cloud technologies will be crucial in ensuring scalability, security, and automation. Your key responsibilities will include:

Cloud Data Engineering & Processing:
- Designing and optimizing ETL/ELT pipelines using Azure Synapse and Data Factory.
- Developing and managing data pipelines, data lakes, and workflows within the Azure ecosystem.
- Implementing data security, governance, and compliance best practices.

Backend & Application Development:
- Developing scalable cloud applications using Azure Functions, Service Bus, and Event Grid.
- Building RESTful APIs and microservices for cloud-based data processing.
- Integrating Azure services to enhance data accessibility and processing.

Cloud & DevOps:
- Deploying and managing solutions using Azure DevOps, CI/CD, and Infrastructure as Code (Terraform, Bicep).
- Optimizing cloud costs and ensuring high availability of data platforms.
- Implementing logging, monitoring, and security best practices.

Required Skills & Experience:
- 5+ years of experience in Azure cloud engineering and development.
- Strong expertise in Azure Synapse, Data Factory, and Microsoft Fabric.
- Proficiency in CI/CD, Azure DevOps, and related tools.
- Experience with Infrastructure as Code (Terraform, Bicep).
- Hands-on knowledge of Azure Functions, Service Bus, Event Grid, and API development.
- Familiarity with SQL, T-SQL, Cosmos DB, and relational databases.
- Strong experience in data security and compliance.

Preferred Skills (Good to Have):
- Knowledge of Databricks, Python, and ML models for data processing.
- Familiarity with event-driven architectures (Kafka, Event Hubs).
- Azure certifications (e.g., DP-203, AZ-204).

Apply now if you are ready to leverage your expertise in Microsoft Azure to contribute to building robust cloud solutions and optimizing data processing workflows.
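
As a minimal sketch of the Azure Functions development mentioned above, an HTTP-triggered function using the Python v2 programming model might look like this; the route, payload handling, and downstream action are hypothetical.

```python
# Illustrative HTTP-triggered Azure Function (Python v2 programming model)
# that accepts a JSON payload; the downstream action is only logged here.
import json
import logging

import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)

@app.route(route="ingest", methods=["POST"])
def ingest(req: func.HttpRequest) -> func.HttpResponse:
    try:
        payload = req.get_json()
    except ValueError:
        return func.HttpResponse("Body must be valid JSON", status_code=400)

    # A real service might enqueue a Service Bus message or start a
    # Data Factory / Synapse pipeline run at this point.
    logging.info("Received %d top-level fields for ingestion", len(payload))
    return func.HttpResponse(
        json.dumps({"accepted": True}),
        status_code=202,
        mimetype="application/json",
    )
```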

Posted 1 month ago

Apply
Page 2 of 2