Jobs
Interviews

8 ETL/ELT Design Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

As a Data Engineer with 3+ years of experience, your role will involve building and managing data pipelines, data models, and cloud-based analytics solutions on Google Cloud Platform (GCP). You should have expertise in GCP-native services such as BigQuery, Dataflow, Pub/Sub, Cloud Storage, and Composer. Your responsibilities will include collaborating with data scientists, analysts, and business teams to ensure scalable and high-performance data infrastructure.

Key Responsibilities:
- Design, build, and optimize scalable ETL/ELT pipelines using GCP services (Dataflow, Composer, Pub/Sub, BigQuery).
- Ingest structured and unstructured data from multiple sources into cloud storage and warehouse solutions.
- Develop efficient and optimized data models in BigQuery.
- Implement partitioning, clustering, and query optimization for performance and cost efficiency (see the sketch after this listing).
- Work with GCP-native tools for data integration, storage, and analytics.
- Ensure high availability, security, and compliance of data systems.
- Partner with data scientists, analysts, and application teams to deliver reliable datasets.
- Troubleshoot issues, monitor pipelines, and ensure SLAs for data delivery.
- Implement data quality checks, monitoring, and lineage tracking.
- Follow DevOps and CI/CD practices for data engineering projects.

Required Skills & Qualifications:
- 3+ years of hands-on experience in data engineering.
- Strong expertise in GCP data services such as BigQuery, Dataflow, Cloud Composer, Pub/Sub, Cloud Storage, and Dataproc.
- Proficiency in SQL and Python for data transformation and automation.
- Good knowledge of ETL/ELT design, data modeling, and warehousing principles.
- Experience with Git, CI/CD pipelines, and version control.
- Understanding of data security, IAM, and compliance in cloud environments.

Nice-to-Have Skills:
- Familiarity with Terraform/Deployment Manager for Infrastructure as Code (IaC).
- Knowledge of Kafka, Spark, or DBT for advanced data engineering use cases.
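To make the partitioning and clustering responsibility concrete, here is a minimal sketch using the google-cloud-bigquery Python client. The project, dataset, table, and column names are hypothetical, not taken from the posting; a real pipeline would derive them from configuration.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Hypothetical table: daily-partitioned on event_date, clustered by customer_id.
table = bigquery.Table(
    "my-project.analytics.events",  # placeholder project/dataset/table
    schema=[
        bigquery.SchemaField("event_date", "DATE"),
        bigquery.SchemaField("customer_id", "STRING"),
        bigquery.SchemaField("payload", "STRING"),
    ],
)
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="event_date",  # partition pruning keeps scans, and therefore cost, small
)
table.clustering_fields = ["customer_id"]  # co-locates rows for selective filters

client.create_table(table, exists_ok=True)
```

Queries that filter on event_date and customer_id then scan only the matching partitions and clustered blocks, which is where the performance and cost efficiency the listing mentions comes from.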

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Data Engineer, you will play a crucial role in architecting end-to-end data platforms on Azure, utilizing Azure Databricks, ADF, and ADLS Gen2. Your primary focus will involve defining frameworks for data ingestion, transformation, and orchestration, ensuring efficient and scalable cloud-native ETL/ELT design. Your expertise in SQL and Python will be essential for data modeling and optimization, and you will contribute to establishing data quality and governance guidelines. Additionally, you will lead the design of data lakehouse solutions encompassing both batch and streaming data processing (a sketch of this pattern follows below). Collaborating with cross-functional teams, you will implement CI/CD practices and performance-tuning strategies to enhance overall data platform efficiency. Your innovative approach will be instrumental in shaping the future of data architecture within the organization.
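As a hedged sketch of the lakehouse ingestion pattern described above, the following Databricks-style PySpark snippet uses Auto Loader to stream raw files from ADLS Gen2 into a Delta table. The storage account, container names, and paths are placeholders, and the "cloudFiles" source is Databricks-specific, so this only runs on a Databricks cluster.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lakehouse-ingest").getOrCreate()

# Hypothetical ADLS Gen2 locations; real values would come from ADF/Databricks config.
source_path = "abfss://raw@examplelake.dfs.core.windows.net/events/"
target_path = "abfss://bronze@examplelake.dfs.core.windows.net/events_delta/"
checkpoint = "abfss://bronze@examplelake.dfs.core.windows.net/_checkpoints/events/"

# Auto Loader incrementally discovers new files as they land in the raw zone.
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", checkpoint)  # schema tracking for inference
    .load(source_path)
)

# Append into a Delta bronze table; the checkpoint makes the stream restartable.
(
    stream.writeStream.format("delta")
    .option("checkpointLocation", checkpoint)
    .outputMode("append")
    .start(target_path)
)
```

The same Delta table can then serve both batch transformations and downstream streaming consumers, which is the essence of the batch-plus-streaming lakehouse design the role calls for.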

Posted 4 days ago

Apply

9.0 - 13.0 years

0 Lacs

Karnataka

On-site

As a Principal Data Engineer at Autodesk, you will play a crucial role in the Product Access and Compliance team, which focuses on identifying and exposing non-compliant users of Autodesk software. Your main responsibility will be to develop best practices and make architectural decisions to enhance data processing and analytics pipelines. This is a significant strategic initiative for the company, and your contributions will directly impact the platform's reliability, resiliency, and scalability.

You should be someone who excels in autonomy and has a proven track record of driving long-term projects to completion. Your attention to detail and commitment to quality will be essential in ensuring the success of the data infrastructure at Autodesk.

Working with a tech stack that includes Airflow, Hive, EMR, PySpark, Presto, Jenkins, Snowflake, Datadog, and various AWS services, you will collaborate closely with the Sr. Manager, Software Development in a hybrid position based in Bengaluru. Your responsibilities will include modernizing and expanding existing systems, understanding business requirements to architect scalable solutions, developing CI/CD ETL pipelines, and owning data quality within your areas of expertise. You will also lead the design and implementation of complex data processing systems, provide mentorship to junior engineers, and ensure alignment with project goals and timelines.

To qualify for this role, you should hold a Bachelor's degree with at least 9 years of relevant industry experience in big data systems, data processing, and SQL data warehouses. You must have hands-on experience with large Hadoop projects, PySpark, and optimizing Spark applications. Strong programming skills in Python and SQL, knowledge of distributed data processing systems, and experience with public cloud platforms like AWS are also required. Additionally, familiarity with SQL, data modeling techniques, ETL/ELT design, workflow management tools, and BI tools will be advantageous.

At Autodesk, we are committed to creating a diverse and inclusive workplace where everyone can thrive. If you are passionate about leveraging data to drive meaningful impact and want to be part of a dynamic team that fosters continuous learning and improvement, we encourage you to apply and join us in shaping a better future for all.
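Since this stack centers on Airflow-orchestrated PySpark jobs, here is a minimal, hypothetical Airflow 2.4+-style DAG sketch. The DAG id, schedule, and task bodies are illustrative stand-ins, not Autodesk's actual pipeline.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: in practice this might launch a PySpark step on EMR to read Hive/S3.
    print("extracting partition for", context["ds"])


def load(**context):
    # Placeholder: e.g., loading the transformed output into Snowflake.
    print("loading partition for", context["ds"])


with DAG(
    dag_id="compliance_metrics_daily",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # simple linear dependency; real DAGs fan out/in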

Posted 1 week ago

Apply

8.0 - 12.0 years

0 - 0 Lacs

Pune, Maharashtra

On-site

As a Senior Azure Data Engineer at Keyrus, you will play a crucial role in connecting data engineering with front-end development. You will collaborate closely with Data Scientists and UI Developers (React.js) to develop, construct, and safeguard data services that support a cutting-edge platform. This role is hands-on and collaborative, requiring extensive experience in the Azure data ecosystem, API development, and contemporary DevOps practices.

Your primary responsibilities will include:
- Building and managing scalable Azure data pipelines (ADF, Synapse, Databricks, DBT) to cater to dynamic frontend interfaces.
- Establishing API access layers to expose data to front-end applications and external services.
- Collaborating with the Data Science team to operationalize models and insights.
- Working directly with React.js developers to facilitate UI data integration.
- Ensuring data security, integrity, and monitoring across systems.
- Implementing and maintaining CI/CD pipelines for seamless deployment.
- Automating and managing cloud infrastructure using Terraform, Kubernetes, and Azure App Services.
- Supporting data migration initiatives from legacy infrastructure to modern platforms like Mesh and Finbourne.
- Refactoring legacy pipelines with code reuse, version control, and infrastructure-as-code best practices.
- Analyzing, mapping, and documenting financial data models across various systems.

Our ideal candidate has 8+ years of experience in data engineering, particularly in the Azure ecosystem (ADF, Synapse, Databricks, App Services), with a proven track record of developing and hosting secure, scalable REST APIs and experience supporting cross-functional teams, especially front-end/UI and data science groups. Hands-on experience with Terraform, Kubernetes (Azure AKS), CI/CD, and cloud automation is essential. Strong expertise in ETL/ELT design, performance tuning, and pipeline monitoring is highly desired. Proficiency in Python, SQL, and Java, along with knowledge of data security practices, governance, and compliance (e.g., GDPR), is necessary. Familiarity with big data tools (e.g., Spark, Kafka), version control (Git), and testing frameworks for data pipelines is also preferred, as are excellent communication skills and the ability to convey technical concepts to diverse stakeholders.

Joining Keyrus means becoming part of a leading company in the Data Intelligence field and a prominent player in Management Consultancy and Digital Experience. You will be part of a dynamic and continually learning organization with an established international network of thought-leading professionals committed to bridging the gap between innovation and business. Keyrus offers you the chance to demonstrate your skills and potential, gain experience through client interaction, and grow based on your capabilities and interests in a vibrant and supportive environment. Additionally, Keyrus UK provides a competitive holiday allowance, a comprehensive Private Medical Plan, flexible working patterns, a Workplace Pension Scheme, Sodexo Lifestyle Benefits, a Discretionary Bonus Scheme, a Referral Bonus Scheme, and Training & Development via KLX (Keyrus Learning Experience).
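To illustrate the API access layer responsibility, here is a minimal sketch of a REST endpoint in Python. FastAPI is one plausible framework choice, not one named in the posting, and the in-memory data is a stand-in for a real query against Synapse or Databricks SQL; the route, model, and field names are hypothetical.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="data-service")  # hypothetical service name


class Position(BaseModel):
    portfolio_id: str
    instrument: str
    market_value: float


# Stand-in for a warehouse query; a real service would query Synapse/Databricks.
_POSITIONS = {
    "p1": [Position(portfolio_id="p1", instrument="GB00XYZ", market_value=1250000.0)],
}


@app.get("/portfolios/{portfolio_id}/positions", response_model=list[Position])
def get_positions(portfolio_id: str) -> list[Position]:
    """Expose curated positions to the React.js front end as JSON."""
    if portfolio_id not in _POSITIONS:
        raise HTTPException(status_code=404, detail="unknown portfolio")
    return _POSITIONS[portfolio_id]
```

A response model like this gives the UI team a stable, typed contract over the data layer, which is the point of an API access layer in this architecture.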

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As an experienced Azure Data Engineer with 5 to 7 years of experience, you will be responsible for designing, developing, and managing end-to-end data pipelines on Azure Cloud using Microsoft Fabric. Your expertise in Azure Synapse, Data Lake, Data Factory, Databricks, and Power BI will be crucial for seamless data integration and reporting. You will play a key role in implementing data ingestion, transformation, and modeling pipelines supporting structured and unstructured data. Collaboration with data analysts, business stakeholders, and other engineers will be essential to define data requirements and deliver scalable data solutions. Your focus on optimizing data performance and cost through efficient architecture and coding practices will contribute to the success of the projects. Ensuring data security, privacy, and compliance with organizational policies will be part of your responsibilities, as will monitoring, troubleshooting, and improving data workflows for reliability and performance.

Required skills include 5 to 7 years of experience as a Data Engineer, with at least 2+ years on the Azure Data Stack. Hands-on experience with Microsoft Fabric, Azure Synapse Analytics, Data Factory, Data Lake, SQL Server, and Power BI integration is essential. Strong knowledge of data modeling, ETL/ELT design, and performance tuning, as well as proficiency in SQL and Python/PySpark scripting, are required, along with experience with CI/CD pipelines and DevOps practices for data solutions. An understanding of data governance, security, and compliance frameworks is also necessary, together with excellent communication, problem-solving, and stakeholder management skills. A Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field is preferred.

Nice-to-have qualifications include the Microsoft Azure Data Engineer Certification (DP-203), experience in real-time streaming (e.g., Azure Stream Analytics or Event Hub), and exposure to Power BI semantic models and Direct Lake mode in Microsoft Fabric.

Join us to work with the latest in Microsoft's modern data stack - Microsoft Fabric - collaborate with a team of passionate data professionals, work on enterprise-grade, large-scale data projects, experience a fast-paced, learning-focused work environment, and have immediate visibility and impact on key business decisions.
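As one hedged example of the ingestion-and-modeling pipelines this role describes, here is a short PySpark snippet that reads raw files, applies a transformation, and writes a partitioned Delta table. The lakehouse paths, column names, and dedup key are placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("fabric-transform").getOrCreate()

# Hypothetical lakehouse locations (e.g., OneLake/ADLS paths in a Fabric workspace).
raw = spark.read.json("Files/raw/sales/")

curated = (
    raw.withColumn("order_date", F.to_date("order_ts"))      # derive a date column
    .withColumn("net_amount", F.col("amount") - F.col("discount"))
    .dropDuplicates(["order_id"])                             # idempotent re-runs
)

# Partitioning by date keeps downstream Power BI/SQL scans cheap.
(
    curated.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("Tables/curated_sales")
)
```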

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As an Azure Data Engineer with expertise in Microsoft Fabric and modern data platform components, you will be responsible for designing, developing, and managing end-to-end data pipelines on Azure Cloud. Your primary focus will be on ensuring performance and scalability and on delivering business value through efficient data solutions. You will collaborate with various teams to define data requirements and implement data ingestion, transformation, and modeling pipelines supporting structured and unstructured data. Additionally, you will work with Azure Synapse, Data Lake, Data Factory, Databricks, and Power BI for seamless data integration and reporting. Your role will involve optimizing data performance and cost through efficient architecture and coding practices and ensuring data security, privacy, and compliance with organizational policies. Monitoring, troubleshooting, and improving data workflows for reliability and performance will also be part of your responsibilities.

To excel in this role, you should have 5 to 7 years of experience as a Data Engineer, with at least 2+ years working on the Azure Data Stack. Hands-on experience with Microsoft Fabric, Azure Synapse Analytics, Data Factory, Data Lake, SQL Server, and Power BI integration is crucial. Strong skills in data modeling, ETL/ELT design, and performance tuning are required, along with proficiency in SQL and Python/PySpark scripting. Experience with CI/CD pipelines and DevOps practices for data solutions, an understanding of data governance, security, and compliance frameworks, and excellent communication, problem-solving, and stakeholder management skills are essential for success. A Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field is preferred.

The Microsoft Azure Data Engineer Certification (DP-203), experience in real-time streaming (e.g., Azure Stream Analytics or Event Hub), and exposure to Power BI semantic models and Direct Lake mode in Microsoft Fabric would be advantageous.

Join us to work with the latest in Microsoft's modern data stack - Microsoft Fabric - collaborate with a team of passionate data professionals, work on enterprise-grade, large-scale data projects, experience a fast-paced, learning-focused work environment, and have immediate visibility and impact on key business decisions.
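The advantageous real-time streaming skill can be sketched with Spark Structured Streaming. This windowed aggregation uses the built-in rate source so it runs anywhere for demonstration; a production job would instead read from Event Hub or Kafka. All names here are illustrative.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("streaming-demo").getOrCreate()

# The rate source emits (timestamp, value) rows; it stands in for Event Hub/Kafka.
events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

# Tumbling 1-minute windows with a watermark so late data is bounded.
counts = (
    events.withWatermark("timestamp", "2 minutes")
    .groupBy(F.window("timestamp", "1 minute"))
    .agg(F.count("*").alias("events"))
)

query = (
    counts.writeStream.outputMode("update")  # emit updated window counts
    .format("console")
    .option("truncate", "false")
    .start()
)
query.awaitTermination()
```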

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

As a Big Data Engineer, you will be responsible for expanding and optimizing the data and database architecture, as well as optimizing data flow and collection for cross-functional teams. You should be an experienced data pipeline builder and data wrangler who enjoys optimizing and building data systems. Your role will involve supporting software developers, database architects, data analysts, and data scientists on data initiatives, ensuring that an optimal data delivery architecture is consistent throughout ongoing projects. You must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.

You should have sound knowledge of Spark architecture and distributed computing, including Spark Streaming. Proficiency in Spark, including RDD and DataFrame core functions, troubleshooting, and performance tuning, is essential, as are a good understanding of object-oriented concepts, hands-on experience with Scala/Java/Kotlin, excellent programming logic and technique, and experience with functional programming and OOP concepts in Scala/Java/Kotlin.

Your responsibilities will include managing a team of Associates and Senior Associates, ensuring proper utilization across projects, and mentoring new members during project onboarding. You should be able to understand client requirements and design, develop, and deliver solutions from scratch. Experience with AWS cloud is preferable, along with the ability to analyze, re-architect, and re-platform on-premises data warehouses to data platforms on the cloud. Leading client calls to address delays, blockers, escalations, and requirements collation, as well as managing project timing, client expectations, and deadlines, are key aspects of the role, as are project and team management, facilitating regular team meetings, understanding business requirements, analyzing different approaches, and planning deliverables and milestones. Optimization, maintenance, and support of pipelines, strong analytical and logical skills, and the ability to tackle new challenges comfortably and keep learning are essential qualities for this role.

The ideal candidate has 4 to 7 years of relevant experience.

Must-have skills:
- Scala/Java/Kotlin
- Spark and Spark Streaming (a minimal Kafka-to-Spark sketch follows this listing)
- SQL (intermediate to advanced level)
- Any cloud platform (AWS preferred)
- Kafka/Kinesis or any streaming service
- Object-Oriented Programming
- Hive
- ETL/ELT design experience and CI/CD experience for ETL pipeline deployment

Good-to-have skills:
- Proficiency in Git or similar version-control tools
- Knowledge of CI/CD and microservices
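For the Kafka plus Spark Streaming must-haves, a minimal PySpark Structured Streaming read from Kafka might look like this (the listing names Scala/Java/Kotlin for application code; PySpark is used here for consistency with the other sketches on this page). The broker address and topic are placeholders, and the matching spark-sql-kafka package must be supplied at submit time.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Requires org.apache.spark:spark-sql-kafka-0-10 (matching Spark version) on submit.
spark = SparkSession.builder.appName("kafka-consumer").getOrCreate()

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "events")                        # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast to string before parsing downstream.
decoded = raw.select(
    F.col("key").cast("string"),
    F.col("value").cast("string"),
    "timestamp",
)

query = decoded.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```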

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity

As a Senior BI Consultant, you will be responsible for supporting and enhancing Business Intelligence and Data Analytics platforms with a primary focus on Power BI and Databricks. You will work across global engagements, helping clients translate complex data into actionable insights. This role involves day-to-day application management, dashboard development, troubleshooting, and stakeholder collaboration to ensure high data quality, performance, and availability.

Your Key Responsibilities

- BI Support & Monitoring: Provide daily application support for Power BI dashboards and Databricks pipelines, resolving incidents, fulfilling service requests, and implementing enhancements.
- Dashboard Development: Design, develop, and maintain Power BI reports and data models tailored to evolving business requirements.
- Root Cause Analysis: Investigate and resolve data/reporting issues, bugs, and performance bottlenecks through detailed root cause analysis.
- Requirement Gathering: Collaborate with business users and technical stakeholders to define BI requirements and translate them into scalable solutions.
- Documentation: Maintain technical documentation, including data flows, dashboard usage guides, and QA test scripts.
- On-Call & Shift Support: Participate in shift rotations and be available for on-call support for critical business scenarios.
- Integration & Data Modeling: Ensure effective data integration from diverse systems and maintain clean, performant data models within Power BI and Databricks.

Skills and attributes for success

- Hands-on expertise in Power BI, including DAX, data modeling, and report optimization
- Working experience in Databricks, especially with Delta Lake, SQL, and PySpark for data transformation
- Familiarity with ETL/ELT design, especially within Azure data ecosystems
- Ability to troubleshoot BI performance issues and manage service tickets efficiently
- Strong communication skills to interact with global stakeholders and cross-functional teams
- Ability to manage and prioritize multiple support tasks in a fast-paced environment

To qualify for the role, you must have

- 3-7 years of experience in Business Intelligence and Application Support
- Strong hands-on skills in Power BI and Databricks, preferably in a global delivery model
- Working knowledge of ETL processes, data validation, and performance tuning
- Familiarity with ITSM practices for service request, incident, and change management
- Willingness to work in a 24x7 rotational shift-based support environment with on-call requirements
- Bachelor's degree in Computer Science, Engineering, or equivalent work experience

There are no location constraints for this role.

Technologies and Tools

Must haves:
- Power BI: Expertise in report design, data modeling, and DAX
- Databricks: Experience with notebooks, Delta Lake, SQL, and PySpark (see the MERGE sketch after this listing)
- Azure Ecosystem: Familiarity with Azure Data Lake and Azure Synapse (consumer layer)
- ETL & Data Modeling: Good understanding of data integration and modeling best practices
- ITSM Tools: Experience with ServiceNow or equivalent for ticketing and change management

Good to have:
- Data Integration: Experience integrating with ERP, CRM, or POS systems
- Python: For data transformation and automation scripting
- Monitoring: Awareness of Azure Monitor or Log Analytics for pipeline health
- Certifications: Microsoft Certified Data Analyst Associate or Databricks Certified Data Engineer Associate
- Industry Exposure: Experience in retail or consumer goods industries

What we look for

People with client orientation, experience, and enthusiasm to learn new things in this fast-moving environment. This is an opportunity to be part of a market-leading, multi-disciplinary team of hundreds of professionals, and to work with EY BI application maintenance practices globally with leading businesses across a range of industries.

What we offer

EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations - Argentina, China, India, the Philippines, Poland and the UK - and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We'll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.

- Continuous learning: You'll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We'll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We'll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You'll be embraced for who you are and empowered to use your voice to help others find theirs.

About EY

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

If you can demonstrate that you meet the criteria above, please contact us as soon as possible. The exceptional EY experience. It's yours to build.
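To ground the Databricks and Delta Lake expectations, one common maintenance pattern in BI support work is an incremental MERGE (upsert) into a Delta table that feeds Power BI. This is a hedged sketch using the delta-spark API; the paths and join key are hypothetical, and the cluster must have delta-spark configured.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

# Hypothetical locations: a daily extract upserted into a curated reporting table.
updates = spark.read.parquet("/mnt/raw/daily_sales/")
target = DeltaTable.forPath(spark, "/mnt/gold/sales")

# Upsert: update matching keys, insert new rows; keeps the Power BI source current
# without rewriting the whole table.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```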

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies