
254 ETL Pipelines Jobs - Page 9

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

4.0 - 5.0 years

6 - 7 Lacs

Karnataka

Work from Office

Develop and manage ETL pipelines using Python. Responsible for transforming and loading data efficiently from source to destination systems, ensuring clean and accurate data.
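For illustration, a minimal sketch of the kind of Python ETL work this listing describes, assuming pandas and a SQLite destination; the file, table, and column names are hypothetical placeholders.

# Minimal ETL sketch: CSV source -> cleaned DataFrame -> SQLite destination.
# File name, column names, and the SQLite target are placeholders.
import sqlite3
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates()
    df = df.dropna(subset=["customer_id"])  # keep only rows with a key
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
    return df

def load(df: pd.DataFrame, db_path: str) -> None:
    with sqlite3.connect(db_path) as conn:
        df.to_sql("orders", conn, if_exists="replace", index=False)

if __name__ == "__main__":
    load(transform(extract("orders.csv")), "warehouse.db")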

Posted 2 months ago

Apply

4.0 - 5.0 years

6 - 7 Lacs

Karnataka

Work from Office

Focus on designing, developing, and maintaining Snowflake data environments. Responsible for data modeling, ETL pipelines, and query optimization to ensure efficient and secure data processing.
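For illustration, a hedged sketch of querying Snowflake from Python with the snowflake-connector-python package; the account, credentials, warehouse, and table names are placeholders, and a real pipeline would pull credentials from a secrets store.

# Sketch of basic Snowflake access from Python (snowflake-connector-python).
# All connection values and the table name are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="your_account",    # placeholder
    user="your_user",          # placeholder
    password="your_password",  # placeholder; use a secrets manager in practice
    warehouse="COMPUTE_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    # Pruning-friendly filters and clustering keys are typical optimization levers.
    cur.execute("SELECT order_date, SUM(amount) FROM orders GROUP BY order_date")
    for row in cur.fetchmany(10):
        print(row)
finally:
    conn.close()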

Posted 2 months ago

Apply

2.0 - 5.0 years

4 - 8 Lacs

Kolkata

Work from Office

Capgemini Invent

Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses.

Your role
- Develop and maintain data pipelines tailored to Azure environments, ensuring security and compliance with client data standards.
- Collaborate with cross-functional teams to gather data requirements, translate them into technical specifications, and develop data models.
- Leverage Python libraries for data handling, enhancing processing efficiency and robustness.
- Ensure SQL workflows meet client performance standards and handle large data volumes effectively.
- Build and maintain reliable ETL pipelines, supporting full and incremental loads and ensuring data integrity and scalability in ETL processes (a sketch of the incremental pattern follows this listing).
- Implement CI/CD pipelines for automated deployment and testing of data solutions.
- Optimize and tune data workflows and processes to ensure high performance and reliability.
- Monitor, troubleshoot, and optimize data processes for performance and reliability.
- Document data infrastructure and workflows, and maintain current knowledge of data engineering and cloud technology.

Your profile
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 4+ years of data engineering experience with a strong focus on Azure data services for client-centric solutions.
- Extensive expertise in Azure Synapse, Data Lake Storage, Data Factory, Databricks, and Blob Storage, ensuring secure, compliant data handling for clients.
- Good interpersonal communication skills.
- Skilled in designing and maintaining scalable data pipelines tailored to client needs in Azure environments.
- Proficient in SQL and PL/SQL for complex data processing and client-specific analytics.

What you will love about working here
We recognize the significance of flexible work arrangements. Whether it is remote work or flexible work hours, you will get an environment that supports a healthy work-life balance. At the heart of our mission is your career growth: our career growth programs and diverse professions are crafted to help you explore a world of opportunities, and you can equip yourself with valuable certifications in the latest technologies such as generative AI.

About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions, leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with deep industry expertise and a partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
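The "full and incremental loads" requirement above usually means watermark-based extraction. A minimal sketch, assuming SQLAlchemy and pandas; the connection string, table, and column names are hypothetical, and the target table is assumed to exist (bootstrapped by a full load).

# Hedged sketch of a watermark-based incremental load.
# DSN, tables, and columns are placeholders.
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("postgresql://user:pass@host/db")  # placeholder DSN

def incremental_load(source_table: str, target_table: str, watermark_col: str) -> None:
    with engine.begin() as conn:
        # Highest watermark already loaded; epoch fallback for an empty target.
        last = conn.execute(
            text(f"SELECT COALESCE(MAX({watermark_col}), '1970-01-01') FROM {target_table}")
        ).scalar()
        # Pull only rows newer than the watermark, then append them.
        df = pd.read_sql(
            text(f"SELECT * FROM {source_table} WHERE {watermark_col} > :wm"),
            conn, params={"wm": last},
        )
        df.to_sql(target_table, conn, if_exists="append", index=False)

incremental_load("src_orders", "dw_orders", "updated_at")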

Posted 2 months ago

Apply

6.0 - 10.0 years

10 - 15 Lacs

Chennai, Bengaluru

Work from Office

Responsibilities:
- Azure Data Factory: design ETL processes and create complex pipelines
- SQL / T-SQL
- Python
- XML, JSON, Excel, CSV
- Databricks, Data Lake, Synapse
- Azure DLS Gen 2
- Working with graph databases
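A small illustrative sketch of normalizing the listed file formats (XML, JSON, Excel, CSV) with pandas; the file names are placeholders, and pandas.read_xml assumes pandas 1.3+ with lxml installed.

# Sketch: reading each listed format into a DataFrame with pandas.
# Input file names are placeholders.
import pandas as pd

frames = {
    "csv":   pd.read_csv("input.csv"),
    "json":  pd.read_json("input.json"),
    "excel": pd.read_excel("input.xlsx", sheet_name=0),  # needs openpyxl
    "xml":   pd.read_xml("input.xml"),                   # needs lxml, pandas >= 1.3
}
for name, df in frames.items():
    print(name, df.shape)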

Posted 2 months ago

Apply

7.0 - 9.0 years

25 - 35 Lacs

Pune

Hybrid

Warm greetings from Dataceria Software Solutions Pvt Ltd.

We are looking for: Senior Azure Data Engineer
Domain: BFSI
Immediate joiners. Send your resume to careers@dataceria.com.

As a Senior Azure Data Engineer, you will play a pivotal role in bridging data engineering with front-end development. You'll work closely with Data Scientists and UI Developers (React.js) to design, build, and secure data services that power a next-generation platform. This is a hands-on, collaborative role requiring deep experience across the Azure data ecosystem, API development, and modern DevOps practices.

Your responsibilities will include:
- Building and maintaining scalable Azure data pipelines (ADF, Synapse, Databricks, dbt) to serve dynamic front-end interfaces.
- Creating API access layers to expose data to front-end applications and external services (a sketch follows this listing).
- Collaborating with the Data Science team to operationalize models and insights.
- Working directly with React.js developers to support UI data integration.
- Ensuring data security, integrity, and monitoring across systems.
- Implementing and maintaining CI/CD pipelines for seamless deployment.
- Automating and managing cloud infrastructure using Terraform, Kubernetes, and Azure App Services.
- Supporting data migration initiatives from legacy infrastructure to modern platforms like Data Mesh.
- Refactoring legacy pipelines with code reuse, version control, and infrastructure-as-code best practices.
- Analyzing, mapping, and documenting financial data models across various systems.

What we're looking for:
- 8+ years of experience in data engineering, with a strong focus on the Azure ecosystem (ADF, Synapse, Databricks, App Services).
- Proven ability to develop and host secure, scalable REST APIs.
- Experience supporting cross-functional teams, especially front-end/UI and data science groups, is a plus.
- Hands-on experience with Terraform, Kubernetes (Azure AKS), CI/CD, and cloud automation.
- Strong expertise in ETL/ELT design, performance tuning, and pipeline monitoring.
- Solid command of Python and SQL, and optionally Scala, Java, or PowerShell.
- Knowledge of data security practices, governance, and compliance (e.g., GDPR).
- Familiarity with big data tools (e.g., Spark, Kafka), version control (Git), and testing frameworks for data pipelines.
- Excellent communication skills and the ability to explain technical concepts to diverse stakeholders.

Joining: Immediate
Work location: Pune (hybrid)
Open position: Senior Azure Data Engineer

If interested, please share your updated resume to careers@dataceria.com. We welcome applications from skilled candidates who are open to working in a hybrid model. Candidates with less experience but strong technical abilities are also encouraged to apply.

Dataceria Software Solutions Pvt Ltd
Follow our LinkedIn for more job openings: https://www.linkedin.com/company/dataceria/
Email: careers@dataceria.com
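As a hedged illustration of the "API access layers" responsibility above, a minimal FastAPI endpoint serving query results to a front end; the route, table, and the SQLite stand-in for an Azure SQL/Synapse source are hypothetical.

# Minimal data-API sketch with FastAPI. Run with: uvicorn main:app --reload
# SQLite stands in for the real warehouse; schema and route are placeholders.
import sqlite3
from fastapi import FastAPI

app = FastAPI()

@app.get("/api/orders/{customer_id}")
def get_orders(customer_id: int):
    with sqlite3.connect("warehouse.db") as conn:
        rows = conn.execute(
            "SELECT id, amount, order_date FROM orders WHERE customer_id = ?",
            (customer_id,),  # parameterized to avoid SQL injection
        ).fetchall()
    return [{"id": r[0], "amount": r[1], "order_date": r[2]} for r in rows]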

Posted 2 months ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Guwahati

Work from Office

Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.
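For illustration, a hedged sketch of loading and querying data on Google BigQuery, one of the platforms this listing names; it assumes the google-cloud-bigquery package and configured credentials, and the dataset, table, and file names are placeholders.

# Sketch: load a CSV into BigQuery, then run an aggregation query.
# Requires google-cloud-bigquery and GOOGLE_APPLICATION_CREDENTIALS.
from google.cloud import bigquery

client = bigquery.Client()  # uses the default project from credentials

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,  # infer the schema from the file
)
with open("events.csv", "rb") as f:
    client.load_table_from_file(f, "my_dataset.events", job_config=job_config).result()

query = "SELECT event_type, COUNT(*) AS n FROM my_dataset.events GROUP BY event_type"
for row in client.query(query).result():
    print(row.event_type, row.n)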

Posted 2 months ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Kochi

Work from Office

Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.

Posted 2 months ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Kanpur

Work from Office

Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.

Posted 2 months ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Mumbai

Work from Office

Job Opening: Data Engineer (Contract | 6 Months)
Location: Hyderabad | Chennai | Remote flexibility possible
Type: Contract | Duration: 6 months

We are seeking an experienced Data Engineer to join our team for a 6-month contract assignment. The ideal candidate will work on data warehouse development, ETL pipelines, and analytics enablement using Snowflake, Azure Data Factory (ADF), dbt, and other tools. This role requires strong hands-on experience with data integration platforms, documentation, and pipeline optimization, especially in cloud environments such as Azure and AWS.

Key responsibilities:
- Build and maintain ETL pipelines using Fivetran, dbt, and Azure Data Factory (see the sketch after this listing)
- Monitor and support production ETL jobs
- Develop and maintain data lineage documentation for all systems
- Design data mapping and documentation to aid QA/UAT testing
- Evaluate and recommend modern data integration tools
- Optimize shared data workflows and batch schedules
- Collaborate with Data Quality Analysts to ensure accuracy and integrity of data flows
- Participate in performance tuning and improvement recommendations
- Support BI/MDM initiatives including Data Vault and Data Lakes

Required skills:
- 7+ years of experience in data engineering roles
- Strong command of SQL, with 5+ years of hands-on development
- Deep experience with Snowflake, Azure Data Factory, and dbt
- Strong background with ETL tools (Informatica, Talend, ADF, dbt, etc.)
- Bachelor's degree in CS, Engineering, Math, or a related field
- Experience in the healthcare domain (working with PHI/PII data)
- Familiarity with scripting/programming (Python, Perl, Java, Linux-based environments)
- Excellent communication and documentation skills
- Experience with BI tools like Power BI, Cognos, etc.
- Organized self-starter with strong time-management and critical-thinking abilities

Nice to have:
- Experience with Data Lakes and Data Vaults
- QA and UAT alignment with clear development documentation
- Multi-cloud experience (especially Azure and AWS)

Contract details:
- Role: Data Engineer
- Contract duration: 6 months
- Location options: Hyderabad / Chennai (remote flexibility available)
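As an illustration of the dbt work above, a hedged sketch of invoking dbt from Python using the programmatic API added in dbt-core 1.5; the selector and project directory are placeholders, and in practice the invocation would come from an orchestrator such as ADF or a scheduler.

# Hedged sketch: programmatic dbt run (dbt-core >= 1.5).
# Selector and project path are placeholders.
from dbt.cli.main import dbtRunner

runner = dbtRunner()
result = runner.invoke(
    ["run", "--select", "staging+", "--project-dir", "/opt/dbt/project"]
)
if not result.success:
    raise RuntimeError(f"dbt run failed: {result.exception}")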

Posted 2 months ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Varanasi

Work from Office

Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.

Posted 2 months ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Agra

Work from Office

Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.

Posted 2 months ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Surat

Work from Office

Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.

Posted 2 months ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Ludhiana

Work from Office

Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.

Posted 2 months ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Coimbatore

Work from Office

Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.

Posted 2 months ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Jaipur

Work from Office

Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms like AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience in handling large-scale datasets is preferred.

Posted 2 months ago

Apply

1.0 - 5.0 years

9 - 13 Lacs

Bengaluru

Work from Office

We are looking for a skilled and experienced PySpark Tech Lead to join our dynamic engineering team. In this role, you will lead the development and execution of high-performance big data solutions using PySpark. You will work closely with data scientists, engineers, and architects to design and implement scalable data pipelines and analytics solutions. As a Tech Lead, you will mentor and guide a team of engineers, ensuring the adoption of best practices for building robust and efficient systems while driving innovation in the use of data technologies.

Key responsibilities:
- Lead and develop: Design and implement scalable, high-performance data pipelines and ETL processes using PySpark on distributed systems.
- Tech leadership: Provide technical direction and leadership to a team of engineers, ensuring the delivery of high-quality solutions that meet both business and technical requirements.
- Architect solutions: Develop and enforce best practices for architecture, design, and coding standards. Lead the design of complex data engineering workflows, ensuring they are optimized for performance and cost-effectiveness.
- Collaboration: Collaborate with data scientists, analysts, and other stakeholders to understand data requirements, translating them into scalable technical solutions.
- Optimization and performance tuning: Optimize large-scale data processing pipelines to improve efficiency and performance. Implement best practices for memory management, data partitioning, and parallelization in Spark (see the sketch after this listing).
- Code review and mentorship: Conduct code reviews to ensure high-quality, maintainable, and scalable code. Provide guidance and mentorship to junior and mid-level engineers.
- Innovation and best practices: Stay current on new data technologies and trends, bringing fresh ideas and solutions to the team. Implement continuous integration and deployment pipelines for data workflows.
- Problem solving: Identify bottlenecks, troubleshoot, and resolve issues related to data quality, pipeline failures, and performance optimization.

Skills and qualifications:
- Experience: 7+ years of hands-on experience in PySpark and large-scale data processing.
- Technical expertise: Strong knowledge of PySpark, Spark SQL, and Apache Kafka. Experience with cloud platforms like AWS (EMR, S3), Google Cloud, or Azure. In-depth understanding of distributed computing, parallel processing, and data engineering principles.
- Data engineering: Expertise in building ETL pipelines, data wrangling, and working with structured and unstructured data. Experience with relational and NoSQL databases such as SQL, MongoDB, or DynamoDB. Familiarity with data warehousing solutions and query optimization techniques.
- Leadership and communication: Proven ability to lead a technical team, make key architectural decisions, and mentor junior engineers. Excellent communication skills, with the ability to collaborate effectively with cross-functional teams and stakeholders.
- Problem solving: Strong analytical skills with the ability to solve complex problems involving large datasets and distributed systems.
- Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field (or equivalent practical experience).
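As a hedged illustration of the partitioning and parallelization duties above, a PySpark sketch showing a broadcast join and key-based repartitioning; the paths, columns, and shuffle-partition setting are placeholders.

# Sketch of common Spark tuning levers: shuffle partitions, broadcast join,
# and repartitioning by the aggregation key. Paths and columns are placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = (SparkSession.builder
         .appName("etl-tuning-sketch")
         .config("spark.sql.shuffle.partitions", "200")  # tune to cluster size
         .getOrCreate())

events = spark.read.parquet("s3://bucket/events/")     # large fact table
dims = spark.read.parquet("s3://bucket/dim_users/")    # small dimension table

# Broadcast the small side to avoid shuffling the large table.
joined = events.join(broadcast(dims), "user_id")

# Repartition by the grouping key before the wide aggregation.
daily = (joined.repartition("event_date")
         .groupBy("event_date")
         .agg(F.count("*").alias("events"),
              F.countDistinct("user_id").alias("users")))

daily.write.mode("overwrite").partitionBy("event_date").parquet("s3://bucket/out/daily/")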

Posted 2 months ago

Apply

7.0 - 9.0 years

25 - 35 Lacs

Chennai, Bengaluru

Hybrid

Warm greetings from Dataceria Software Solutions Pvt Ltd.

We are looking for: Senior Azure Data Engineer
Domain: BFSI

As a Senior Azure Data Engineer, you will play a pivotal role in bridging data engineering with front-end development. You'll work closely with Data Scientists and UI Developers (React.js) to design, build, and secure data services that power a next-generation platform. This is a hands-on, collaborative role requiring deep experience across the Azure data ecosystem, API development, and modern DevOps practices.

Your responsibilities will include:
- Building and maintaining scalable Azure data pipelines (ADF, Synapse, Databricks, dbt) to serve dynamic front-end interfaces.
- Creating API access layers to expose data to front-end applications and external services.
- Collaborating with the Data Science team to operationalize models and insights.
- Working directly with React.js developers to support UI data integration.
- Ensuring data security, integrity, and monitoring across systems.
- Implementing and maintaining CI/CD pipelines for seamless deployment.
- Automating and managing cloud infrastructure using Terraform, Kubernetes, and Azure App Services.
- Supporting data migration initiatives from legacy infrastructure to modern platforms like Data Mesh.
- Refactoring legacy pipelines with code reuse, version control, and infrastructure-as-code best practices.
- Analyzing, mapping, and documenting financial data models across various systems.

What we're looking for:
- 8+ years of experience in data engineering, with a strong focus on the Azure ecosystem (ADF, Synapse, Databricks, App Services).
- Proven ability to develop and host secure, scalable REST APIs.
- Experience supporting cross-functional teams, especially front-end/UI and data science groups, is a plus.
- Hands-on experience with Terraform, Kubernetes (Azure AKS), CI/CD, and cloud automation.
- Strong expertise in ETL/ELT design, performance tuning, and pipeline monitoring.
- Solid command of Python and SQL, and optionally Scala, Java, or PowerShell.
- Knowledge of data security practices, governance, and compliance (e.g., GDPR).
- Familiarity with big data tools (e.g., Spark, Kafka), version control (Git), and testing frameworks for data pipelines.
- Excellent communication skills and the ability to explain technical concepts to diverse stakeholders.

Joining: Immediate
Work location: Bangalore (hybrid), Chennai
Open position: Senior Azure Data Engineer

If interested, please share your updated resume to careers@dataceria.com. We welcome applications from skilled candidates who are open to working in a hybrid model. Candidates with less experience but strong technical abilities are also encouraged to apply.

Dataceria Software Solutions Pvt Ltd
Follow our LinkedIn for more job openings: https://www.linkedin.com/company/dataceria/
Email: careers@dataceria.com

Posted 2 months ago

Apply

3.0 - 8.0 years

6 - 12 Lacs

Kolkata

Work from Office

Job Title: AI/ML Data Engineer
Location: Kolkata, India
Experience: 3+ years
Industry: IT / AI & Data Analytics

Job summary: We are hiring an experienced AI/ML Data Engineer to design and build scalable data pipelines and ETL processes to support analytics and machine learning projects. The ideal candidate will have strong Python and SQL skills, hands-on experience with tools like Apache Airflow and Kafka, and working knowledge of cloud platforms (AWS, GCP, or Azure). A strong understanding of data transformation, feature engineering, and data automation is essential.

Key skills required:
- ETL and data pipeline development (see the sketch after this listing)
- Python and SQL programming
- Apache Airflow / Kafka / Spark / Hadoop
- Cloud platforms: AWS / GCP / Azure
- Data cleaning and feature engineering
- Strong problem-solving and business understanding

Preferred profile: Candidates with a B.Tech / M.Tech / MCA in Computer Science or Data Engineering and 3+ years of hands-on experience in building data solutions, who can work closely with cross-functional teams and support AI/ML initiatives.
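For illustration, a hedged sketch of an Apache Airflow DAG wiring extract/transform/load steps like those this posting describes; the callables and schedule are placeholders, and the schedule argument assumes Airflow 2.4+.

# Sketch: a three-step ETL DAG in Airflow. Task bodies are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data")       # placeholder

def transform():
    print("clean / featurize")   # placeholder

def load():
    print("write to warehouse")  # placeholder

with DAG(
    dag_id="ml_feature_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",           # Airflow 2.4+; earlier versions use schedule_interval
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3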

Posted 2 months ago

Apply

4.0 - 9.0 years

9 - 19 Lacs

Hyderabad, Bengaluru

Work from Office

Key responsibilities:
- Python & PySpark:
  - Writing efficient ETL (extract, transform, load) pipelines
  - Implementing data transformations using PySpark DataFrames and RDDs
  - Optimizing Spark jobs for performance and scalability
- Apache Spark:
  - Managing distributed data processing
  - Implementing batch and streaming data processing
  - Tuning Spark configurations for efficient resource utilization
- Unix shell scripting:
  - Automating data workflows and job scheduling
  - Writing shell scripts for file management and log processing
  - Managing cron jobs for scheduled tasks
- Google Cloud Platform (GCP) & BigQuery:
  - Designing data warehouse solutions using BigQuery
  - Writing optimized SQL queries for analytics
  - Integrating Spark with BigQuery for large-scale data processing (see the sketch after this listing)
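A hedged sketch of the Spark-BigQuery integration named above, assuming the spark-bigquery connector; the connector version, project, dataset, and staging bucket are placeholders.

# Sketch: read from and write to BigQuery from PySpark via the
# spark-bigquery connector. All names below are placeholders.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("spark-bq-sketch")
         .config("spark.jars.packages",
                 "com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.36.1")
         .getOrCreate())

df = (spark.read.format("bigquery")
      .option("table", "my_project.analytics.events")
      .load())

agg = df.groupBy("event_type").count()

(agg.write.format("bigquery")
 .option("table", "my_project.analytics.event_counts")
 .option("temporaryGcsBucket", "my-staging-bucket")  # staging area for the write
 .mode("overwrite")
 .save())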

Posted 2 months ago

Apply

7.0 - 10.0 years

22 - 30 Lacs

Kolkata, Mumbai, Pune

Hybrid

Primary skills: Azure/AWS, ADB, Kafka, Java/Python, ETL pipelines, Kubernetes, SQL
Secondary skills: Snowflake
A 79-year-old reputed MNC company.

Posted 2 months ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate,

We are seeking a Cloud Monitoring Specialist to set up observability and real-time monitoring in cloud environments.

Key responsibilities:
- Configure logging and metrics collection
- Set up alerts and dashboards using Grafana, Prometheus, etc. (see the sketch after this listing)
- Optimize system visibility for performance and security

Required skills and qualifications:
- Familiarity with the ELK stack, Datadog, New Relic, or cloud-native monitoring tools
- Strong troubleshooting and root-cause analysis skills
- Knowledge of distributed systems

Soft skills:
- Strong troubleshooting and problem-solving skills
- Ability to work independently and in a team
- Excellent communication and documentation skills

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager, Integra Technologies
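As a hedged illustration of the metrics and alerting duties above, a minimal Prometheus exporter in Python using prometheus_client; the metric names and the random probe are placeholders for real health checks, and Grafana would chart the scraped series.

# Sketch: expose custom metrics on /metrics for Prometheus to scrape.
# Metric names and the simulated probe are placeholders.
import random
import time
from prometheus_client import Counter, Gauge, start_http_server

QUEUE_DEPTH = Gauge("etl_queue_depth", "Jobs waiting in the ETL queue")
JOBS_FAILED = Counter("etl_jobs_failed_total", "Total failed ETL jobs")

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://host:8000/metrics
    while True:
        QUEUE_DEPTH.set(random.randint(0, 50))  # stand-in for a real probe
        if random.random() < 0.05:
            JOBS_FAILED.inc()
        time.sleep(15)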

Posted 2 months ago

Apply

10.0 - 15.0 years

12 - 22 Lacs

New Delhi, Gurugram

Hybrid

Team leadership and management:
- Lead, mentor, and develop a team of data engineers.
- Foster a collaborative and innovative team environment.
- Conduct performance evaluations and support professional growth.

Data engineering and architecture:
- Architect and implement scalable data solutions using Azure Databricks and Snowflake.
- Design, build, and maintain robust data pipelines with a solid understanding of ETL/ELT processes.
- Optimize data workflows for performance, reliability, and scalability.

Solution architecture:
- Architect comprehensive data solutions tailored to business needs.
- Lead the design and implementation of data warehouses, ensuring alignment with organizational objectives.
- Collaborate with stakeholders to define and refine data requirements and solutions.

AI integration:
- Work alongside data scientists and AI specialists to integrate machine learning models into data pipelines.
- Implement AI-driven solutions to enhance data processing and analytics capabilities.

Engineering project management:
- Manage data engineering projects from inception to completion, ensuring timely delivery and adherence to project goals.
- Utilize project management methodologies to track progress, allocate resources, and mitigate risks.
- Coordinate with stakeholders to define project requirements and objectives.

Infrastructure as code and automation:
- Implement and manage infrastructure using Terraform.
- Develop and maintain CI/CD pipelines to automate deployments and ensure continuous integration and delivery of data solutions.

Quality assurance and best practices:
- Establish and enforce data engineering best practices and standards.
- Ensure data quality, security, and compliance across all data initiatives.
- Conduct code reviews and ensure adherence to coding standards.

Collaboration and communication:
- Work closely with data analysts, business intelligence teams, and other stakeholders to understand data needs and deliver solutions.
- Communicate technical concepts and project statuses effectively to non-technical stakeholders.

Qualifications:
- Undergraduate degree in Computer Science, Engineering, Information Technology, or a related field, or equivalent experience.

Experience:
- 8+ years of overall experience in data engineering.
- 2+ years of experience managing data engineering teams.
- Proven experience with Azure Databricks and Snowflake.
- Solid experience in designing data solutions for data warehouses.
- Hands-on experience with Terraform for infrastructure as code.
- Strong knowledge of CI/CD tools and practices.
- Experience integrating AI and machine learning models into data pipelines.

Technical skills:
- Proficiency in Spark, Scala, Python, SQL, and Databricks.
- Proven Unix scripting and SQL skills.
- Strong understanding of SQL and database management.
- Familiarity with data warehousing, ETL/ELT processes, and big data technologies.
- Experience with cloud platforms, preferably Microsoft Azure.

Project management:
- Proven ability to manage multiple projects simultaneously.
- Familiarity with project management tools (e.g., Jira, Trello, Asana, Rally).
- Strong organizational and time-management skills.

Soft skills:
- Excellent leadership and team management abilities.
- Ability to work collaboratively in a fast-paced environment.
- Proven ability to perform with minimal supervision.
- Solid work prioritization, planning, and organizational skills.
- Leadership qualities including being proactive, thoughtful, thorough, decisive, and flexible.

Posted 2 months ago

Apply

0.0 - 2.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

We are looking for a DevOps Senior Engineer in the Data Engineering team to help support next-generation analytics applications on Oracle Cloud. This posting is for a DevOps Senior Engineer in the Oracle Analytics Warehouse product development organization: a fully managed cloud service that provides customers a turn-key enterprise warehouse on the cloud for Fusion Applications. The service is built on a sophisticated technology stack featuring a brand-new data integration platform and an advanced business analytics platform. https://www.oracle.com/solutions/business-analytics/analytics-for-applications.html

We are looking for a senior engineer with experience in supporting data warehousing products. As a member of the product development organization, the focus will be on working with development teams, providing timely support to customers, and identifying and implementing process automation for the cloud BI product.

Requirements:
- BS or higher degree in Computer Science / Engineering, or equivalent experience
- Proven experience supporting business customers on any cloud or on-premise BI application
- Experience in SQL/PL-SQL and excellent debugging skills
- Experience diagnosing network latency and intermittent issues, and reading and analyzing log files
- Good functional knowledge in the ERP, Finance, HCM, or EBS domain
- Working experience with any in-demand ERP application such as Oracle EBS or Fusion is helpful
- Good programming skills in Python/Java
- Exposure to cloud infrastructure; Oracle Cloud Infrastructure (OCI) is helpful
- Experience in performance-tuning SQL and understanding ETL pipelines
- Build, configure, manage, and coordinate all build and release engineering activities
- Strong logical/critical thinking and problem-resolution skills
- Excellent interpersonal skills

Career Level: IC2

Roles and responsibilities:
- As a member of Pipeline Production Operations, address customer issues and tickets within defined SLAs.
- Proactively identify and resolve potential problems to prevent them from occurring and improve the overall customer experience.
- Approach each case with the goal of ensuring Oracle Analytics products perform efficiently, addressing any underlying or additional problems uncovered during each customer engagement.
- Coordinate and connect with different team members to formulate solutions to customer issues.
- Ensure full understanding of the issue, including impact to the customer; recommend solutions and follow through to resolution, or escalate the case in a timely manner if no resolution can be found.
- Gather logs and configuration details and attempt to reproduce reported issues.
- Develop and improve the knowledge base for issues and their solutions; participate in knowledge sharing via technical discussions and knowledge-base documentation.
- Prioritize workload based on severity and demonstrate a sense of urgency when handling cases.
- Find opportunities for process improvement and automation by building the right utilities and tools.
- Willingness to work in shifts and on weekends based on the support rota.

Posted 2 months ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Pune

Hybrid

Sr. Azure Data Engineer

About Cloudaeon: Cloudaeon is a global technology consulting and services company. We support companies in managing cloud infrastructure and solutions with the help of big data, DevOps and analytics. We offer first-class solutions and services that use big data and always exceed customer expectations. Our deep vertical knowledge, combined with expertise in several enterprise-class big data platforms, helps develop targeted solutions to meet our customers' business needs. Our global team consists of experienced professionals with experience in various tech stacks. Every member of our team is very active and committed to helping our customers achieve their goals.

Job role: We are looking for a Senior Azure Data Engineer with overall 5+ years of experience to join our team. The ideal candidate should have expertise in Azure Data Factory (ADF), Databricks, SQL, and Python, and experience working with SAP IS-Auto as a data source. This role involves data modeling, systematic layer modeling, and ETL/ELT pipeline development to enable efficient data processing and analytics. You will use various methods to transform raw data into useful data systems. Overall, you will strive for efficiency by aligning data systems with business goals.

Responsibilities:
- Develop and optimize ETL pipelines: build robust and scalable data pipelines using ADF, Databricks, and Python for data ingestion, transformation, and loading.
- Data modeling and systematic layer modeling: design logical, physical, and systematic data models for structured and unstructured data.
- Integrate SAP IS-Auto: extract, transform, and load data from SAP IS-Auto into Azure-based data platforms.
- Database management: develop and optimize SQL queries, stored procedures, and indexing strategies to enhance performance.
- Big data processing: work with Azure Databricks for distributed computing, Spark for large-scale processing, and Delta Lake for optimized storage (see the sketch after this listing).
- Data quality and governance: implement data validation, lineage tracking, and security measures for high-quality, compliant data.
- Collaboration: work closely with business analysts, data scientists, and DevOps teams to ensure data availability and usability.

Requirements:
- Azure cloud expertise: strong experience in Azure Data Factory (ADF), Databricks, and Azure Synapse.
- Programming: proficiency in Python for data processing, automation, and scripting.
- SQL and database skills: advanced knowledge of SQL, T-SQL, or PL/SQL for data manipulation.
- SAP IS-Auto data handling: experience integrating SAP IS-Auto as a data source into data pipelines.
- Data modeling: hands-on experience in dimensional modeling, systematic layer modeling, and entity-relationship modeling.
- Big data frameworks: strong understanding of Apache Spark, Delta Lake, and distributed computing.
- Performance optimization: expertise in query optimization, indexing, and performance tuning.
- Data governance and security: knowledge of RBAC, encryption, and data privacy standards.
- Strong problem-solving skills coupled with good communication skills.
- Open-minded, inquisitive, life-long learner.
- Good conversion of high-level business and technical requirements into technical specs.
- Comfortable using Azure cloud technologies.
- Customer-centric, passionate about delivering great digital products and services.

Preferred qualifications:
- Experience with CI/CD for data pipelines using Azure DevOps.
- Knowledge of Kafka/Event Hub for real-time data processing.
- Experience with Power BI/Tableau for data visualization (not mandatory, but a plus).
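As a hedged illustration of the Databricks/Delta Lake work above, a sketch of a Delta Lake upsert (MERGE); it assumes the delta-spark package or a Databricks runtime, and the paths and join key are placeholders.

# Sketch: upsert new records into a curated Delta table via MERGE.
# Paths and the vehicle_id key are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # assumes a Delta-enabled runtime
updates = spark.read.parquet("/mnt/raw/vehicles_delta/")  # placeholder source

target = DeltaTable.forPath(spark, "/mnt/curated/vehicles")
(target.alias("t")
 .merge(updates.alias("s"), "t.vehicle_id = s.vehicle_id")
 .whenMatchedUpdateAll()      # update rows that already exist
 .whenNotMatchedInsertAll()   # insert rows that are new
 .execute())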

Posted 2 months ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Gurugram

Hybrid

Hi, wishes from GSN!

GSN has been providing corporate search services, identifying and placing talented professionals with reputed IT and non-IT clients in India, for the last 20 years. At present, GSN is hiring a PySpark Developer for one of our leading MNC clients.

Looking for immediate joiners.

Work location: Gurugram
Job role: PySpark Developer
Experience: 5-10 years
CTC range: 20-28 LPA
Work type: Hybrid only

Job description:
- Must be strong in advanced SQL, e.g., joins and aggregations (see the sketch after this listing)
- Should have good experience in PySpark (at least 4 years)
- Good to have knowledge of AWS services
- Experience across the data lifecycle
- Design and develop ETL pipelines using PySpark on the AWS framework

If interested, kindly apply for an immediate response.

Thanks & Regards
Sathya K
GSN Consulting
Mob: 8939666794
Mail ID: sathya@gsnhr.net
Web: https://g.co/kgs/UAsF9W
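As a hedged illustration of the advanced SQL requirement above, a PySpark SQL sketch combining a join with aggregations; the S3 paths, tables, and columns are hypothetical.

# Sketch: joins and aggregations via Spark SQL over AWS-style parquet inputs.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-sketch").getOrCreate()
spark.read.parquet("s3://bucket/orders/").createOrReplaceTempView("orders")
spark.read.parquet("s3://bucket/customers/").createOrReplaceTempView("customers")

result = spark.sql("""
    SELECT c.region,
           COUNT(*)      AS order_count,
           SUM(o.amount) AS revenue,
           AVG(o.amount) AS avg_order_value
    FROM orders o
    JOIN customers c ON o.customer_id = c.customer_id
    WHERE o.order_date >= date_sub(current_date(), 90)
    GROUP BY c.region
    ORDER BY revenue DESC
""")
result.show()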

Posted 2 months ago

Apply