
453 Data Engineer Jobs - Page 15

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

1.0 - 5.0 years

4 - 8 Lacs

Gurugram

Work from Office

Job Requirements
• 3-6 years of experience running medium to large scale production environments
• Proven programming/scripting skills in at least one language (e.g., Python, Java, Scala, JavaScript)
• Experience with at least one cloud provider's services and infrastructure (AWS, GCP, Azure)
• Proficiency in writing analytical SQL queries (a brief illustrative sketch follows this list)
• Experience building analytical tools that use data pipelines to provide key, actionable insights
• Knowledge of big-data tools such as Hadoop, Kafka, and Spark is a plus
• A proactive approach to spotting problems, areas for improvement, and performance bottlenecks
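To make the analytical-SQL requirement concrete, here is a minimal, self-contained sketch using Python's built-in sqlite3; the table, columns, and data are invented purely for illustration and are not from the listing.

```python
# Hypothetical example: a window-function query that surfaces each region's
# top product by revenue -- the sort of "actionable insight" query the
# listing describes. Uses only the Python standard library.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, product TEXT, revenue REAL);
    INSERT INTO sales VALUES
        ('North', 'widget', 120.0), ('North', 'gadget', 90.0),
        ('South', 'widget', 40.0),  ('South', 'gadget', 75.0);
""")

query = """
    SELECT region, product, revenue
    FROM (
        SELECT region, product, revenue,
               RANK() OVER (PARTITION BY region ORDER BY revenue DESC) AS rnk
        FROM sales
    ) AS ranked
    WHERE rnk = 1;
"""
for row in conn.execute(query):
    print(row)  # ('North', 'widget', 120.0) and ('South', 'gadget', 75.0)
```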

Posted 3 months ago

Apply

1.0 - 5.0 years

3 - 7 Lacs

Chandigarh

Work from Office

Key Responsibilities
• Assist in building and maintaining data pipelines on GCP using services like BigQuery, Dataflow, Pub/Sub, Cloud Storage, etc.
• Support data ingestion, transformation, and storage processes for structured and unstructured datasets.
• Participate in performance tuning and optimization of existing data workflows.
• Collaborate with data analysts, engineers, and stakeholders to ensure reliable data delivery.
• Document code, processes, and architecture for reproducibility and future reference.
• Debug issues in data pipelines and contribute to their resolution.
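The GCP responsibilities above revolve around ingesting files from Cloud Storage into BigQuery. A minimal sketch of one such load step using the google-cloud-bigquery client; the project, dataset, bucket, and file names are placeholders, not details from the listing.

```python
# Hypothetical sketch: load a CSV landed in Cloud Storage into a BigQuery table.
# Requires `pip install google-cloud-bigquery` and GCP credentials.
from google.cloud import bigquery

client = bigquery.Client()

table_id = "my-project.analytics.orders"                 # placeholder destination table
uri = "gs://my-landing-bucket/orders/2024-01-01.csv"     # placeholder source file

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,      # skip the header row
    autodetect=True,          # infer the schema from the file
    write_disposition="WRITE_APPEND",
)

load_job = client.load_table_from_uri(uri, table_id, job_config=job_config)
load_job.result()  # block until the load job finishes
print(f"Loaded {client.get_table(table_id).num_rows} rows into {table_id}")
```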

Posted 3 months ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

Position Overview
We are looking for an experienced Data Engineer to join our dynamic team. If you are passionate about building scalable software solutions, have expertise in system design and data structures, and are familiar with various databases, we would love to hear from you. ShyftLabs is a growing data product company founded in early 2020 that works primarily with Fortune 500 companies. We deliver digital solutions that help accelerate the growth of businesses across industries by focusing on creating value through innovation.

Job Description
• Act as the first point of contact for data issues in the Master Data Management (MDM) system.
• Investigate and resolve data-related issues, such as duplicate data or missing records, ensuring timely and accurate updates.
• Coordinate with the Product Manager, QA Lead, and Technology Lead to prioritize and address tickets effectively.
• Work on data-related issues, ensuring compliance with regulations.
• Build and optimize data models for efficient storage and query performance, including work with Snowflake tables.
• Write complex SQL queries for data manipulation and retrieval.
• Collaborate with other teams to diagnose and fix more complex issues that may require code changes or system updates.
• Use AWS resources such as CloudWatch, Lambda, SQS, and Kinesis Streams for data storage, transformation, and analysis.
• Update and maintain the knowledge base to document common issues and their solutions.
• Monitor system logs and alerts to proactively identify potential issues before they affect customers.
• Participate in team meetings to provide updates on ongoing issues and contribute to process improvements.
• Maintain documentation of data engineering processes, data models, and system configurations.

Basic Qualifications
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• Minimum of 3 years of experience in data engineering, preferably related to MDM systems.
• Strong expertise in SQL and other database query languages.
• Hands-on experience with data warehousing solutions and relational database management systems (RDBMS).
• Proficiency in ETL tools and data pipeline construction.
• Familiarity with AWS services.
• Excellent programming skills, preferably in Python.
• Strong understanding of data privacy regulations such as DSAR and CCPA.
• Good communication skills, both written and verbal, with the ability to articulate complex data concepts to non-technical stakeholders.
• Strong problem-solving skills and attention to detail.

We are proud to offer a competitive salary alongside a strong healthcare insurance and benefits package. The role is preferably hybrid, with 3 days per week spent in office. We pride ourselves on the growth of our employees, offering extensive learning and development resources. (Ref: PI267947624)

Posted 3 months ago

Apply

1.0 - 5.0 years

3 - 7 Lacs

Gurugram

Work from Office

Key Responsibilities
• Assist in building and maintaining data pipelines on GCP using services like BigQuery, Dataflow, Pub/Sub, Cloud Storage, etc.
• Support data ingestion, transformation, and storage processes for structured and unstructured datasets.
• Participate in performance tuning and optimization of existing data workflows.
• Collaborate with data analysts, engineers, and stakeholders to ensure reliable data delivery.
• Document code, processes, and architecture for reproducibility and future reference.
• Debug issues in data pipelines and contribute to their resolution.

Posted 3 months ago

Apply

8.0 - 13.0 years

20 - 35 Lacs

Kolkata, Hyderabad, Bengaluru

Hybrid

With a startup spirit and 115,000+ curious and courageous minds, we have the expertise to go deep with the world's biggest brands—and we have fun doing it. We dream in digital, dare in reality, and reinvent the ways companies work to make an impact far bigger than just our bottom line. We're harnessing the power of technology and humanity to create meaningful transformation that moves us forward in our pursuit of a world that works better for people. Now, we're calling upon the thinkers and doers—those with a natural curiosity and a hunger to keep learning and keep growing, people who thrive on fearlessly experimenting, seizing opportunities, and pushing boundaries to turn our vision into reality. And as you help us create a better world, we will help you build your own intellectual firepower. Welcome to the relentless pursuit of better.

Inviting applications for the role of Lead Consultant, AWS Data Lake!

Responsibilities
• Knowledge of Data Lake on AWS services, with exposure to creating external tables and Spark programming; able to work in Python.
• Writing effective and scalable Python code for automation, data wrangling, and ETL.
• Designing and implementing robust applications and building automations in Python.
• Debugging applications to ensure low latency and high availability.
• Writing optimized custom SQL queries.
• Experienced in team and client handling.
• Strong documentation skills covering systems, design, and delivery.
• Integrating user-facing elements into applications.
• Knowledge of external tables and Data Lake concepts.
• Able to allocate tasks, collaborate on status exchanges, and drive items to successful closure.
• Implementing security and data protection solutions.
• Must be capable of writing SQL queries to validate dashboard outputs.
• Must be able to translate visual requirements into detailed technical specifications.
• Well versed in handling Excel, CSV, text, JSON, and other unstructured file formats using Python (see the pandas sketch below this listing).
• Expertise in at least one popular Python framework (such as Django, Flask, or Pyramid).
• Good understanding of and exposure to Git, Bamboo, Confluence, and Jira.
• Good with DataFrames and ANSI SQL using pandas.
• Team player with a collaborative approach and excellent communication skills.

Qualifications we seek in you!

Minimum Qualifications
• BE / B.Tech / MCA
• Excellent written and verbal communication skills
• Good knowledge of Python and PySpark

Preferred Qualifications / Skills
• Strong ETL knowledge on any ETL tool is good to have.
• Knowledge of AWS cloud and Snowflake is good to have.
• Knowledge of PySpark is a plus.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way; examples of such scams include purchasing a 'starter kit', paying to apply, or purchasing equipment or training.
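Since the listing above emphasizes handling CSV/JSON files and SQL-style work with pandas DataFrames, here is a small, self-contained pandas sketch; the file names and columns are hypothetical, not taken from the posting.

```python
# Hypothetical sketch: combine a CSV extract with a JSON lookup file using pandas,
# then apply a SQL-style aggregation -- the kind of wrangling the listing describes.
import pandas as pd

orders = pd.read_csv("orders.csv")          # e.g. columns: order_id, customer_id, amount
customers = pd.read_json("customers.json")  # e.g. columns: customer_id, segment

# ANSI-SQL equivalent: SELECT segment, COUNT(*), SUM(amount) FROM ... GROUP BY segment
enriched = orders.merge(customers, on="customer_id", how="left")
summary = (
    enriched.groupby("segment", dropna=False)
            .agg(order_count=("order_id", "count"), total_amount=("amount", "sum"))
            .reset_index()
)
summary.to_csv("segment_summary.csv", index=False)
```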

Posted 3 months ago

Apply

3.0 - 8.0 years

20 - 30 Lacs

Chennai

Hybrid

Job Title: Senior Data Engineer – Data Products
Location: Chennai, India
Open Roles: 2
Mode: Hybrid

About the Role
Are you a hands-on data engineer who thrives on solving complex data challenges and building modern cloud-native solutions? We're looking for two experienced Senior Data Engineers to join our growing Data Engineering team. This is an exciting opportunity to work on cutting-edge data platform initiatives that power advanced analytics, AI solutions, and digital transformation across a global enterprise. In this role, you'll help design and build reusable, scalable, and secure data pipelines on a multi-cloud infrastructure, while collaborating with cross-functional teams in a highly agile environment.

What You'll Do
• Design and build robust data pipelines and ETL frameworks using modern tools and cloud platforms.
• Implement lakehouse architecture (Bronze/Silver/Gold layers) and support data product publishing via Unity Catalog.
• Work with structured and unstructured enterprise data, including ERP, CRM, and product data systems.
• Optimize pipeline performance, reliability, and security across AWS and Azure environments.
• Automate infrastructure using IaC tools like Terraform and AWS CDK.
• Collaborate closely with data scientists, analysts, and platform teams to deliver actionable data products.
• Participate in agile ceremonies, conduct code reviews, and contribute to team knowledge sharing.
• Ensure compliance with data privacy, cybersecurity, and governance policies.

What You Bring
• 3+ years of hands-on experience in data engineering roles.
• Strong command of SQL and Python; experience with Scala is a plus.
• Proficiency in cloud platforms (AWS, Azure), Databricks, DBT, Airflow, and version control tools like GitLab.
• Hands-on experience implementing lakehouse architectures and multi-hop data flows using Delta Lake.
• Background in working with enterprise data systems like SAP, Salesforce, and other business-critical platforms.
• Familiarity with DevOps, DataOps, and agile delivery methods (Jira, Confluence).
• Strong understanding of data security, privacy compliance, and production-grade pipeline management.
• Excellent communication skills and ability to work in global, multicultural teams.

Why Join Us?
• Opportunity to work with modern data technologies in a complex, enterprise-scale environment.
• Be part of a collaborative, forward-thinking team that values innovation and continuous learning.
• Hybrid work model that offers both flexibility and team engagement.
• A role where you can make a real impact by contributing to digital transformation and data-driven decision-making.
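The lakehouse (Bronze/Silver/Gold) and Delta Lake multi-hop flow mentioned above is commonly expressed in PySpark roughly as follows. This is a condensed sketch that assumes a Spark runtime with Delta Lake available (for example a Databricks cluster); all paths and columns are illustrative, not from the listing.

```python
# Hypothetical multi-hop (medallion) flow on Delta Lake: raw -> cleaned -> curated.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land raw files as-is, with ingestion metadata.
raw = (spark.read.json("/landing/crm/accounts/")
            .withColumn("_ingested_at", F.current_timestamp()))
raw.write.format("delta").mode("append").save("/lake/bronze/accounts")

# Silver: cleanse and de-duplicate.
silver = (spark.read.format("delta").load("/lake/bronze/accounts")
               .dropDuplicates(["account_id"])
               .filter(F.col("account_id").isNotNull()))
silver.write.format("delta").mode("overwrite").save("/lake/silver/accounts")

# Gold: business-level aggregate ready for publishing (e.g. via Unity Catalog).
gold = silver.groupBy("country").agg(F.count("*").alias("account_count"))
gold.write.format("delta").mode("overwrite").save("/lake/gold/accounts_by_country")
```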

Posted 3 months ago

Apply

8.0 - 13.0 years

22 - 25 Lacs

Hyderabad

Work from Office

Role & Responsibilities
• Design, build, and maintain complex ELT jobs that deliver business value.
• Translate high-level business requirements into technical specs.
• Ingest data from disparate sources into the data lake and data warehouse.
• Cleanse and enrich data and apply adequate data quality controls.
• Develop reusable tools to help streamline the delivery of new projects.
• Collaborate closely with other developers and provide mentorship.
• Evaluate and recommend tools, technologies, processes, and reference architectures.
• Work in an Agile development environment, attending daily stand-up meetings and delivering incremental improvements.

Basic Qualifications
• Bachelor's degree in computer science, engineering, or a related field.
• Data: 5+ years of experience with data analytics and warehousing.
• SQL: Deep knowledge of SQL and query optimization.
• ELT: Good understanding of ELT methodologies and tools.
• Troubleshooting: Experience with troubleshooting and root cause analysis to determine and remediate potential issues.
• Communication: Excellent communication, problem-solving, organizational, and analytical skills.
• Able to work independently and to provide leadership to small teams of developers.

Preferred Qualifications
• Master's degree in computer science, engineering, or a related field.
• Cloud: Experience working in a cloud environment (e.g., AWS).
• Python: Hands-on experience developing with Python.
• Advanced Data Processing: Experience using data processing technologies such as Apache Spark or Kafka.
• Workflow: Good knowledge of orchestration and scheduling tools (e.g., Apache Airflow).
• Reporting: Experience with data reporting (e.g., MicroStrategy, Tableau, Looker) and data cataloging tools (e.g., Alation).

Posted 3 months ago

Apply

4.0 - 7.0 years

8 - 15 Lacs

Hyderabad

Hybrid

We are seeking a highly motivated Senior Data Engineer or Data Engineer to join Envoy Global's tech team on a full-time, permanent basis. This role is responsible for designing, developing, and documenting data pipelines and ETL jobs to enable data migration, data integration, and data warehousing; that includes ETL jobs, reports, dashboards, and data pipelines. The person in this role will work closely with the Data Architect, BI & Analytics team, and Engineering teams to deliver data assets for Data Security, DW, and Analytics.

As our Senior Data Engineer or Data Engineer, you will be required to:
• Design, build, test, and maintain cloud-based data pipelines to acquire, profile, cleanse, consolidate, transform, and integrate data.
• Design and develop ETL processes for the Data Warehouse lifecycle (staging of data, ODS data integration, EDW, and data marts) and Data Security (data archival, data obfuscation, etc.).
• Build complex SQL queries on large datasets and performance-tune as needed.
• Design and develop data pipelines and ETL jobs using SSIS and Azure Data Factory.
• Maintain ETL packages and supporting data objects for our growing BI infrastructure.
• Carry out monitoring, tuning, and database performance analysis.
• Facilitate integration of our application with other systems by developing data pipelines.
• Prepare key documentation to support the technical design in technical specifications.
• Collaborate and work alongside other technical professionals (BI report developers, data analysts, architects).
• Communicate clearly and effectively with stakeholders.

To apply for this role, you should possess the following skills, experience, and qualifications:
• Design, develop, and document data pipelines and ETL jobs: create and maintain robust data pipelines and ETL (Extract, Transform, Load) processes to support data migration, integration, and warehousing.
• Data asset delivery: collaborate with Data Architects, BI & Analytics teams, and Engineering teams to deliver high-quality data assets for data security, data warehousing (DW), and analytics.
• ETL jobs, reports, dashboards, and data pipelines: develop and manage ETL jobs, generate reports, create dashboards, and ensure the smooth operation of data pipelines.
• 3+ years of experience as an SSIS ETL developer, Data Engineer, or a related role.
• 2+ years of experience using Azure Data Factory.
• Knowledgeable in data modelling and data warehouse concepts.
• Experience working with the Azure stack.
• Demonstrated ability to write SQL/T-SQL queries to retrieve and modify data.
• Knowledge and know-how to troubleshoot potential issues, and experience with best practices around database operations.
• Ability to work in an Agile environment.

Should you have a deep passion for technology and a desire to thrive in a rapidly evolving and creative environment, we would be delighted to receive your application. Please provide your updated resume, highlighting your relevant experience and the reasons you believe you would be a valuable member of our team. We look forward to reviewing your submission.

Posted 3 months ago

Apply

1.0 - 5.0 years

7 - 15 Lacs

Pune

Work from Office

Hi All,

Based on the job description for Data Engineer (Grade I & J), the mandatory (essential) skills required are:

Mandatory Skills
• Data Infrastructure & Engineering: designing, building, productionizing, and maintaining scalable and reliable data infrastructure and data products; experience with data modeling, pipeline idempotency, and operational observability.
• Programming Languages: proficiency in one or more object-oriented programming languages such as Python, Scala, Java, or C#.
• Database Technologies: strong experience with SQL and NoSQL databases, query structures and design best practices, and scalability, readability, and reliability in database design.
• Distributed Systems: experience implementing large-scale distributed systems in collaboration with senior team members.
• Software Engineering Best Practices: technical design and reviews; unit testing, monitoring, and alerting; code versioning, code reviews, and documentation; CI/CD pipeline development and maintenance.
• Security & Compliance: deploying secure and well-tested software and data assets; meeting privacy and compliance requirements.
• Site Reliability Engineering: service reliability, on-call rotations, defining and maintaining SLAs; infrastructure as code and containerized deployments.
• Communication & Collaboration: strong verbal and written communication skills; ability to work in cross-disciplinary teams.
• Mindset & Education: a continuous learning and improvement mindset; BS degree in Computer Science or a related field (or equivalent experience).

Thanks & Regards
Sushma Patil
HR Coordinator
sushma.patil@in.experis.com
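One of the mandatory skills above, pipeline idempotency, is often achieved with a keyed upsert so that re-running the same batch does not create duplicate rows. A hedged sketch using a Delta Lake MERGE via delta-spark; the paths, key, and columns are assumptions for illustration only.

```python
# Hypothetical idempotent load: a Delta MERGE upsert keyed on a natural key,
# so replaying a batch updates existing rows instead of duplicating them.
# Assumes delta-spark is available on the cluster.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

batch = spark.read.parquet("/incoming/customers/batch_2024_06_01/")  # placeholder path

target = DeltaTable.forPath(spark, "/lake/silver/customers")         # placeholder table
(target.alias("t")
       .merge(batch.alias("s"), "t.customer_id = s.customer_id")
       .whenMatchedUpdateAll()      # replay-safe: existing rows are updated in place
       .whenNotMatchedInsertAll()   # new rows are inserted exactly once
       .execute())
```

Partition overwrite is the other common idempotency pattern when no natural key is available.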

Posted 3 months ago

Apply

6.0 - 10.0 years

16 - 31 Lacs

Bhubaneswar, Pune, Bengaluru

Work from Office

About Client: Hiring for one of the most prestigious multinational corporations.

Job Title: Data Engineer
Experience: 6 to 10 years

Key Responsibilities:
• Design, develop, and maintain large-scale batch and real-time data processing systems using PySpark and Scala.
• Build and manage streaming pipelines using Apache Kafka.
• Work with structured and semi-structured data sources including MongoDB, flat files, APIs, and relational databases.
• Optimize and scale data pipelines to handle large volumes of data efficiently.
• Implement data quality, data governance, and monitoring frameworks.
• Collaborate with data scientists, analysts, and other engineers to support various data initiatives.
• Develop and maintain robust, reusable, and well-documented data engineering solutions.
• Troubleshoot production issues, identify root causes, and implement fixes.
• Stay up to date with emerging technologies in the big data and streaming space.

Technical Skills:
• 6 to 10 years of experience in Data Engineering or a similar role.
• Strong hands-on experience with Apache Spark (PySpark) and Scala.
• Proficiency in designing and managing Kafka streaming architectures.
• Experience with MongoDB, including indexing, aggregation, and schema design.
• Solid understanding of distributed computing, ETL/ELT processes, and data warehousing concepts.
• Experience with cloud platforms (AWS, Azure, or GCP) is a strong plus.
• Strong programming and scripting skills (Python, Scala, or Java).
• Familiarity with workflow management tools like Airflow, Luigi, or similar is a plus.
• Excellent problem-solving skills and the ability to work independently or within a team.
• Strong communication skills and the ability to collaborate effectively across teams.

Notice Period: 30/45/60/90 days
Location: Bhubaneswar, Bangalore, Pune
Mode of Work: WFO (Work From Office)

Thanks & Regards,
SWETHA
Black and White Business Solutions Pvt. Ltd.
Bangalore, Karnataka, INDIA
Contact Number: 8067432433
rathy@blackwhite.in | www.blackwhite.in
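The Kafka-plus-PySpark streaming requirement above typically maps to Spark Structured Streaming. A minimal sketch of consuming a topic and persisting it; the broker, topic, checkpoint, and output locations are placeholders, and the cluster is assumed to have the spark-sql-kafka connector available.

```python
# Hypothetical Structured Streaming job: consume a Kafka topic and persist it as Delta.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

events = (spark.readStream.format("kafka")
               .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder broker
               .option("subscribe", "orders")                       # placeholder topic
               .option("startingOffsets", "latest")
               .load()
               .select(F.col("key").cast("string"),
                       F.col("value").cast("string").alias("payload"),
                       "timestamp"))

query = (events.writeStream
               .format("delta")
               .option("checkpointLocation", "/chk/orders_stream")  # enables safe restarts
               .outputMode("append")
               .start("/lake/bronze/orders_stream"))
query.awaitTermination()
```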

Posted 3 months ago

Apply

2.0 - 5.0 years

2 - 4 Lacs

Mumbai, Mumbai Suburban, Mumbai (All Areas)

Work from Office

Role & Responsibilities
• 3 to 4+ years of hands-on experience in SQL database design, data architecture, ETL, Data Warehousing, Data Mart, Data Lake, Big Data, Cloud, and Data Governance domains.
• Take ownership of the technical aspects of implementing data pipeline and migration requirements, ensuring the platform is used to its fullest potential by designing and building applications around business stakeholder needs.
• Interface directly with stakeholders to gather requirements and own the automated end-to-end data engineering solutions.
• Implement data pipelines to automate the ingestion, transformation, and augmentation of structured, unstructured, and real-time data, and provide best practices for pipeline operations.
• Troubleshoot and remediate data quality issues raised by pipeline alerts or downstream consumers; implement Data Governance best practices.
• Create and maintain clear documentation on data models/schemas as well as transformation/validation rules.
• Implement tools that help data consumers extract, analyze, and visualize data faster through data pipelines.
• Implement data security, privacy, and compliance protocols to ensure safe data handling in line with regulatory requirements.
• Optimize data workflows and queries to ensure low latency, high throughput, and cost efficiency.
• Lead the entire software lifecycle, including hands-on development, code reviews, testing, deployment, and documentation for batch ETLs.
• Work directly with our internal product/technical teams to ensure that our technology infrastructure is seamlessly and effectively integrated.
• Migrate current data applications and pipelines to the cloud, leveraging new technologies in future.

Preferred Candidate Profile
• Graduate with an Engineering degree (CS/Electronics/IT) / MCA / MCS or equivalent, with substantial data engineering experience.
• 3+ years of recent hands-on experience with a modern programming language (Scala, Python, Java) is required; Spark/PySpark is preferred.
• Experience with configuration management and version control tools (e.g., Git) and experience working within a CI/CD framework is a plus.
• 3+ years of recent hands-on SQL programming experience in a Big Data environment is required.
• Working knowledge of PostgreSQL, RDBMS, NoSQL, and columnar databases.
• Experience developing and maintaining ETL applications and data pipelines using big data technologies is required; Apache Kafka, Spark, and Airflow experience is a must.
• Knowledge of API and microservice integration with applications.
• Experience with containerization (e.g., Docker) and orchestration (e.g., Kubernetes).
• Experience building data solutions for Power BI and web visualization applications.
• Experience with cloud is a plus.
• Experience managing multiple projects and stakeholders, with excellent communication and interpersonal skills.
• Ability to develop and organize high-quality documentation.
• Superior analytical skills and a strong sense of ownership in your work.
• Collaborate with data scientists on several projects; contribute to the development and support of analytics, including AI/ML.
• Ability to thrive in a fast-paced environment and to manage multiple, competing priorities simultaneously.
• Prior Energy & Utilities industry experience is a big plus.

Experience (min.-max. in yrs.): 3+ years of core/relevant experience
Location: Mumbai (Onsite)

Posted 3 months ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Chennai

Work from Office

Development
• Design, build, and maintain robust, scalable, and high-performance data pipelines to ingest, process, and store large volumes of structured and unstructured data.
• Utilize Apache Spark within Databricks to process big data efficiently, leveraging distributed computing to process large datasets in parallel.
• Integrate data from a variety of internal and external sources, including databases, APIs, cloud storage, and real-time streaming data.

Data Integration & Storage
• Implement and maintain data lakes and warehouses, using technologies such as Databricks, Azure Synapse, Redshift, and BigQuery to store and retrieve data.
• Design and implement data models, schemas, and architecture for efficient querying and storage.

Data Transformation & Optimization
• Leverage Databricks and Apache Spark to perform data transformations at scale, ensuring data is cleaned, transformed, and optimized for analytics.
• Write and optimize Spark SQL, PySpark, and Scala code to process large datasets in real-time and batch jobs.
• Work on ETL processes to extract, transform, and load data from various sources into cloud-based data environments.

Big Data Tools & Technologies
• Utilize cloud-based big data platforms (e.g., AWS, Azure, Google Cloud) in conjunction with Databricks for distributed data processing and storage.
• Implement and maintain data pipelines using Apache Kafka, Apache Flink, and other data streaming technologies for real-time data processing.

Collaboration & Stakeholder Engagement
• Work with data scientists, data analysts, and business stakeholders to define data requirements and deliver solutions that align with business objectives.
• Collaborate with cloud engineers, data architects, and other teams to ensure smooth integration and data flow between systems.

Monitoring & Automation
• Build and implement monitoring solutions for data pipelines, ensuring consistent performance, identifying issues, and optimizing workflows.
• Automate data ingestion, transformation, and validation processes to reduce manual intervention and increase efficiency.
• Document data pipeline processes, architectures, and data models to ensure clarity and maintainability.
• Adhere to best practices in data engineering, software development, version control, and code review.

Required Skills & Qualifications

Education
• Bachelor's degree in Computer Science, Engineering, Data Science, or a related field (or equivalent experience).

Technical Skills
• Apache Spark: strong hands-on experience with Spark, specifically within Databricks (PySpark, Scala, Spark SQL).
• Experience working with cloud-based platforms such as AWS, Azure, or Google Cloud, particularly in the context of big data processing and storage.
• Proficiency in SQL and experience with cloud data warehouses (e.g., Redshift, BigQuery, Snowflake).
• Strong programming skills in Python, Scala, or Java.

Big Data & Cloud Technologies
• Experience with distributed computing concepts and scalable data processing architectures.
• Familiarity with data lake architectures and frameworks (e.g., AWS S3, Azure Data Lake).

Data Engineering Concepts
• Strong understanding of ETL processes, data modeling, and database design.
• Experience with batch and real-time data processing techniques.
• Familiarity with data quality, data governance, and privacy regulations.

Problem Solving & Analytical Skills
• Strong troubleshooting skills for resolving issues in data pipelines and performance optimization.
• Ability to work with large, complex datasets and perform data wrangling and cleaning.

Posted 3 months ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Ahmedabad

Work from Office

Role & Responsibilities — Senior Data Engineer Job Description

GRUBBRR is seeking a mid/senior-level data engineer to help build our next-generation analytical and big data solutions. We strive to build cloud-native, consumer-first, UX-friendly kiosks and online applications across a variety of verticals, supporting enterprise clients and small businesses. Behind our consumer applications, we integrate and interact with a deep stack of payment, loyalty, and POS systems. In addition, we also provide actionable insights to enable our customers to make informed decisions. Our challenge and goal is to provide a frictionless experience for our end consumers and easy-to-use, smart management capabilities for our customers to maximize their ROIs.

Responsibilities:
• Develop and maintain data pipelines.
• Ensure data quality and accuracy.
• Design, develop, and maintain large, complex sets of data that meet non-functional and functional business requirements.
• Build the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using cloud technologies.
• Build analytical tools that utilize the data pipelines.

Skills:
• Solid experience with SQL and NoSQL.
• Strong data modeling skills for data lakes, data warehouses, and data marts, including dimensional modeling and star schemas.
• Proficient with Azure Data Factory data integration technology.
• Knowledge of Hadoop or similar big data technology.
• Knowledge of Apache Kafka, Spark, Hive, or equivalent.
• Knowledge of Azure or AWS analytics technologies.

Qualifications:
• BS in Computer Science, Applied Mathematics, or related fields (MS preferred).
• At least 8 years of experience working with OLAP systems.
• Microsoft Azure or AWS Data Engineer certification a plus.

Posted 3 months ago

Apply

5.0 - 9.0 years

15 - 30 Lacs

Hyderabad

Hybrid

Hi! Greetings of the day!

We have openings with one of our product-based client companies.

Location: Hyderabad
Notice Period: Immediate to 30 days only
Work Mode: Hybrid

Key Purpose Statement – Core Mission
The Senior Data Engineer will play a key role in designing, building, and optimizing our data infrastructure and pipelines. This individual will leverage their deep expertise in Azure Synapse, Databricks, cloud platforms, and Python programming to deliver high-quality data solutions.

Responsibilities

Data Infrastructure and Pipeline Development
• Develop and maintain complex ETL/ELT pipelines using Databricks and Azure Synapse.
• Optimize data pipelines for performance, scalability, and cost-efficiency.
• Implement best practices for data governance, quality, and security.

Cloud Platform Management
• Design and manage cloud-based data infrastructure on platforms such as Azure.
• Utilize cloud-native tools and services to enhance data processing and storage capabilities.
• Understand and design CI/CD pipelines for data engineering projects.

Programming
• Develop and maintain high-quality, reusable code in Databricks and Synapse environments for data processing and automation.
• Collaborate with data scientists and analysts to design solutions into data workflows.
• Conduct code reviews and mentor junior engineers in Python, PySpark, and SQL best practices.

If interested, please share your resume with aparna.ch@v3staffing.in

Posted 3 months ago

Apply

9.0 - 13.0 years

25 - 35 Lacs

Hyderabad

Hybrid

Senior Data Engineer
• You are familiar with AWS and Azure Cloud.
• You have extensive knowledge of Snowflake; SnowPro Core certification is a must-have.
• You have used DBT in at least one project to deploy models in production.
• You have configured and deployed Airflow and integrated various operators in Airflow (especially DBT and Snowflake).
• You can design build and release pipelines and have an understanding of the Azure DevOps ecosystem.
• You have an excellent understanding of Python (especially PySpark) and are able to write metadata-driven programs.
• You are familiar with Data Vault (Raw, Business) as well as concepts like Point-in-Time and Semantic Layer.
• You are resilient in ambiguous situations and can clearly articulate the problem in a business-friendly way.
• You believe in documenting processes, managing the artifacts, and evolving that documentation over time.
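The combination called out above (Airflow orchestrating dbt and Snowflake) can be sketched as a small DAG. This assumes the apache-airflow-providers-snowflake package is installed and a snowflake_default connection is configured; dbt is invoked through a plain BashOperator, and the project path, model, and table names are made up for illustration.

```python
# Hypothetical Airflow DAG: run dbt models, then a Snowflake row-count check.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

with DAG(
    dag_id="dbt_snowflake_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:

    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/project && dbt run --target prod",  # placeholder path
    )

    row_count_check = SnowflakeOperator(
        task_id="row_count_check",
        snowflake_conn_id="snowflake_default",
        sql="SELECT COUNT(*) FROM analytics.fct_orders WHERE load_date = CURRENT_DATE;",
    )

    dbt_run >> row_count_check  # dbt models build first, then the quality check runs
```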

Posted 3 months ago

Apply

8.0 - 13.0 years

16 - 27 Lacs

Indore, Hyderabad, Ahmedabad

Work from Office

Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI. We leverage cutting-edge technologies in data analytics, data governance, AI/ML, GenAI/LLM, and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.

Designation: Lead Data Engineer
Location: Hyderabad, Indore, Ahmedabad
Experience: 8 years

Role & Responsibilities — What You Will Do:
• Analyze business requirements.
• Analyze the data model and perform gap analysis against business requirements and Power BI; design and model the Power BI schema.
• Transform data in Power BI/SQL/ETL tools.
• Create DAX formulas, reports, and dashboards; able to write DAX formulas.
• Experience writing SQL queries and stored procedures.
• Design effective Power BI solutions based on business requirements.
• Manage a team of Power BI developers and guide their work.
• Integrate data from various sources into Power BI for analysis.
• Optimize performance of reports and dashboards for smooth usage.
• Collaborate with stakeholders to align Power BI projects with goals.
• Knowledge of data warehousing (must); data engineering is a plus.

What We Need:
• B.Tech in computer science or equivalent.
• Minimum 5+ years of relevant experience.

Posted 3 months ago

Apply

4.0 - 7.0 years

5 - 14 Lacs

Mumbai, Navi Mumbai, Mumbai (All Areas)

Work from Office

We are looking for an experienced Data Engineer to design, develop, and maintain our data pipelines, primarily focused on ingesting data into our Snowflake data platform. The ideal candidate will have strong expertise in Snowflake and practical experience with AWS services, particularly using S3 as a landing zone and an entry point to the Snowflake environment. You will be responsible for building efficient, reliable, and scalable data pipelines that are critical for our data-driven decision-making processes.

Role & responsibilities
1. Design, develop, implement, and maintain scalable and robust data pipelines to ingest data from various sources into the Snowflake data platform.
2. Utilize AWS S3 as a primary landing zone for data, ensuring efficient data transfer and integration with Snowflake.
3. Develop and manage ETL/ELT processes, focusing on data transformation, cleansing, and loading within the Snowflake and AWS ecosystem.
4. Write complex SQL queries and stored procedures in Snowflake for data manipulation, transformation, and performance optimization.
5. Monitor, troubleshoot, and optimize data pipelines for performance, reliability, and scalability.
6. Collaborate with data architects, data analysts, data scientists, and business stakeholders to understand data requirements and deliver effective solutions.
7. Ensure data quality, integrity, and governance across all data pipelines and within the Snowflake platform.
8. Implement data security best practices in AWS and Snowflake.
9. Develop and maintain comprehensive documentation for data pipelines, processes, and architectures.
10. Stay up-to-date with emerging technologies and best practices in data engineering, particularly related to Snowflake and AWS.
11. Participate in Agile/Scrum development processes, including sprint planning, daily stand-ups, and retrospectives.

Preferred candidate profile
1. Strong, hands-on proficiency with Snowflake:
   • In-depth knowledge of Snowflake architecture and features (e.g., Snowpipe, Tasks, Streams, Time Travel, Zero-Copy Cloning).
   • Experience in designing and implementing Snowflake data models (schemas, tables, views).
   • Expertise in writing and optimizing complex SQL queries in Snowflake.
   • Experience with data loading and unloading techniques in Snowflake.
2. Solid experience with AWS Cloud services:
   • Proficiency in using AWS S3 for data storage, staging, and as a landing zone for Snowflake.
   • Experience with other relevant AWS services (e.g., IAM for security, Lambda for serverless processing, Glue for ETL, if applicable).
3. Strong experience in designing and building ETL/ELT data pipelines:
   • Proficiency in at least one programming language commonly used in data engineering (e.g., Python, Scala, Java); Python is highly preferred.
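The S3-landing-zone-to-Snowflake pattern described above usually comes down to an external stage plus COPY INTO (or Snowpipe for continuous loads). A hedged sketch using the Snowflake Python connector; the account, credentials, stage, bucket, and table names are placeholders, not details from the listing.

```python
# Hypothetical sketch: batch-load files from an S3 landing zone into Snowflake
# via an external stage and COPY INTO. Requires snowflake-connector-python.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",   # placeholder credentials
    warehouse="LOAD_WH", database="RAW", schema="LANDING",
)

with conn.cursor() as cur:
    # External stage pointing at the S3 landing zone (normally created once,
    # using a storage integration rather than inline credentials).
    cur.execute("""
        CREATE STAGE IF NOT EXISTS landing_orders
        URL = 's3://my-landing-bucket/orders/'
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)
    # COPY INTO skips files it has already loaded, which makes re-runs safe.
    cur.execute("COPY INTO raw.landing.orders FROM @landing_orders")

conn.close()
```

Snowflake retains per-file load metadata (roughly 64 days), which is what makes a naive re-run of the same COPY statement idempotent.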

Posted 3 months ago

Apply

7.0 - 11.0 years

20 - 35 Lacs

Gandhinagar, Ahmedabad

Hybrid

Job Title: Senior Data Engineer
Experience: 8 to 10 years
Location: Ahmedabad & Gandhinagar
Employment Type: Full-time

Our client is a leading provider of advanced solutions for capital markets, specializing in cutting-edge trading infrastructure and software. With a global presence and a strong focus on innovation, the company empowers professional traders, brokers, and financial institutions to execute high-speed, high-performance trading strategies across multiple asset classes. Their technology is known for its reliability, low latency, and scalability, making it a preferred choice for firms seeking a competitive edge in dynamic financial environments.

Role & responsibilities
• Design, develop, and maintain scalable and reliable data pipelines using DBT and Airflow.
• Work extensively with Snowflake to optimize data storage, transformation, and access.
• Develop and maintain efficient ETL/ELT processes in Python to support analytical and operational workloads.
• Ensure high standards of data quality, consistency, and security across systems.
• Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions.
• Monitor and troubleshoot data pipelines, resolving issues proactively.
• Optimize performance of existing data workflows and recommend improvements.
• Document data engineering processes and solutions effectively.

Preferred candidate profile
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• 8-10 years of experience in data engineering or related roles.
• Strong knowledge of SQL and data warehousing principles.
• Familiarity with version control (e.g., Git) and CI/CD practices.
• Excellent problem-solving skills and attention to detail.
• Strong communication and collaboration abilities.

Preferred Skills
• Experience with cloud platforms like AWS, GCP, or Azure.
• Exposure to data governance and security best practices.
• Knowledge of modern data architecture and real-time processing frameworks.

Competitive benefits offered by our client:
• Relocation support: an additional relocation allowance to assist with moving expenses.
• Comprehensive health benefits: medical, dental, and vision coverage.
• Flexible work schedule: hybrid work model with an expectation of just 2 days on-site per week.
• Generous paid time off (PTO): 21 days per year, with the ability to roll over 1 day into the following year; additionally, 1 day per year for volunteering, 2 training days per year for uninterrupted professional development, and 1 extra PTO day during milestone years.
• Paid holidays & early dismissals: a robust paid holiday schedule with early dismissal on select days, plus generous parental leave for all genders, including adoptive parents.
• Tech resources: a rent-to-own program offering a company-provided Mac/PC laptop and/or mobile phone of choice, along with a tech accessories budget for monitors, headphones, keyboards, and other office equipment.
• Health & wellness subsidies: contributions toward gym memberships and health/wellness initiatives to support your well-being.
• Milestone anniversary bonuses: special bonuses to celebrate key career milestones.
• Inclusive & collaborative culture: a forward-thinking, culture-based organisation that values diversity and inclusion and fosters collaborative teams.

Posted 3 months ago

Apply

8.0 - 10.0 years

15 - 20 Lacs

Pune

Work from Office

Education: Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
Experience: 8-10 years

• 8+ years of experience in data engineering or a related field.
• Strong hands-on experience with Azure Databricks, Spark, Python/Scala, CI/CD, and scripting for data processing.
• Experience working with multiple file formats such as Parquet, Delta, and Iceberg.
• Knowledge of Kafka or similar streaming technologies for real-time data ingestion.
• Experience with data governance and data security in Azure.
• Proven track record of building large-scale data ingestion and ETL pipelines in cloud environments, specifically Azure.
• Deep understanding of Azure Data Services (e.g., Azure Blob Storage, Azure Data Lake, Azure SQL Data Warehouse, Event Hubs, Functions, etc.).
• Familiarity with data lakes, data warehouses, and modern data architectures.
• Experience with CI/CD pipelines, version control (Git), Jenkins, and agile methodologies.
• Understanding of cloud infrastructure and architecture principles, especially within Azure.

Technical Skills:
• Expert-level proficiency in Spark and Spark Streaming, including optimization, debugging, and troubleshooting of Spark jobs.
• Solid knowledge of Azure Databricks for scalable, distributed data processing.
• Strong coding skills in Python and Scala for data processing.
• Experience working with SQL, especially on large datasets.
• Knowledge of data formats like Iceberg, Parquet, ORC, and Delta Lake.

Leadership Skills:
• Proven ability to lead and mentor a team of data engineers, ensuring adherence to best practices.
• Excellent communication skills, capable of interacting with both technical and non-technical stakeholders.
• Strong problem-solving, analytical, and troubleshooting abilities.

Posted 3 months ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Gurugram

Hybrid

Hi,

Wishes from GSN! Pleasure connecting with you!

We have been in corporate search services, identifying and bringing in stellar, talented professionals for our reputed IT and non-IT clients in India, and have been successfully delivering on our clients' needs for the last 20 years.

At present, GSN is hiring a PySpark Developer for one of our leading MNC clients. Please find the details below:

~~~~ LOOKING FOR IMMEDIATE JOINERS ~~~~

Work Location: Gurugram
Job Role: PySpark Developer
Experience: 5-10 years
CTC Range: 20-28 LPA
Work Type: Hybrid only

JD:
• Must be strong in advanced SQL (e.g., joins and aggregations).
• Should have good experience in PySpark (at least 4 years).
• Good to have knowledge of AWS services.
• Experience across the data lifecycle.
• Design and develop ETL pipelines using PySpark on the AWS framework.

If interested, kindly APPLY for an IMMEDIATE response.

Thanks & Regards
Sathya K
GSN Consulting
Mob: 8939666794
Mail ID: sathya@gsnhr.net; Web: https://g.co/kgs/UAsF9W
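The JD above amounts to advanced SQL (joins and aggregations) expressed in PySpark. A small, self-contained sketch of that pattern; the data and column names are invented for illustration and run locally with only pyspark installed.

```python
# Hypothetical sketch: a join plus aggregation in PySpark, the "advanced SQL"
# pattern the JD mentions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.master("local[*]").appName("joins_demo").getOrCreate()

orders = spark.createDataFrame(
    [(1, "C1", 120.0), (2, "C1", 80.0), (3, "C2", 50.0)],
    ["order_id", "customer_id", "amount"],
)
customers = spark.createDataFrame(
    [("C1", "Gurugram"), ("C2", "Pune")],
    ["customer_id", "city"],
)

# Left join, then aggregate revenue and order counts per city.
revenue_by_city = (orders.join(customers, "customer_id", "left")
                         .groupBy("city")
                         .agg(F.sum("amount").alias("total_amount"),
                              F.count("order_id").alias("order_count")))
revenue_by_city.show()

spark.stop()
```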

Posted 3 months ago

Apply

8.0 - 12.0 years

15 - 27 Lacs

Mumbai, Pune, Bengaluru

Work from Office

Role & responsibilities

Job Description: We are primarily looking for a Data Engineer (AWS) with expertise in building data pipelines using Databricks and PySpark/Spark SQL on cloud distributions like AWS. Must have: AWS, Databricks. Good to have: PySpark, Snowflake, Talend.

Requirements:
• Primarily looking for a data engineer with expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions like AWS EMR, Databricks, Cloudera, etc.
• Very proficient in large-scale data operations using Databricks and overall very comfortable using Python.
• Familiarity with AWS compute, storage, and IAM concepts.
• Experience working with S3 Data Lake as the storage tier.
• Any ETL background (Talend, AWS Glue, etc.) is a plus but not required.
• Cloud warehouse experience (Snowflake, etc.) is a huge plus.
• Carefully evaluates alternative risks and solutions before taking action; optimizes the use of all available resources.
• Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.

Skills:
• Hands-on experience with Databricks, Spark SQL, and the AWS cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
• Experience with shell scripting.
• Exceptionally strong analytical and problem-solving skills.
• Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
• Strong experience with relational databases and data access methods, especially SQL.
• Excellent collaboration and cross-functional leadership skills.
• Excellent communication skills, both written and verbal.
• Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment.
• Ability to leverage data assets to respond to complex questions that require timely answers.
• Working knowledge of migrating relational and dimensional databases to the AWS cloud platform.

Mandatory Skills: Apache Spark, Databricks, Java, Python, Scala, Spark SQL.

Note: Only immediate joiners or candidates serving their notice period need apply. Interested candidates can apply.

Regards,
HR Manager

Posted 3 months ago

Apply

5.0 - 10.0 years

8 - 18 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Role: Celonis Data Engineer
Skills: Celonis, Celonis EMS, Data Engineer, SQL, PQL, ETL, OCPM
Notice Period: 30-45 days

Role & responsibilities:
• Hands-on experience with Celonis EMS (Execution Management System).
• Strong SQL skills for data extraction, transformation, and modeling.
• Proficiency in PQL (Process Query Language) for custom process analytics.
• Experience integrating Celonis with SAP, Oracle, Salesforce, or other ERP/CRM systems.
• Knowledge of ETL, data pipelines, and APIs (REST/SOAP).

Process Mining & Analytical Skills:
• Understanding of business process modeling and process optimization techniques.
• At least one OCPM project experience.
• Ability to analyze event logs and identify bottlenecks, inefficiencies, and automation opportunities.

Experience:
• 6-10 years of experience in the IT industry with data architecture / business process work, of which 3-4 years in process mining, data analytics, or business intelligence.
• Celonis certification (e.g., Celonis Data Engineer, Business Analyst, or Solution Consultant) is a plus.
• OCPM experience is a plus.

Posted 3 months ago

Apply

13.0 - 20.0 years

30 - 45 Lacs

Bengaluru

Hybrid

Job Description: Java / Big Data / SQL / Architect
• Good understanding of Java/J2EE-based scalable application development.
• Good understanding of data engineering, with hands-on experience in data transfer and data pipeline development.
• Exposure to building enterprise products/tools that improve developer productivity.
• Passionate about Gen AI / impact creation, with hands-on experience.

Posted 3 months ago

Apply

13.0 - 15.0 years

37 - 40 Lacs

Noida, Gurugram, Delhi / NCR

Work from Office

Role & responsibilities

Requirements:
• Total experience of 13+ years.
• Proficient in architecting, designing, and implementing data platforms and data applications.
• Strong experience in AWS Glue and Azure Data Factory.
• Hands-on experience with Databricks.
• Experience working with Big Data applications and distributed processing systems.
• Working experience building and maintaining ETL/ELT pipelines using modern data engineering tools and frameworks.
• Lead the architecture and implementation of data lakes, data warehouses, and real-time streaming solutions.
• Collaborate with stakeholders to understand business requirements and translate them into technical solutions.
• Participate and contribute to RFPs, workshops, PoCs, and technical solutioning discussions.
• Ensure scalability, reliability, and performance of data platforms.
• Strong communication skills and the ability to collaborate effectively with cross-functional teams.

Responsibilities:
• Writing and reviewing great-quality code.
• Understanding the client's business use cases and technical requirements and converting them into a technical design that elegantly meets the requirements.
• Mapping decisions with requirements and translating the same to developers.
• Identifying different solutions and narrowing down the best option that meets the client's requirements.
• Defining guidelines and benchmarks for NFR considerations during project implementation.
• Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers.
• Reviewing architecture and design on aspects like extensibility, scalability, security, design patterns, user experience, and NFRs, and ensuring that all relevant best practices are followed.
• Developing and designing the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to materialize it.
• Understanding and relating technology integration scenarios and applying these learnings in projects.
• Resolving issues raised during code review through exhaustive, systematic analysis of the root cause, and justifying the decisions taken.
• Carrying out PoCs to make sure that the suggested design/technologies meet the requirements.

Posted 3 months ago

Apply

5.0 - 10.0 years

12 - 22 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Job Title: Microsoft ETL Developer – Microsoft SSIS / Informatica (4 positions)

Onsite Locations: Dubai, UAE; Doha, Qatar; Riyadh, Saudi Arabia
Onsite Monthly Salary: 10k-15k AED

Offshore Locations: Pune / Hyderabad / Chennai / Bangalore / Mumbai
Offshore Annual Salary: 12-20 LPA

Note: You will need to travel onsite (UAE) on a needful basis.

Project Duration: 2 years initially
Desired Experience Level: 5-10 years
Qualification: B.Tech / M.Tech / MCA / M.Sc or equivalent

Experience Needed:
• Overall: 5 or more years of total IT experience.
• A solid 3+ years of experience as an ETL Developer/Engineer with Microsoft SSIS / Informatica.

Job Responsibilities:
• Design and develop ETL data flows.
• Design Microsoft ETL packages.
• Able to code T-SQL.
• Able to create orchestrations.
• Able to design batch jobs / orchestration runs.
• Familiarity with data models.
• Able to develop MDM (Master Data Management) and design SCD-1/2/3 as per client requirements.

Experience:
• Experience as an ETL Developer with Microsoft SSIS.
• Exposure to and experience with Azure services, including Azure Data Factory.
• Sound knowledge of BI practices and visualization tools such as Power BI / SSRS / QlikView.
• Collecting/gathering data from multiple source systems.
• Loading the data using ETL and creating automated data pipelines.
• Configuring Azure resources and services.

Skills: Microsoft SSIS, Informatica, Azure Data Factory, Spark, SQL

Nice to have:
• Any onsite experience is an added advantage, but not mandatory.
• Microsoft certifications are an added advantage.

Business Vertical: Banking / Investment Banking, Capital Markets, Securities / Stock Market Trading, Bonds / Forex Trading, Credit Risk, Payments Cards Industry (VISA / MasterCard / Amex)

Job Code: ETL_DEVP_0525
No. of positions: 04
Email: spectrumconsulting1977@gmail.com

If you are interested, please email your CV as an attachment with the job reference code [ETL_DEVP_0525] as the subject.

Posted 3 months ago

Apply