
271 Data Engineer Jobs - Page 9

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 - 13.0 years

25 - 40 Lacs

Bengaluru

Work from Office

Job Title: Data Engineer (Java + Hadoop/Spark)
Location: Bangalore (WFO)
Type: Full Time
Experience: 8-12 years
Notice Period: Immediate joiners to 30 days
Virtual drive on 1st June '25

Job Description: We are looking for a skilled Data Engineer with strong expertise in Java and hands-on experience with Hadoop or Spark. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and processing systems.

Key Responsibilities:
• Develop and maintain data pipelines using Java.
• Work with big data technologies such as Hadoop or Spark to process large datasets.
• Optimize data workflows and ensure high performance and reliability.
• Collaborate with data scientists, analysts, and other engineers on data-related initiatives.

Requirements:
• Strong programming skills in Java.
• Hands-on experience with Hadoop or Spark.
• Experience with data ingestion, transformation, and storage solutions.
• Familiarity with distributed systems and big data architecture.

If interested, send your updated resume to rosalin.m@genxhire.in or 8976791986 and share the following details: Current CTC, Expected CTC, Notice Period, Age, Reason for leaving last job.

Posted 2 months ago

Apply

3.0 - 5.0 years

10 - 12 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Data Pipelines: Proven experience in building scalable and reliable data pipelines
BigQuery: Expertise in writing complex SQL transformations; hands-on with indexing and performance optimization
Ingestion: Skilled in data scraping and ingestion through RESTful APIs and file-based sources
Orchestration: Familiarity with orchestration tools like Prefect or Apache Airflow (nice to have)
Tech Stack: Proficient in Python, FastAPI, and PostgreSQL
End-to-End Workflows: Capable of owning ingestion, transformation, and delivery processes
Location: Remote, Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
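Illustrative sketch of the REST-to-BigQuery ingestion work this listing describes. The endpoint URL, project, dataset, and table names are hypothetical placeholders, not part of the listing.

```python
# Hypothetical sketch: pull rows from a REST API and append them to BigQuery.
import requests
from google.cloud import bigquery

def ingest(api_url: str, table_id: str) -> None:
    rows = requests.get(api_url, timeout=30).json()   # expects a list of JSON records
    client = bigquery.Client()
    # load_table_from_json runs a load job against the destination table
    job = client.load_table_from_json(rows, table_id)
    job.result()                                       # wait for the load to finish
    print(f"Loaded {len(rows)} rows into {table_id}")

ingest("https://api.example.com/v1/orders", "my-project.raw.orders")
```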

Posted 2 months ago

Apply

5.0 - 10.0 years

30 - 35 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Work from Office

Strong hands-on experience as a GCP Data Engineer, with very strong SQL and PySpark skills and experience with BigQuery, Dataform, Dataplex, etc. Looking only for immediate joiners or candidates currently serving notice.
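For context, a minimal PySpark-plus-BigQuery sketch, assuming the spark-bigquery connector is available on the cluster; the project, table, and column names are hypothetical.

```python
# Hypothetical sketch: read a BigQuery table with PySpark and aggregate it.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bq-demo").getOrCreate()

orders = (spark.read.format("bigquery")
          .option("table", "my-project.sales.orders")   # placeholder table
          .load())

daily = (orders.groupBy(F.to_date("order_ts").alias("order_date"))
               .agg(F.sum("amount").alias("revenue")))

daily.show()
```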

Posted 2 months ago

Apply

2.0 - 7.0 years

40 - 45 Lacs

Chandigarh

Work from Office

Responsibilities:
• Design and develop complex data processes in coordination with business stakeholders to solve critical financial and operational processes.
• Design and develop ETL/ELT pipelines against traditional databases and distributed systems, and flexibly produce data back to the business and analytics teams for analysis.
• Work in an agile, fail-fast environment directly with business stakeholders and analysts, while recognising data reconciliation and validation requirements.
• Develop data solutions in coordination with development teams across a variety of products and technologies.
• Build processes that analyse and monitor data to help maintain controls: correctness, completeness and latency.
• Participate in design reviews and code reviews.
• Work with colleagues across global locations.
• Troubleshoot and resolve production issues.
• Performance enhancements.

Required Skills & Qualifications:
• Programming skills: Python / PySpark / Scala
• Database skills: analytical databases like Snowflake / SQL
• Good to have: Elasticsearch, Kafka, NiFi, Jupyter Notebooks
• Good to have: knowledge of AWS services like S3 / Glue / Athena / EMR / Lambda
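A minimal PySpark sketch of the kind of S3-based batch ETL this stack implies; the bucket paths and column names are hypothetical placeholders.

```python
# Hypothetical sketch: simple batch ETL with PySpark over S3 data.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-etl").getOrCreate()

raw = spark.read.option("header", True).csv("s3://example-bucket/raw/trades/")

clean = (raw.dropDuplicates(["trade_id"])                  # basic completeness control
            .withColumn("trade_date", F.to_date("trade_ts"))
            .filter(F.col("amount").isNotNull()))          # basic correctness control

(clean.write.mode("overwrite")
      .partitionBy("trade_date")
      .parquet("s3://example-bucket/curated/trades/"))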

Posted 2 months ago

Apply

2.0 - 7.0 years

3 - 7 Lacs

Thane, Navi Mumbai, Mumbai (All Areas)

Work from Office

Job Title: Data Analyst/Engineer
Location: Mumbai
Experience: 3-4 Years

Job Summary: We are seeking a skilled Data Analyst/Engineer with expertise in AWS S3 and Python to manage and process large datasets in a cloud environment. The ideal candidate will be responsible for developing efficient data pipelines, managing data storage, and optimizing data workflows in AWS. Your role will involve using your Python skills to automate data tasks.

Key Responsibilities:
Python Scripting and Automation:
• Develop Python scripts for automating data collection, transformation, and loading into cloud storage systems.
• Create robust ETL pipelines to move data between systems and perform data transformation.
• Use Python for interacting with AWS services, including S3 and other AWS resources.
Data Workflow Optimization:
• Design and implement efficient data workflows and pipelines in the AWS cloud environment.
• Monitor and optimize data processing to ensure quick and accurate delivery of datasets.
• Work closely with other teams to integrate data from various sources into S3 for analysis and reporting.
Cloud Services & Data Integration:
• Leverage other AWS services (e.g., Lambda, EC2, RDS) to manage and process data in a scalable way.
• Integrate data sources through APIs, ensuring real-time availability of critical data.

Required Skills & Qualifications:
• Technical Expertise: Strong experience managing and working with AWS S3 buckets and other AWS services. Advanced proficiency in Python, including experience with libraries such as boto3, Pandas, and others. Hands-on experience building and maintaining ETL pipelines for large datasets.
• Cloud Technologies: Solid understanding of AWS cloud architecture, including S3, Lambda, and EC2. Experience with AWS IAM (Identity and Access Management) for securing S3 buckets.
• Problem Solving & Automation: Proven ability to automate data workflows using Python. Strong analytical and problem-solving skills, with a focus on optimizing data storage and processing.

Preferred Qualifications:
• Bachelor's degree in Computer Science or Data Engineering.
• Experience with other AWS services, such as Glue, Redshift, or Athena.
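A minimal boto3/pandas sketch of the S3 automation described above; the bucket, keys, and column names are hypothetical placeholders.

```python
# Hypothetical sketch: pull a CSV from S3, transform it with pandas,
# and write the result back to a curated prefix.
import io
import boto3
import pandas as pd

s3 = boto3.client("s3")

obj = s3.get_object(Bucket="example-bucket", Key="raw/sales.csv")
df = pd.read_csv(io.BytesIO(obj["Body"].read()))

df["amount"] = df["amount"].fillna(0)                       # simple cleanup step
summary = df.groupby("region", as_index=False)["amount"].sum()

out = io.BytesIO()
summary.to_csv(out, index=False)
s3.put_object(Bucket="example-bucket",
              Key="curated/sales_by_region.csv",
              Body=out.getvalue())
```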

Posted 2 months ago

Apply

3.0 - 5.0 years

0 - 0 Lacs

Hyderabad, Pune, Bangalore Rural

Work from Office

Role & responsibilities: A day in the life of an Infoscion. As part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management and adherence to the organizational guidelines and processes. You would be a key contributor to building efficient programs/systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Posted 2 months ago

Apply

4.0 - 9.0 years

5 - 15 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Senior Data Engineer - Python: Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field. Must have a minimum of 4 years of relevant experience. Proficient in Python with hands-on experience building ETL pipelines for data extraction, transformation, and validation. Strong SQL skills for working with structured data. Familiar with Grafana or Kibana for data visualization and monitoring/dashboards. Experience with databases such as MongoDB, Elasticsearch, and MySQL. Comfortable working in Linux environments using common Unix tools. Hands-on experience with Git, Docker and virtual machines.
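One possible illustration of the extraction-and-validation work this listing names, using MongoDB as the source and MySQL as the target; all connection strings, database, collection, and table names are hypothetical.

```python
# Hypothetical sketch: extract documents from MongoDB, validate required
# fields, and load the valid rows into a MySQL table.
from pymongo import MongoClient
import mysql.connector

mongo = MongoClient("mongodb://localhost:27017")
docs = mongo["shop"]["orders"].find({}, {"_id": 0, "order_id": 1, "total": 1})

rows = [(d["order_id"], d["total"]) for d in docs
        if d.get("order_id") is not None and d.get("total") is not None]

conn = mysql.connector.connect(host="localhost", user="etl",
                               password="secret", database="warehouse")
cur = conn.cursor()
cur.executemany("INSERT INTO orders (order_id, total) VALUES (%s, %s)", rows)
conn.commit()
conn.close()
```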

Posted 2 months ago

Apply

7.0 - 12.0 years

5 - 15 Lacs

Hyderabad

Remote

5+ years' experience with strong SQL query/development skills
• Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks
• Hands-on experience with ETL tools

Posted 2 months ago

Apply

7.0 - 12.0 years

5 - 15 Lacs

Bengaluru

Remote

Role & responsibilities:
• Design, develop, and maintain Collibra workflows tailored to our project's specific needs.
• Collaborate with cross-functional teams to ensure seamless integration of Collibra with other systems.
• Educate team members on Collibra's features and best practices, or educate oneself on Collibra's features and best practices.
• Engage with customers to gather requirements and provide solutions that meet their needs.
• Stay updated with the latest developments in Collibra and data engineering technologies.

Must-Haves:
• Excellent communication skills in English (reading, writing, and speaking).
• Background in Data Engineering or related disciplines.
• Eagerness to learn and become proficient in Collibra and its features.
• Ability to understand and apply Collibra's use cases within the project scope.

Nice-to-Haves (preferred candidate profile):
• Previous experience with Collibra or similar data cataloguing software.
• Familiarity with workflow design and optimization.
• Experience in requirement engineering, particularly in customer-facing roles.
• Knowledge of other cataloguing software and their integration with Collibra.

Posted 2 months ago

Apply

10.0 - 20.0 years

15 - 30 Lacs

Noida

Remote

Dear Interviewer,

We are inviting applications to become a part-time remote interviewer on India's largest interview-as-a-service platform. We provide on-demand video interview services to employers and job-seekers. If you are interested in becoming a part-time remote interviewer on our platform, please review the details below and apply using the link: Azure Data Engineer Interviewer Application link. Please note that only applications that come through the above link will be considered for next steps; you are advised to use only the given link to become part of our platform.

About Risebird:
• Leading Interview-as-a-Service Platform: specializes in connecting companies with expert interviewers for technical and non-technical hiring needs.
• Opportunities for Experts: ideal for professionals exploring part-time, freelance, or moonlighting opportunities in interviewing.
• Monetize Idle Time: enables skilled individuals to earn by conducting interviews during their free hours.
• Extensive Interviewer Network: over 30,000 interviewers from 2,600+ companies have conducted 5 lakh+ interviews.
• Trusted by Fortune 500 Companies: preferred platform for many leading enterprises.
• High Earnings for Interviewers: over 25 crores paid to part-time interviewers in the last 5 years.
More details on https://risebird.io/

About the Interviewer role:
1. Confidentiality and data privacy: interviewer profiles are never shared with customers, never mapped to recruiters at the interviewer's current company, and never mapped to candidates from the interviewer's current company.
2. Payment flexibility: payments on specific dates, with the payment for every interview displayed upfront; you can take only those interviews that suit your payment expectations.
3. 100% remote: all interviews are conducted online and we will never request offline visits.
4. No unfair deductions: only 10% TDS is deducted and the remainder is transferred to your preferred account.
5. A TDS certificate is provided too.
6. Time flexibility: 6 AM to 12 AM on weekdays and weekends; it is not a forced schedule, and you decide which interviews you want to take, with no fixed rules.
7. Easy to use: one-time effort, a 15-minute call to share expectations, and 5-10 minutes to watch the portal video.
8. Employment opportunities: interviewers on our platform receive both part-time and full-time job offers based on the quality of interviews they conduct, while maintaining confidentiality. Offers are shared with interviewers, and only after their approval do we connect them back to the requester.
9. ROI: continuous part-time income in a highly confidential manner for the lifetime of your career, along with opportunities to create part-time/full-time employment opportunities.

Posted 2 months ago

Apply

10.0 - 20.0 years

15 - 30 Lacs

Noida

Remote

Dear Interviewer,

We are inviting applications to become a part-time remote interviewer on India's largest interview-as-a-service platform. We provide on-demand video interview services to employers and job-seekers. If you are interested in becoming a part-time remote interviewer on our platform, please review the details below and apply using the link: AWS Data Engineer Interviewer Application Link. Please note that only applications that come through the above link will be considered for next steps; you are advised to use only the given link to become part of our platform.

About Risebird:
• Leading Interview-as-a-Service Platform: specializes in connecting companies with expert interviewers for technical and non-technical hiring needs.
• Opportunities for Experts: ideal for professionals exploring part-time, freelance, or moonlighting opportunities in interviewing.
• Monetize Idle Time: enables skilled individuals to earn by conducting interviews during their free hours.
• Extensive Interviewer Network: over 30,000 interviewers from 2,600+ companies have conducted 5 lakh+ interviews.
• Trusted by Fortune 500 Companies: preferred platform for many leading enterprises.
• High Earnings for Interviewers: over 25 crores paid to part-time interviewers in the last 5 years.
More details on https://risebird.io/

About the Interviewer role:
1. Confidentiality and data privacy: interviewer profiles are never shared with customers, never mapped to recruiters at the interviewer's current company, and never mapped to candidates from the interviewer's current company.
2. Payment flexibility: payments on specific dates, with the payment for every interview displayed upfront; you can take only those interviews that suit your payment expectations.
3. 100% remote: all interviews are conducted online and we will never request offline visits.
4. No unfair deductions: only 10% TDS is deducted and the remainder is transferred to your preferred account.
5. A TDS certificate is provided too.
6. Time flexibility: 6 AM to 12 AM on weekdays and weekends; it is not a forced schedule, and you decide which interviews you want to take, with no fixed rules.
7. Easy to use: one-time effort, a 15-minute call to share expectations, and 5-10 minutes to watch the portal video.
8. Employment opportunities: interviewers on our platform receive both part-time and full-time job offers based on the quality of interviews they conduct, while maintaining confidentiality. Offers are shared with interviewers, and only after their approval do we connect them back to the requester.
9. ROI: continuous part-time income in a highly confidential manner for the lifetime of your career, along with opportunities to create part-time/full-time employment opportunities.

Posted 2 months ago

Apply

6.0 - 9.0 years

25 - 35 Lacs

Kochi, Chennai, Bengaluru

Work from Office

Experienced Data Engineer (Python, PySpark, Snowflake)

Posted 2 months ago

Apply

6.0 - 11.0 years

15 - 25 Lacs

Bengaluru

Remote

Location: Remote
Experience: 6-12 years
Immediate joiners preferred

Required Qualifications:
• Bachelor's degree in Computer Science, Information Systems, or a related field.
• 3-5 years of experience in data engineering, cloud architecture, or Snowflake administration.
• Hands-on experience with Snowflake features: Snowpipe, Streams, Tasks, External Tables, and Secure Data Sharing.
• Proficiency in SQL, Python, and data movement tools (e.g., AWS CLI, Azure Data Factory, Google Cloud Storage Transfer).
• Experience with data pipeline orchestration tools such as Apache Airflow, dbt, or Informatica.
• Strong understanding of cloud storage services (S3, Azure Blob, GCS) and working with external stages.
• Familiarity with network security, encryption, and data compliance best practices.
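To illustrate the Streams/Tasks pattern named above, a minimal sketch using the Snowflake Python connector; the account, credentials, warehouse, and object names are hypothetical placeholders.

```python
# Hypothetical sketch: create a stream on a raw table and a task that
# periodically inserts new rows into a curated table.
import snowflake.connector

conn = snowflake.connector.connect(account="my_account", user="etl_user",
                                   password="secret", warehouse="ETL_WH",
                                   database="ANALYTICS", schema="RAW")
cur = conn.cursor()

# Stream tracks change data on the raw table
cur.execute("CREATE OR REPLACE STREAM raw_orders_stream ON TABLE raw_orders")

# Task drains the stream on a schedule
cur.execute("""
    CREATE OR REPLACE TASK merge_orders
      WAREHOUSE = ETL_WH
      SCHEDULE  = '5 MINUTE'
    AS
      INSERT INTO curated_orders
      SELECT order_id, amount, order_ts FROM raw_orders_stream
""")
cur.execute("ALTER TASK merge_orders RESUME")   # tasks are created suspended
conn.close()
```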

Posted 2 months ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

Artify Talent Studio is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey.
• Liaising with coworkers and clients to elucidate the requirements for each task.
• Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
• Reformulating existing frameworks to optimize their functioning.
• Testing such structures to ensure that they are fit for use.
• Preparing raw data for manipulation by data scientists.
• Detecting and correcting errors in your work.
• Ensuring that your work remains backed up and readily accessible to relevant coworkers.
• Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 2 months ago

Apply

1.0 - 5.0 years

4 - 8 Lacs

Gurugram

Work from Office

Job Requirements
• 3-6 years of experience running medium to large scale production environments
• Proven programming/scripting skills in at least one language (e.g., Python, Java, Scala, JavaScript)
• Experience with any one of the cloud-based services and infrastructure offerings (AWS, GCP, Azure)
• Proficiency in writing analytical SQL queries
• Experience in building analytical tools that utilize data pipelines to provide key actionable insights
• Knowledge of big-data tools like Hadoop, Kafka, Spark, etc. would be a plus
• A proactive approach to spotting problems, areas for improvement, and performance bottlenecks

Posted 2 months ago

Apply

1.0 - 5.0 years

3 - 7 Lacs

Chandigarh

Work from Office

Key Responsibilities
• Assist in building and maintaining data pipelines on GCP using services like BigQuery, Dataflow, Pub/Sub, Cloud Storage, etc.
• Support data ingestion, transformation, and storage processes for structured and unstructured datasets.
• Participate in performance tuning and optimization of existing data workflows.
• Collaborate with data analysts, engineers, and stakeholders to ensure reliable data delivery.
• Document code, processes, and architecture for reproducibility and future reference.
• Debug issues in data pipelines and contribute to their resolution.
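A minimal sketch of one simple way (without Dataflow) to move Pub/Sub messages into a BigQuery staging table; the project, subscription, and table IDs are hypothetical placeholders.

```python
# Hypothetical sketch: pull a batch of Pub/Sub messages and append the
# JSON payloads to a BigQuery staging table.
import json
from google.cloud import pubsub_v1, bigquery

subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path("my-project", "events-sub")

response = subscriber.pull(subscription=sub_path, max_messages=100)
rows = [json.loads(m.message.data) for m in response.received_messages]

if rows:
    bq = bigquery.Client()
    bq.load_table_from_json(rows, "my-project.staging.events").result()
    ack_ids = [m.ack_id for m in response.received_messages]
    subscriber.acknowledge(subscription=sub_path, ack_ids=ack_ids)
```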

Posted 2 months ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

Position Overview
We are looking for an experienced Data Engineer to join our dynamic team. If you are passionate about building scalable software solutions, have expertise in system design and data structures, and are familiar with various databases, we would love to hear from you. ShyftLabs is a growing data product company that was founded in early 2020 and works primarily with Fortune 500 companies. We deliver digital solutions built to help accelerate the growth of businesses in various industries, by focusing on creating value through innovation.

Job Description
• Act as the first point of contact for data issues in the Master Data Management (MDM) system.
• Investigate and resolve data-related issues, such as duplicate data or missing records, ensuring timely and accurate updates.
• Coordinate with the Product Manager, QA Lead, and Technology Lead to prioritize and address tickets effectively.
• Work on data-related issues, ensuring compliance with regulations.
• Build and optimize data models to ensure efficient storage and query performance, including work with Snowflake tables.
• Write complex SQL queries for data manipulation and retrieval.
• Collaborate with other teams to diagnose and fix more complex issues that may require code changes or system updates.
• Utilize AWS resources like CloudWatch, Lambda, SQS, and Kinesis Streams for data storage, transformation, and analysis.
• Update and maintain the knowledge base to document common issues and their solutions.
• Monitor system logs and alerts to proactively identify potential issues before they affect customers.
• Participate in team meetings to provide updates on ongoing issues and contribute to process improvements.
• Maintain documentation of data engineering processes, data models, and system configurations.

Basic Qualifications
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• Minimum of 3 years of experience in data engineering, preferably related to MDM systems.
• Strong expertise in SQL and other database query languages.
• Hands-on experience with data warehousing solutions and relational database management systems (RDBMS).
• Proficiency in ETL tools and data pipeline construction.
• Familiarity with AWS services.
• Excellent programming skills, preferably in Python.
• Strong understanding of data privacy regulations like DSAR and CCPA.
• Good communication skills, both written and verbal, with the ability to articulate complex data concepts to non-technical stakeholders.
• Strong problem-solving skills and attention to detail.

We are proud to offer a competitive salary alongside a strong healthcare insurance and benefits package. The role is preferably hybrid, with 3 days per week spent in office. We pride ourselves on the growth of our employees, offering extensive learning and development resources.

Posted 2 months ago

Apply

1.0 - 5.0 years

3 - 7 Lacs

Gurugram

Work from Office

Key Responsibilities
• Assist in building and maintaining data pipelines on GCP using services like BigQuery, Dataflow, Pub/Sub, Cloud Storage, etc.
• Support data ingestion, transformation, and storage processes for structured and unstructured datasets.
• Participate in performance tuning and optimization of existing data workflows.
• Collaborate with data analysts, engineers, and stakeholders to ensure reliable data delivery.
• Document code, processes, and architecture for reproducibility and future reference.
• Debug issues in data pipelines and contribute to their resolution.

Posted 2 months ago

Apply

8.0 - 13.0 years

20 - 35 Lacs

Kolkata, Hyderabad, Bengaluru

Hybrid

With a startup spirit and 115,000+ curious and courageous minds, we have the expertise to go deep with the world's biggest brands—and we have fun doing it. We dream in digital, dare in reality, and reinvent the ways companies work to make an impact far bigger than just our bottom line. We're harnessing the power of technology and humanity to create meaningful transformation that moves us forward in our pursuit of a world that works better for people. Now, we're calling upon the thinkers and doers, those with a natural curiosity and a hunger to keep learning, keep growing. People who thrive on fearlessly experimenting, seizing opportunities, and pushing boundaries to turn our vision into reality. And as you help us create a better world, we will help you build your own intellectual firepower. Welcome to the relentless pursuit of better.

Inviting applications for the role of Lead Consultant, AWS Data Lake!

Responsibilities
• Knowledge of Data Lake on AWS services, with exposure to creating External Tables and Spark programming; the person should be able to work on Python programming.
• Writing effective and scalable Python code for automations, data wrangling and ETL.
• Designing and implementing robust applications and working on automations using Python code.
• Debugging applications to ensure low latency and high availability.
• Writing optimized custom SQL queries.
• Experienced in team and client handling.
• Strong documentation skills related to systems, design, and delivery.
• Integrate user-facing elements into applications.
• Knowledge of External Tables and Data Lake concepts.
• Able to do task allocation, collaborate on status exchanges and bring things to successful closure.
• Implement security and data protection solutions.
• Must be capable of writing SQL queries for validating dashboard outputs.
• Must be able to translate visual requirements into detailed technical specifications.
• Well versed in handling Excel, CSV, text, JSON and other unstructured file formats using Python.
• Expertise in at least one popular Python framework (like Django, Flask or Pyramid).
• Good understanding of and exposure to Git, Bamboo, Confluence and Jira.
• Good with DataFrames and ANSI SQL using pandas.
• Team player with a collaborative approach and excellent communication skills.

Qualifications we seek in you!
Minimum Qualifications
• BE/B Tech/MCA
• Excellent written and verbal communication skills
• Good knowledge of Python and PySpark

Preferred Qualifications/Skills
• Strong ETL knowledge on any ETL tool (good to have).
• Good to have knowledge of AWS cloud and Snowflake.
• Knowledge of PySpark is a plus.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 2 months ago

Apply

3.0 - 8.0 years

20 - 30 Lacs

Chennai

Hybrid

Job Title: Senior Data Engineer - Data Products
Location: Chennai, India
Open Roles: 2
Mode: Hybrid

About the Role
Are you a hands-on data engineer who thrives on solving complex data challenges and building modern cloud-native solutions? We're looking for two experienced Senior Data Engineers to join our growing Data Engineering team. This is an exciting opportunity to work on cutting-edge data platform initiatives that power advanced analytics, AI solutions, and digital transformation across a global enterprise. In this role, you'll help design and build reusable, scalable, and secure data pipelines on a multi-cloud infrastructure, while collaborating with cross-functional teams in a highly agile environment.

What You'll Do
• Design and build robust data pipelines and ETL frameworks using modern tools and cloud platforms.
• Implement lakehouse architecture (Bronze/Silver/Gold layers) and support data product publishing via Unity Catalog.
• Work with structured and unstructured enterprise data including ERP, CRM, and product data systems.
• Optimize pipeline performance, reliability, and security across AWS and Azure environments.
• Automate infrastructure using IaC tools like Terraform and AWS CDK.
• Collaborate closely with data scientists, analysts, and platform teams to deliver actionable data products.
• Participate in agile ceremonies, conduct code reviews, and contribute to team knowledge sharing.
• Ensure compliance with data privacy, cybersecurity, and governance policies.

What You Bring
• 3+ years of hands-on experience in data engineering roles.
• Strong command of SQL and Python; experience with Scala is a plus.
• Proficiency in cloud platforms (AWS, Azure), Databricks, DBT, Airflow, and version control tools like GitLab.
• Hands-on experience implementing lakehouse architectures and multi-hop data flows using Delta Lake.
• Background in working with enterprise data systems like SAP, Salesforce, and other business-critical platforms.
• Familiarity with DevOps, DataOps, and agile delivery methods (Jira, Confluence).
• Strong understanding of data security, privacy compliance, and production-grade pipeline management.
• Excellent communication skills and ability to work in global, multicultural teams.

Why Join Us?
• Opportunity to work with modern data technologies in a complex, enterprise-scale environment.
• Be part of a collaborative, forward-thinking team that values innovation and continuous learning.
• Hybrid work model that offers both flexibility and team engagement.
• A role where you can make a real impact by contributing to digital transformation and data-driven decision-making.
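A minimal sketch of a Bronze-to-Silver hop in the lakehouse layering this role describes, assuming Delta Lake; the storage paths and column names are hypothetical placeholders.

```python
# Hypothetical sketch: promote raw (Bronze) Delta data to a cleaned
# Silver table as part of a multi-hop lakehouse flow.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

bronze = spark.read.format("delta").load("/lake/bronze/crm_accounts")

silver = (bronze.dropDuplicates(["account_id"])
                .filter(F.col("account_id").isNotNull())
                .withColumn("ingested_at", F.current_timestamp()))

(silver.write.format("delta")
       .mode("overwrite")
       .save("/lake/silver/crm_accounts"))
```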

Posted 2 months ago

Apply

8.0 - 13.0 years

22 - 25 Lacs

Hyderabad

Work from Office

Role & responsibilities
• Design, build and maintain complex ELT jobs that deliver business value
• Translate high-level business requirements into technical specs
• Ingest data from disparate sources into the data lake and data warehouse
• Cleanse and enrich data and apply adequate data quality controls
• Develop re-usable tools to help streamline the delivery of new projects
• Collaborate closely with other developers and provide mentorship
• Evaluate and recommend tools, technologies, processes and reference architectures
• Work in an Agile development environment, attending daily stand-up meetings and delivering incremental improvements

Basic Qualifications
• Bachelor's degree in computer science, engineering or a related field
• Data: 5+ years of experience with data analytics and warehousing
• SQL: Deep knowledge of SQL and query optimization
• ELT: Good understanding of ELT methodologies and tools
• Troubleshooting: Experience with troubleshooting and root cause analysis to determine and remediate potential issues
• Communication: Excellent communication, problem solving, organizational and analytical skills
• Able to work independently and to provide leadership to small teams of developers

Preferred Qualifications
• Master's degree in computer science, engineering or a related field
• Cloud: Experience working in a cloud environment (e.g. AWS)
• Python: Hands-on experience developing with Python
• Advanced Data Processing: Experience using data processing technologies such as Apache Spark or Kafka
• Workflow: Good knowledge of orchestration and scheduling tools (e.g. Apache Airflow)
• Reporting: Experience with data reporting (e.g. MicroStrategy, Tableau, Looker) and data cataloging tools (e.g. Alation)
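For context on the orchestration tools named in the preferred qualifications, a minimal Airflow DAG sketch wiring an ingest step before a transform step; the DAG id, schedule, and task logic are hypothetical placeholders.

```python
# Hypothetical sketch: a two-step Airflow DAG for a daily ELT job
# (uses the Airflow 2.4+ "schedule" argument).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("load raw files into the data lake")         # placeholder logic

def transform():
    print("run SQL transformations in the warehouse")  # placeholder logic

with DAG(dag_id="daily_elt",
         start_date=datetime(2025, 1, 1),
         schedule="@daily",
         catchup=False) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    ingest_task >> transform_task
```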

Posted 2 months ago

Apply

4.0 - 7.0 years

8 - 15 Lacs

Hyderabad

Hybrid

We are seeking a highly motivated Senior Data Engineer or Data Engineer within Envoy Global's tech team to join us on a full-time, permanent basis. This role is responsible for designing, developing, and documenting data pipelines and ETL jobs to enable data migration, data integration and data warehousing. That includes ETL jobs, reports, dashboards and data pipelines. The person in this role will work closely with the Data Architect, BI & Analytics team and Engineering teams to deliver data assets for Data Security, DW and Analytics.

As our Senior Data Engineer or Data Engineer, you will be required to:
• Design, build, test and maintain cloud-based data pipelines to acquire, profile, cleanse, consolidate, transform and integrate data
• Design and develop ETL processes for the Data Warehouse lifecycle (staging of data, ODS data integration, EDW and data marts) and Data Security (data archival, data obfuscation, etc.)
• Build complex SQL queries on large datasets and performance tune as needed
• Design and develop data pipelines and ETL jobs using SSIS and Azure Data Factory
• Maintain ETL packages and supporting data objects for our growing BI infrastructure
• Carry out monitoring, tuning, and database performance analysis
• Facilitate integration of our application with other systems by developing data pipelines
• Prepare key documentation to support the technical design in technical specifications
• Collaborate and work alongside other technical professionals (BI Report developers, Data Analysts, Architect)
• Communicate clearly and effectively with stakeholders

To apply for this role, you should possess the following skills, experience and qualifications:
• Design, develop, and document data pipelines and ETL jobs: create and maintain robust data pipelines and ETL (Extract, Transform, Load) processes to support data migration, integration, and warehousing.
• Data assets delivery: collaborate with Data Architects, BI & Analytics teams, and Engineering teams to deliver high-quality data assets for data security, data warehousing (DW), and analytics.
• ETL jobs, reports, dashboards, and data pipelines: develop and manage ETL jobs, generate reports, create dashboards, and ensure the smooth operation of data pipelines.
• 3+ years of experience as an SSIS ETL developer, Data Engineer or a related role
• 2+ years of experience using Azure Data Factory
• Knowledgeable in data modelling and data warehouse concepts
• Experience working with the Azure stack
• Demonstrated ability to write SQL/T-SQL queries to retrieve/modify data
• Knowledge and know-how to troubleshoot potential issues, and experience with best practices around database operations
• Ability to work in an Agile environment

Should you have a deep passion for technology and a desire to thrive in a rapidly evolving and creative environment, we would be delighted to receive your application. Please provide your updated resume, highlighting your relevant experience and the reasons you believe you would be a valuable member of our team. We look forward to reviewing your submission.

Posted 2 months ago

Apply

1.0 - 5.0 years

7 - 15 Lacs

Pune

Work from Office

Hi All,

Please find below the mandatory skills based on the job description for Data Engineer (Grade I & J); these are the mandatory (essential) skills required:

Mandatory Skills
Data Infrastructure & Engineering
• Designing, building, productionizing, and maintaining scalable and reliable data infrastructure and data products.
• Experience with data modeling, pipeline idempotency, and operational observability.
Programming Languages
• Proficiency in one or more object-oriented programming languages such as Python, Scala, Java, or C#.
Database Technologies
• Strong experience with SQL and NoSQL databases, query structures and design best practices, and scalability, readability, and reliability in database design.
Distributed Systems
• Experience implementing large-scale distributed systems in collaboration with senior team members.
Software Engineering Best Practices
• Technical design and reviews; unit testing, monitoring, and alerting; code versioning, code reviews, and documentation; CI/CD pipeline development and maintenance.
Security & Compliance
• Deploying secure and well-tested software and data assets; meeting privacy and compliance requirements.
Site Reliability Engineering
• Service reliability, on-call rotations, defining and maintaining SLAs; infrastructure as code and containerized deployments.
Communication & Collaboration
• Strong verbal and written communication skills; ability to work in cross-disciplinary teams.
Mindset & Education
• Continuous learning and improvement mindset; BS degree in Computer Science or a related field (or equivalent experience).

Thanks & Regards
Sushma Patil
HR Coordinator
sushma.patil@in.experis.com

Posted 2 months ago

Apply

6.0 - 10.0 years

16 - 31 Lacs

Bhubaneswar, Pune, Bengaluru

Work from Office

About Client: Hiring for one of the most prestigious multinational corporations.

Job Title: Data Engineer
Experience: 6 to 10 years

Key Responsibilities:
• Design, develop, and maintain large-scale batch and real-time data processing systems using PySpark and Scala.
• Build and manage streaming pipelines using Apache Kafka.
• Work with structured and semi-structured data sources including MongoDB, flat files, APIs, and relational databases.
• Optimize and scale data pipelines to handle large volumes of data efficiently.
• Implement data quality, data governance, and monitoring frameworks.
• Collaborate with data scientists, analysts, and other engineers to support various data initiatives.
• Develop and maintain robust, reusable, and well-documented data engineering solutions.
• Troubleshoot production issues, identify root causes, and implement fixes.
• Stay up to date with emerging technologies in the big data and streaming space.

Technical Skills:
• 6 to 10 years of experience in Data Engineering or a similar role.
• Strong hands-on experience with Apache Spark (PySpark) and Scala.
• Proficiency in designing and managing Kafka streaming architectures.
• Experience with MongoDB, including indexing, aggregation, and schema design.
• Solid understanding of distributed computing, ETL/ELT processes, and data warehousing concepts.
• Experience with cloud platforms (AWS, Azure, or GCP) is a strong plus.
• Strong programming and scripting skills (Python, Scala, or Java).
• Familiarity with workflow management tools like Airflow, Luigi, or similar is a plus.
• Excellent problem-solving skills and ability to work independently or within a team.
• Strong communication skills and the ability to collaborate effectively across teams.

Notice period: 30/45/60/90 days
Location: Bhubaneswar, Bangalore, Pune
Mode of Work: WFO (Work From Office)

Thanks & Regards,
SWETHA
Black and White Business Solutions Pvt. Ltd.
Bangalore, Karnataka, INDIA.
Contact Number: 8067432433
rathy@blackwhite.in | www.blackwhite.in
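A minimal Spark Structured Streaming sketch of the Kafka-based pipeline work described in this listing; the broker address, topic, schema, and paths are hypothetical placeholders.

```python
# Hypothetical sketch: consume a Kafka topic with Spark Structured Streaming
# and write the parsed events to Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

schema = (StructType()
          .add("event_id", StringType())
          .add("amount", DoubleType()))

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "payments")
       .load())

events = (raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
             .select("e.*"))

query = (events.writeStream.format("parquet")
         .option("path", "/data/streams/payments")
         .option("checkpointLocation", "/data/checkpoints/payments")
         .start())
query.awaitTermination()
```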

Posted 2 months ago

Apply

2.0 - 5.0 years

2 - 4 Lacs

Mumbai, Mumbai Suburban, Mumbai (All Areas)

Work from Office

Role & responsibilities
• 3 to 4+ years of hands-on experience in SQL database design, data architecture, ETL, Data Warehousing, Data Mart, Data Lake, Big Data, Cloud and Data Governance domains.
• Take ownership of the technical aspects of implementing data pipeline & migration requirements, ensuring that the platform is being used to its fullest potential through designing and building applications around business stakeholder needs.
• Interface directly with stakeholders to gather requirements and own the automated end-to-end data engineering solutions.
• Implement data pipelines to automate the ingestion, transformation, and augmentation of structured, unstructured and real-time data, and provide best practices for pipeline operations.
• Troubleshoot and remediate data quality issues raised by pipeline alerts or downstream consumers. Implement Data Governance best practices.
• Create and maintain clear documentation on data models/schemas as well as transformation/validation rules.
• Implement tools that help data consumers to extract, analyze, and visualize data faster through data pipelines.
• Implement data security, privacy, and compliance protocols to ensure safe data handling in line with regulatory requirements.
• Optimize data workflows and queries to ensure low latency, high throughput, and cost efficiency.
• Lead the entire software lifecycle including hands-on development, code reviews, testing, deployment, and documentation for batch ETLs.
• Work directly with our internal product/technical teams to ensure that our technology infrastructure is seamlessly and effectively integrated.
• Migrate current data applications & pipelines to Cloud, leveraging new technologies in future.

Preferred candidate profile
• Graduate with an Engineering Degree (CS/Electronics/IT) / MCA / MCS or equivalent with substantial data engineering experience.
• 3+ years of recent hands-on experience with a modern programming language (Scala, Python, Java) is required; Spark/PySpark is preferred.
• Experience with configuration management and version control apps (e.g., Git) and experience working within a CI/CD framework is a plus.
• 3+ years of recent hands-on SQL programming experience in a Big Data environment is required.
• Working knowledge of PostgreSQL, RDBMS, NoSQL and columnar databases.
• Experience developing and maintaining ETL applications and data pipelines using big data technologies is required; Apache Kafka, Spark and Airflow experience is a must.
• Knowledge of API and microservice integration with applications.
• Experience with containerization (e.g., Docker) and orchestration (e.g., Kubernetes).
• Experience building data solutions for Power BI and web visualization applications.
• Experience with Cloud is a plus.
• Experience in managing multiple projects and stakeholders with excellent communication and interpersonal skills.
• Ability to develop and organize high-quality documentation.
• Superior analytical skills and a strong sense of ownership in your work.
• Collaborate with data scientists on several projects. Contribute to development and support of analytics including AI/ML.
• Ability to thrive in a fast-paced environment and to manage multiple, competing priorities simultaneously.
• Prior Energy & Utilities industry experience is a big plus.

Experience (min-max in years): 3+ years of core/relevant experience
Location: Mumbai (Onsite)

Posted 2 months ago

Apply