
271 Data Engineer Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 7.0 years

0 Lacs

coimbatore, tamil nadu

On-site

As a Data Engineer at our IT Services Organization, you will be responsible for developing and maintaining scalable data processing systems using Apache Spark and Python. Your role will involve designing and implementing Big Data solutions that integrate data from various sources, including RDBMS, NoSQL databases, and cloud services. Additionally, you will lead a team of data engineers to ensure efficient project execution and adherence to best practices.

Your key responsibilities will include optimizing Spark jobs for performance and scalability, collaborating with cross-functional teams to gather requirements, and delivering data solutions that meet business needs. You will also be involved in implementing ETL processes and frameworks to facilitate data integration and utilizing cloud data services such as GCP for data storage and processing. Applying Agile methodologies to manage project timelines and deliverables will be an essential part of your role.

To excel in this position, you should have proficiency in PySpark and Apache Spark, along with strong knowledge of Python for data engineering tasks. Hands-on experience with Google Cloud Platform (GCP) and expertise in designing and optimizing Big Data pipelines are crucial. Leadership skills in data engineering team management, understanding of ETL frameworks and distributed computing, familiarity with cloud-based data services, and experience with Agile delivery are also required.

We are looking for candidates with a Bachelor's degree in Computer Science, Information Technology, or a related field. It is essential to stay updated with the latest trends and technologies in Big Data and cloud computing to contribute effectively to our projects. If you are passionate about data engineering and eager to work in a dynamic and innovative environment, we encourage you to apply for this exciting opportunity.
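A role like this is anchored in Spark-based ETL, so a small illustration may help readers new to the stack. The following PySpark sketch shows the extract-transform-load shape such pipelines typically take; the bucket paths, column names, and partitioning choice are hypothetical placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical paths and column names, for illustration only.
RAW_PATH = "gs://example-bucket/raw/orders/"
CURATED_PATH = "gs://example-bucket/curated/orders/"

spark = (
    SparkSession.builder
    .appName("orders-etl-example")
    .getOrCreate()
)

# Extract: read raw CSV data from cloud storage.
raw = spark.read.option("header", True).csv(RAW_PATH)

# Transform: type the amount column, derive a date, and drop bad rows.
curated = (
    raw
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount").isNotNull())
)

# Load: write partitioned Parquet; partitioning by date helps downstream pruning.
(
    curated
    .repartition("order_date")
    .write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet(CURATED_PATH)
)
```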

Posted 1 day ago

Apply

6.0 - 10.0 years

0 Lacs

hyderabad, telangana

On-site

You will be responsible for working as an AWS Data Engineer at YASH Technologies. Your role will involve performing tasks related to data collection, processing, storage, and integration. It is essential to have proficiency in data Extract-Transform-Load (ETL) processes and data pipeline setup, as well as knowledge of database and data warehouse technologies on the AWS cloud platform. Prior experience in handling time-series and unstructured data types, such as image data, is a necessary requirement for this position. Additionally, you should have experience in developing data analytics software on the AWS cloud, either as a full-stack or back-end developer. Skills in software quality assessment, testing, and API integration are also crucial for this role.

Working at YASH, you will have the opportunity to build a career in a supportive and inclusive team environment. The company focuses on continuous learning and growth by providing career-oriented skilling models and utilizing technology for upskilling and reskilling activities. You will be part of a Hyperlearning workplace that is grounded on the principles of flexible work arrangements, emotional positivity, self-determination, trust, transparency, open collaboration, and support for achieving business goals. YASH Technologies offers stable employment with a great atmosphere and an ethical corporate culture.

Posted 2 days ago

Apply

8.0 - 13.0 years

30 - 45 Lacs

Pune

Work from Office

Extensive hands-on experience with the OSIsoft PI System (PI Data Archive, AF, PI Vision) is mandatory. The role is for a Data Engineer with a strong focus on industrial data and IoT. Required skills: SQL, Python, PowerShell.

Posted 3 days ago

Apply

9.0 - 14.0 years

7 - 14 Lacs

Hyderabad, Pune

Hybrid

Role & responsibilities / key skills required:

  • 8+ years of hands-on experience in cloud application architecture, with a focus on creating scalable and reliable software systems
  • 8+ years of experience using Google Cloud Platform (GCP), including (but not restricted to) services such as BigQuery, Cloud SQL, Firestore, and Cloud Composer
  • Experience with security, identity and access management; networking protocols such as TCP/IP and HTTPS; network security design including segmentation, encryption, logging, and monitoring; and network topologies, load balancing, and segmentation
  • Python for REST APIs and microservices; design and development guidance for Python with GCP, Cloud SQL/PostgreSQL, and BigQuery; integration of Python APIs with front-end applications built on React JS
  • Unit testing frameworks: Python (unittest, pytest) and Java (JUnit, Spock, Groovy)
  • DevOps automation processes (e.g., Jenkins, Docker deployments) and code deployments on VMs; validating the overall solution from the perspective of infrastructure performance, scalability, security, and capacity, and creating effective mitigation plans
  • Automation technologies: Terraform or Google Cloud Deployment Manager, Ansible
  • Implementing solutions and processes to manage cloud costs
  • Experience providing solutions for web applications; requirements and design knowledge of React JS, Elastic Cache, GCP IAM, Managed Instance Groups, VMs, and GKE
  • Owning the end-to-end delivery of solutions, including developing, testing, and releasing Infrastructure as Code
  • Translating business requirements/user stories into practical, scalable solutions that leverage the functionality and best practices of HSBC
  • Executing technical feasibility assessments, solution estimations, and proposal development for moving identified workloads to GCP
  • Designing and implementing secure, scalable, and innovative solutions to meet the bank's requirements
  • Ability to interact with and influence all organizational levels on technical or business solutions; Certified Google Cloud Architect would be an add-on
  • Creating and owning scaling, capacity planning, configuration management, and monitoring processes and procedures
  • Creating, implementing, and using cloud-native solutions; leading the adoption of new cloud technologies and establishing best practices for them
  • Experience establishing technical strategy and architecture at the enterprise level; experience leading GCP cloud project delivery
  • Collaborating with IT security, architecture, DevOps, data, and integration teams to monitor cloud privacy and ensure best practices are followed throughout cloud adoption
  • Responding to technical issues and providing guidance to the technical team

Mandatory skills: GCP Storage, GCP BigQuery, GCP DataProc, GCP Vertex AI, GCP Spanner, GCP Dataprep, GCP Datastream, Google Analytics Hub, GCP Dataform, GCP Dataplex/Catalog, GCP Cloud Datastore/Firestore, GCP Datafusion, GCP Pub/Sub, GCP Cloud SQL, GCP Cloud Composer, Google Looker, GCP Data Architecture, Google Cloud IAM, GCP Bigtable, GCP Dataflow

Posted 3 days ago

Apply

2.0 - 6.0 years

0 Lacs

haryana

On-site

As a Data Analyst with Market Research and Web Scraping skills at our company located in Udyog Vihar Phase-1, Gurgaon, you will be expected to leverage your 2-5 years of experience in data analysis, particularly in competitive analysis and market research within the fashion/garment/apparel industry. A Bachelor's degree in Data Science, Computer Science, Statistics, Business Analytics, or a related field is required, while advanced degrees or certifications in data analytics or market research are considered a plus.

Your main responsibility will be to analyze large datasets to identify trends, patterns, and insights related to market trends and competitor performance. You will conduct quantitative and qualitative analyses to support decision-making in product development and strategy. Additionally, you will perform in-depth market research to track competitor performance, emerging trends, and customer preferences. You will also design and implement data scraping solutions to gather competitor data from websites, ensuring compliance with legal standards and respect for website terms of service. Creating and maintaining organized databases with market and competitor data for easy access and retrieval will be part of your routine, along with collaborating closely with cross-functional teams to align data insights with company objectives.

To excel in this role, you should have proven experience with data scraping tools such as BeautifulSoup, Scrapy, or Selenium, proficiency in SQL, Python, or R for data analysis and manipulation, and experience with data visualization tools like Tableau, Power BI, or D3.js. Strong analytical skills and the ability to interpret data to draw insights and make strategic recommendations are essential.

If you are passionate about data analysis, market research, and web scraping and possess the required technical skills and analytical mindset, we encourage you to apply by sending your updated resume with current salary details to jobs@glansolutions.com. For any inquiries, please contact Satish at 8802749743 or visit our website at www.glansolutions.com. Join us on this exciting journey of leveraging data to drive strategic decisions and make a meaningful impact in the fashion/garment/apparel industry.
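Since the posting names BeautifulSoup, Scrapy, and Selenium, here is a minimal, hedged sketch of the requests-plus-BeautifulSoup pattern it refers to. The URL, CSS selectors, and output fields are placeholders, and any real scraper should honour robots.txt and the target site's terms of service, as the description itself stresses.

```python
import csv
import requests
from bs4 import BeautifulSoup

# Placeholder URL and selectors; adjust to the actual site being researched.
URL = "https://example.com/new-arrivals"
HEADERS = {"User-Agent": "market-research-bot/0.1 (contact@example.com)"}

response = requests.get(URL, headers=HEADERS, timeout=30)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")

rows = []
for card in soup.select(".product-card"):        # hypothetical CSS class
    name = card.select_one(".product-name")
    price = card.select_one(".product-price")
    if name and price:
        rows.append({"name": name.get_text(strip=True),
                     "price": price.get_text(strip=True)})

# Persist the scraped records for downstream analysis in SQL, Python, or R.
with open("competitor_products.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["name", "price"])
    writer.writeheader()
    writer.writerows(rows)
```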

Posted 3 days ago

Apply

4.0 - 9.0 years

10 - 20 Lacs

Hyderabad

Work from Office

Greetings from TechnoGen!!! Thank you for taking the time to review this opportunity; we believe your experience and expertise are relevant to a current opening with our clients.

About TechnoGen: TechnoGen, Inc. is an ISO 9001:2015, ISO 20000-1:2011, ISO 27001:2013, and CMMI Level 3 Global IT Services Company headquartered in Chantilly, Virginia. TechnoGen, Inc. (TGI) is a Minority & Women-Owned Small Business with over 20 years of experience providing end-to-end IT Services and Solutions to the public and private sectors. TGI provides highly skilled and certified professionals and has successfully executed more than 345 projects. TechnoGen is committed to helping our clients solve complex problems and achieve their goals, on time and under budget. LinkedIn: https://www.linkedin.com/company/technogeninc/about/

Job Title: Data Engineer IT Quality
Required Experience: 4+ years
Location: Hyderabad

Job Summary: We are looking for a proactive and technically skilled Data Engineer to lead data initiatives and provide application support for the Quality, Consumer Services, and Sustainability domains. The Data Engineer in the Quality area is responsible for designing, developing, and maintaining data integration solutions to support quality processes. This role focuses on leveraging ETL tools such as Informatica Cloud, Ascend, Google Cloud Dataflow, and Composer, along with Python and Spark programming, to ensure seamless data flow, transformation, and integration across quality systems. The position is offsite and requires collaboration with business partners and IT teams to deliver end-to-end data solutions that meet regulatory and business requirements. The candidate must be willing to work on site 4 days a week in Hyderabad, during the US EST time zone.

Key Responsibilities:

Data Integration and ETL Development: Design and implement robust ETL pipelines using tools like Informatica Cloud, Ascend, Google BigQuery, Google Cloud Dataflow, and Composer to integrate data from quality systems (e.g., Veeva Vault, QMS, GBQ, PLM, Order Management systems). Develop and optimize data transformation workflows to ensure accurate, timely, and secure data processing. Use Python for custom scripting, data manipulation, and automation of ETL processes.

Data Pipeline Support and Maintenance: Monitor, troubleshoot, and resolve issues in data pipelines, ensuring high availability and performance. Implement hotfixes, enhancements, and minor changes to existing ETL workflows to address defects or evolving business needs. Ensure data integrity, consistency, and compliance with regulatory standards.

Collaboration and Stakeholder Engagement: Work closely with quality teams, business analysts, and IT stakeholders to gather requirements and translate them into technical data solutions. Collaborate with cross-functional teams to integrate quality data with other enterprise systems, such as PLM, QMS, ERP, or LIMS. Communicate effectively with remote teams to provide updates, resolve issues, and align project deliverables.

Technical Expertise: Maintain proficiency in ETL tools (Informatica Cloud, Ascend, Dataflow, Composer, GBQ) and Python for data engineering tasks. Design scalable and efficient data models to support quality reporting, analytics, and compliance requirements. Implement best practices for data security, version control, and pipeline orchestration.

Documentation and Training: Create and maintain detailed documentation for ETL processes, data flows, and system integrations. Provide guidance and training to junior team members or end users on data integration processes and tools.

Qualifications:

Education: Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related field.

Experience: 4+ years of experience as a Data Engineer, with a focus on data integration in quality or regulated environments. Hands-on experience with ETL tools such as Informatica Cloud, Ascend, Google Cloud Dataflow, and Composer. Proficiency in Python for data processing, scripting, and automation. Experience working with Veeva applications is a plus.

Technical Skills: Expertise in designing and optimizing ETL pipelines using Informatica Cloud, Ascend, Dataflow, or Composer. Strong Python programming skills for data manipulation, automation, and integration. Familiarity with cloud platforms (e.g., Google Cloud, AWS, Azure) and data integration patterns (e.g., APIs, REST, SQL). Knowledge of database systems (e.g., SQL Server, Oracle, BigQuery) and data warehousing concepts. Experience with Agile methodologies and tools like JIRA or Azure DevOps.

Soft Skills: Excellent communication and collaboration skills to work effectively with remote teams and business partners. Strong problem-solving and analytical skills to address complex data integration challenges. Ability to manage multiple priorities and deliver high-quality solutions in a fast-paced environment. Ability to work effectively in a multicultural environment and manage teams across different time zones.

Preferred Qualifications: Experience working in regulated environments. Advanced degrees or certifications (e.g., Informatica Cloud, Google Cloud Professional Data Engineer) are a plus. Experience with Agile or hybrid delivery models.

About Us: We are a leading organization committed to leveraging technology to drive business success. Our team is dedicated to innovation, collaboration, and delivering exceptional results. Join us and be a part of a dynamic and forward-thinking company.

How to Apply: Interested candidates are invited to submit their resume and cover letter detailing their relevant experience and qualifications.

Best Regards,
Syam.M | Sr. IT Recruiter
syambabu.m@technogenindia.com
www.technogenindia.com | Follow us on LinkedIn

Posted 3 days ago

Apply

2.0 - 6.0 years

0 Lacs

karnataka

On-site

NTT DATA is looking for a Databricks Developer to join their team in Bangalore, Karnataka, India. As a Databricks Developer, your responsibilities will include pushing data domains into a massive repository and building a large data lake, leveraging Databricks heavily.

To be considered for this role, you should have at least 3 years of experience in a Data Engineer or Software Engineer role. An undergraduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field is required, while a graduate degree is preferred. You should also have experience with data pipeline and workflow management tools, advanced working SQL knowledge, and familiarity with relational databases. Additionally, an understanding of data warehouse (DWH) systems, ELT and ETL patterns, data models, and transforming data into various models is essential. You should be able to build processes supporting data transformation, data structures, metadata, dependency, and workload management. Experience with message queuing, stream processing, and highly scalable big data stores is also necessary. Preferred qualifications include experience with Azure cloud services such as ADLS, ADF, ADLA, and AAS. The role also requires a minimum of 2 years of experience in relevant skills.

NTT DATA is a trusted global innovator of business and technology services with a commitment to helping clients innovate, optimize, and transform for long-term success. They serve 75% of the Fortune Global 100 and have a diverse team of experts in more than 50 countries. As a Global Top Employer, NTT DATA offers services in business and technology consulting, data and artificial intelligence, industry solutions, and the development, implementation, and management of applications, infrastructure, and connectivity. They are known for providing digital and AI infrastructure solutions and are part of the NTT Group, investing over $3.6 billion each year in R&D to support organizations and society in moving confidently into the digital future. Visit their website at us.nttdata.com for more information.

Posted 3 days ago

Apply

10.0 - 17.0 years

12 - 17 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Work from Office

POSITION OVERVIEW: We are seeking an experienced and highly skilled Data Engineer with deep expertise in Microsoft Fabric, MS-SQL, data warehouse architecture design, and SAP data integration. The ideal candidate will be responsible for designing, building, and optimizing data pipelines and architectures to support our enterprise data strategy. The candidate will work closely with cross-functional teams to ingest, transform, and make data (from SAP and other systems) available in our Microsoft Azure environment, enabling robust analytics and business intelligence.

KEY ROLES & RESPONSIBILITIES: Spearhead the design, development, deployment, testing, and management of strategic data architecture, leveraging cutting-edge technology stacks in cloud, on-prem, and hybrid environments. Design and implement an end-to-end data architecture within Microsoft Fabric / SQL, including Azure Synapse Analytics (incl. data warehousing); this would also encompass a Data Mesh architecture. Develop and manage robust data pipelines to extract, load, and transform data from SAP systems (e.g., ECC, S/4HANA, BW). Perform data modeling and schema design for enterprise data warehouses in Microsoft Fabric. Ensure data quality, security, and compliance standards are met throughout the data lifecycle. Enforce data security measures, strategies, protocols, and technologies, ensuring adherence to security and compliance requirements. Collaborate with BI, analytics, and business teams to understand data requirements and deliver trusted datasets. Monitor and optimize performance of data processes and infrastructure. Document technical solutions and develop reusable frameworks and tools for data ingestion and transformation. Establish and maintain robust knowledge management structures, encompassing data architecture, data policies, platform usage policies, development rules, and more, ensuring adherence to best practices, regulatory compliance, and optimization across all data processes. Implement microservices, APIs, and event-driven architecture to enable agility and scalability. Create and maintain architectural documentation, diagrams, policies, standards, conventions, rules, and frameworks to enable effective knowledge sharing and handover. Monitor and optimize the performance, scalability, and reliability of the data architecture and pipelines. Track data consumption and usage patterns to ensure that infrastructure investment is effectively leveraged through automated, alert-driven tracking.

KEY COMPETENCIES: Microsoft Certified: Fabric Analytics Engineer Associate or an equivalent certification for MS SQL. Prior experience working in cloud environments (Azure preferred). Understanding of SAP data structures and SAP integration tools like SAP Data Services, SAP Landscape Transformation (SLT), or RFC/BAPI connectors. Experience with DevOps practices and version control (e.g., Git). Deep understanding of SAP architecture, data models, security principles, and platform best practices. Strong analytical skills with the ability to translate business needs into technical solutions. Experience with project coordination, vendor management, and Agile or hybrid project delivery methodologies. Excellent communication, stakeholder management, and documentation skills. Strong understanding of data warehouse architecture and dimensional modeling. Excellent problem-solving and communication skills.

QUALIFICATIONS / EXPERIENCE / SKILLS

Qualifications: Bachelor's degree in Computer Science, Information Systems, or a related field. Certifications such as SQL, Administrator, or Advanced Administrator are preferred. Expertise in data transformation using SQL, PySpark, and/or other ETL tools. Strong knowledge of data governance, security, and lineage in enterprise environments. Advanced knowledge of SQL, database procedures/packages, and dimensional modeling. Proficiency in Python and/or Data Analysis Expressions (DAX) (preferred, not mandatory). Familiarity with Power BI for downstream reporting (preferred, not mandatory).

Experience: 10 years of experience as a Data Engineer or in a similar role.

Skills: Hands-on experience with Microsoft SQL (MS-SQL) and Microsoft Fabric, including Synapse (data warehousing, notebooks, Spark). Experience integrating and extracting data from SAP systems, such as SAP ECC or S/4HANA, SAP BW, and SAP Core Data Services (CDS) Views or OData Services. Knowledge of data protection laws across countries (preferred, not mandatory).

Posted 4 days ago

Apply

5.0 - 10.0 years

40 - 50 Lacs

Bengaluru

Remote

INTERESTED CANDIDATES SHARE CV TO VAIJAYANTHI.M@PARAMINFO.COM

Exp: 5-10 Years | Notice: Max 30 Days | Location: Pan India (Remote - Work From Home) | Domain: Core Banking is a must

Must-Have Skills: Cloudera Data Platform (CDP) hands-on experience; strong programming in Python and PySpark; workflow orchestration with Apache Airflow; ETL development (batch and streaming pipelines); DevOps practices (CI/CD, version control, automation); data governance and quality (security, validation, alerting).

Nice-to-Have / Preferred: AI/ML and Generative AI exposure, including use case implementation/support; experience with ML workflows and model pipelines; familiarity with cloud-native data tools (Azure/AWS); collaboration in cross-functional Agile teams.

Job Description: We are seeking a highly skilled and motivated Data Engineer to join our dynamic technology team. The ideal candidate will have deep expertise in data engineering tools and platforms, particularly Apache Airflow, PySpark, and Python, with hands-on experience in Cloudera Data Platform (CDP). A strong understanding of DevOps practices and exposure to AI/ML and Generative AI use cases is highly desirable.

Key Responsibilities: Design, build, and maintain scalable data pipelines using Python, PySpark, and Airflow. Develop and optimize ETL workflows on Cloudera Data Platform (CDP). Implement data quality checks, monitoring, and alerting mechanisms. Ensure data security, governance, and compliance across all pipelines. Work closely with cross-functional teams to understand data requirements and deliver solutions. Troubleshoot and resolve issues in production data pipelines. Contribute to the architecture and design of the data platform. Collaborate with engineering teams and analysts to work on AI/ML and Gen AI use cases. Automate deployment and monitoring of data workflows using DevOps tools and practices. Stay updated with the latest trends in data engineering, AI/ML, and Gen AI technologies.
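For readers unfamiliar with the Airflow-plus-PySpark stack this role centres on, the sketch below shows a minimal Airflow DAG wiring extract, transform, and data-quality steps together. The DAG name, schedule, and task bodies are hypothetical stand-ins; on Cloudera Data Platform the transform step would typically submit a Spark job rather than print a message.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull a daily batch from a source system.
    print("extracting batch for", context["ds"])


def transform(**context):
    # Placeholder: in practice this might submit a PySpark job to the cluster.
    print("transforming batch for", context["ds"])


def validate(**context):
    # Placeholder data-quality check; raising here would fail the task and alert.
    print("running data-quality checks for", context["ds"])


with DAG(
    dag_id="example_daily_pipeline",       # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    tags=["example"],
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    validate_task = PythonOperator(task_id="validate", python_callable=validate)

    # Linear dependency chain: extract -> transform -> validate.
    extract_task >> transform_task >> validate_task
```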

Posted 4 days ago

Apply

5.0 - 10.0 years

20 - 25 Lacs

Bengaluru

Hybrid

Job title: Senior Software Engineer
Experience: 5-8 years
Primary skills: Python, Spark or PySpark, DWH ETL
Database: SparkSQL or PostgreSQL
Secondary skills: Databricks (Delta Lake, Delta tables, Unity Catalog)
Work Model: Hybrid (twice weekly)
Cab Facility: Yes
Work Timings: 10am to 7pm
Interview Process: 3 rounds (3rd round F2F mandatory)
Work Location: Karle Town Tech Park, Nagawara, Hebbal, Bengaluru 560045

About Business Unit: The Architecture Team plays a pivotal role in the end-to-end design, governance, and strategic direction of product development within Epsilon People Cloud (EPC). As a centre of technical excellence, the team ensures that every product feature is engineered to meet the highest standards of scalability, security, performance, and maintainability. Their responsibilities span architectural ownership of critical product features, driving techno-product leadership, enforcing architectural governance, and ensuring systems are built with scalability, security, and compliance in mind. They design multi-cloud and hybrid-cloud solutions that support seamless integration across diverse environments and contribute significantly to interoperability between EPC products and the broader enterprise ecosystem. The team fosters innovation and technical leadership while actively collaborating with key partners to align technology decisions with business goals. Through this, the Architecture Team ensures the delivery of future-ready, enterprise-grade, efficient and performant, secure and resilient platforms that form the backbone of Epsilon People Cloud.

Why we are looking for you: You have experience working as a Data Engineer with strong database fundamentals and an ETL background. You have experience working in a data warehouse environment and dealing with data volumes in terabytes and above. You have experience working with relational data systems, preferably PostgreSQL and SparkSQL. You have excellent designing and coding skills and can mentor a junior engineer in the team. You have excellent written and verbal communication skills. You are experienced and comfortable working with global clients. You work well with teams and are able to work with multiple collaborators, including clients, vendors, and delivery teams. You are proficient with bug tracking and test management toolsets to support development processes such as CI/CD.

What you will enjoy in this role: As part of the Epsilon Technology practice, the pace of the work matches the fast-evolving demands in the industry. You will get to work on the latest tools and technology and deal with data of petabyte scale. You will work on homegrown frameworks on Spark, Airflow, and more, with exposure to the digital marketing domain, where Epsilon is a market leader. You will understand and work closely with consumer data across different segments that will eventually provide insights into consumer behaviours and patterns to design digital ad strategies. As part of the dynamic team, you will have opportunities to innovate and put your recommendations forward, using existing standard methodologies and defining new ones as industry standards evolve. You will have the opportunity to work with Business, System, and Delivery to build a solid foundation in the digital marketing domain, in an open and transparent environment that values innovation and efficiency. Click here to view how Epsilon transforms marketing with 1 View, 1 Vision and 1 Voice.

What will you do?

Develop a deep understanding of the business context under which your team operates and present feature recommendations in an agile working environment. Lead, design, and code solutions on and off database to ensure application access and enable data-driven decision making for the company's multi-faceted ad serving operations. Work closely with Engineering resources across the globe to ensure enterprise data warehouse solutions and assets are actionable, accessible, and evolving in lockstep with the needs of the ever-changing business model. This role requires deep expertise in Spark and strong proficiency in ETL, SQL, and modern data engineering practices. Design, develop, and manage ETL/ELT pipelines in Databricks using PySpark/SparkSQL, integrating various data sources to support business operations. Lead in the areas of solution design, code development, quality assurance, data modelling, and business intelligence. Mentor junior engineers in the team. Stay abreast of developments in the data world in terms of governance, quality, and performance optimization. Hold effective client meetings, understand deliverables, and drive successful outcomes.

Qualifications: Bachelor's Degree in Computer Science or an equivalent degree is required. 5-8 years of data engineering experience with expertise using Apache Spark and databases (preferably Databricks) in marketing technologies and data management, and technical understanding in these areas. Monitor and tune Databricks workloads to ensure high performance and scalability, adapting to business needs as required. Solid experience in basic and advanced SQL writing and tuning. Experience with Python. Solid understanding of CI/CD practices, with experience in Git for version control and integration for Spark data projects. Good understanding of disaster recovery and business continuity solutions. Experience with scheduling applications with complex interdependencies, preferably Airflow. Good experience working with geographically and culturally diverse teams. Understanding of data management concepts in both traditional relational databases and big data lakehouse solutions such as Apache Hive, AWS Glue, or Databricks. Excellent written and verbal communication skills. Ability to handle complex products. Good communication and problem-solving skills, with the ability to manage multiple priorities. Ability to diagnose and solve problems quickly. Diligent; able to multi-task, prioritize, and quickly change priorities. Good time management. Good to have: knowledge of cloud platforms (cloud security) and familiarity with Terraform or other infrastructure-as-code tools.

About Epsilon: Epsilon is a global data, technology and services company that powers the marketing and advertising ecosystem. For decades, we have provided marketers from the world's leading brands the data, technology and services they need to engage consumers with 1 View, 1 Vision and 1 Voice. 1 View of their universe of potential buyers. 1 Vision for engaging each individual. And 1 Voice to harmonize engagement across paid, owned and earned channels. Epsilon's comprehensive portfolio of capabilities across our suite of digital media, messaging and loyalty solutions bridge the divide between marketing and advertising technology. We process 400+ billion consumer actions each day using advanced AI and hold many patents of proprietary technology, including real-time modeling languages and consumer privacy advancements.
Thanks to the work of every employee, Epsilon has been consistently recognized as industry-leading by Forrester, Adweek and the MRC. Epsilon is a global company with more than 9,000 employees around the world.

Posted 6 days ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Chennai

Work from Office

  • Experience in cloud-based systems (GCP, BigQuery)
  • Strong SQL programming skills
  • Expertise in database programming and performance tuning techniques
  • Knowledge of data warehouse architectures, ETL, and reporting/analytic tools

Posted 6 days ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

You are invited to join our team as a Mid-Level Data Engineer Technical Consultant with 4+ years of experience. As part of our diverse and inclusive organization, you will be based in Bangalore, KA, working full-time in a permanent position during the general shift from Monday to Friday.

In this role, you will be expected to possess strong written and oral communication skills, particularly in email correspondence. Your experience in working with application development teams will be invaluable, along with your ability to analyze and solve problems effectively. Proficiency in Microsoft tools such as Outlook, Excel, and Word is essential for this position.

As a Data Engineer Technical Consultant, you must have at least 4 years of hands-on development experience. Your expertise should include working with Snowflake and PySpark, writing SQL queries, utilizing Airflow, and developing in Python. Experience with DBT and integration programs will be advantageous, as will familiarity with Excel for data analysis and Unix scripting. Your responsibilities require a good understanding of data warehousing and practical work experience in this field. You will be accountable for various tasks including understanding requirements, coding, unit testing, integration testing, performance testing, UAT, and Hypercare support. Collaboration with cross-functional teams across different geographies will be a key aspect of this role.

If you are action-oriented, independent, and possess the required technical skills, we encourage you to submit your resume to pallavi@she-jobs.com and explore this exciting opportunity further.

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

As a techno-functional professional with over 5 years of experience in data warehousing and BI, you should have a strong grasp of the fundamental concepts in this domain. Your role will involve designing BI solutions from scratch and implementing Agile Scrum practices such as story slicing, grooming, daily scrum, iteration planning, retrospectives, test-driven development, and model storming. Additionally, you must possess expertise in data governance and management, along with a track record of proposing and implementing BI solutions successfully.

Your technical skills should include proficiency in SQL for data analysis and querying, as well as experience with Postgres DB. A functional background in finance/banking, particularly in asset finance, equipment finance, or leasing, is mandatory. Excellent communication skills, both written and verbal, are essential for interacting with a diverse set of stakeholders. You should also be adept at raising alerts and risks when necessary and collaborating effectively with team members across different locations.

In terms of responsibilities, you will be required to elicit business needs and requirements, develop functional specifications, and ensure clarity by engaging with stakeholders. Your role will also involve gathering and analyzing information from various sources to determine system changes needed for new projects and application enhancements. Providing functional analysis, specification documentation, and validation of business requirements will be critical aspects of your work.

As part of solutioning, you will be responsible for designing and developing business intelligence and data warehousing solutions. This includes creating data transformations and reports/visualizations based on business needs. Your role will also involve proposing solutions and enhancements to improve the quality of deliverables and overall solutions.

Posted 6 days ago

Apply

4.0 - 9.0 years

10 - 20 Lacs

Coimbatore

Work from Office

Position Name: Data Engineer
Location: Coimbatore (Hybrid, 3 days per week)
Work Shift Timing: 1.30 pm to 10.30 pm (IST)
Mandatory Skills: Scala, Spark, Python, Databricks
Good to have: Java & Hadoop

The Role: Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights. Constructing infrastructure for efficient ETL processes from various sources and storage systems. Leading the implementation of algorithms and prototypes to transform raw data into useful information. Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations. Creating innovative data validation methods and data analysis tools. Ensuring compliance with data governance and security policies. Interpreting data trends and patterns to establish operational alerts. Developing analytical tools, programs, and reporting mechanisms. Conducting complex data analysis and presenting results effectively. Preparing data for prescriptive and predictive modeling. Continuously exploring opportunities to enhance data quality and reliability. Applying strong programming and problem-solving skills to develop scalable solutions.

Requirements: Experience in Big Data technologies (Hadoop, Spark, Nifi, Impala). Hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines. High proficiency in Scala/Java and Spark for applied large-scale data processing. Expertise with big data technologies, including Spark, Data Lake, and Hive. Solid understanding of batch and streaming data processing techniques. Proficient knowledge of the data lifecycle management process, including data collection, access, use, storage, transfer, and deletion. Expert-level ability to write complex, optimized SQL queries across extensive data volumes. Experience with HDFS, Nifi, and Kafka. Experience with Apache Ozone, Delta tables, Databricks, Axon (Kafka), Spring Batch, and Oracle DB. Familiarity with Agile methodologies. Obsession for service observability, instrumentation, monitoring, and alerting. Knowledge or experience in architectural best practices for building data lakes.

Interested candidates, share your resume at Neesha1@damcogroup.com along with the details below:
Total Exp:
Relevant Exp in Scala & Spark:
Current CTC:
Expected CTC:
Notice period:
Current Location:

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Pune

Remote

Role & responsibilities: At least 5 years of experience in data engineering with a strong background in Azure Databricks and Scala/Python. Databricks with knowledge of PySpark. Database: Oracle or any other database. Programming: Python, with awareness of Streamlit.

Posted 1 week ago

Apply

6.0 - 11.0 years

25 - 35 Lacs

Gurugram, Chennai, Bengaluru

Hybrid

Greetings from BCforward INDIA TECHNOLOGIES PRIVATE LIMITED.

Contract-to-Hire (C2H) Role. Location: Bengaluru, IN; Gurgaon, IN; Chennai, IN. Payroll: BCforward. Work Mode: Hybrid.

JD: GCP; PySpark; ETL - Big Data / Data Warehousing; SQL; Python. Experienced data engineer with hands-on experience on GCP offerings. Experienced in BigQuery, BigTable, and PySpark. Worked on prior data engineering projects leveraging GCP product offerings. Strong SQL background. Prior Big Data experience.

Please share your updated resume, PAN card soft copy, passport-size photo, and UAN history. Interested applicants can share an updated resume to g.sreekanth@bcforward.com. Note: Looking for immediate to 30-day joiners at most. All the best!

Posted 1 week ago

Apply

4.0 - 9.0 years

11 - 17 Lacs

Bengaluru

Work from Office

Greetings from TSIT Digital!! This is regarding an excellent opportunity with us: if you have that unique and unlimited passion for building world-class enterprise software products that turn into actionable intelligence, then we have the right opportunity for you and your career. This is an opportunity for permanent employment with TSIT Digital.

What we are looking for: Data Engineer. Experience: 4+ years (relevant experience 2-5 years). Location: Bangalore. Notice period: Immediate to 15 days.

Job Description: Work location - Manyata Tech Park, Bengaluru, Karnataka, India. Work mode - hybrid model. Client - Lowes. Mandatory skills: Data Engineer with Scala/Python, SQL, and scripting; knowledge of BigQuery, PySpark, Airflow, serverless cloud-native services, and Kafka streaming.

If you are interested, please share your updated CV: kousalya.v@tsit.co.in

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

hyderabad, telangana

On-site

As a GCP Data Engineer with Tableau expertise, you will be responsible for designing, implementing, and maintaining data pipelines on Google Cloud Platform (GCP) to support various data analytics initiatives. Your role will involve working closely with stakeholders to understand their data requirements, developing scalable solutions to extract, transform, and load data from different sources into GCP, and ensuring the integrity and quality of the data.

In this role, you will leverage your expertise in GCP services such as BigQuery, Dataflow, Pub/Sub, and Data Studio to build and optimize data pipelines for efficient data processing and analysis. You will also be required to create visualizations and dashboards using Tableau to present insights derived from the data to business users.

The ideal candidate for this position should have a strong background in data engineering, with hands-on experience in building and optimizing data pipelines on GCP. Proficiency in SQL, Python, or Java for data processing and transformation is essential. Additionally, experience with Tableau for creating interactive visualizations and dashboards is highly preferred.

If you are a data engineering professional with expertise in GCP and Tableau and are passionate about leveraging data to drive business decisions, this role offers an exciting opportunity to contribute to the success of data-driven initiatives within the organization.
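As a rough illustration of the GCP side of this role, the snippet below uses the google-cloud-bigquery client to run a query and pull the result into a DataFrame that a Tableau extract could then consume. The project, dataset, and table names are hypothetical placeholders.

```python
from google.cloud import bigquery

# Hypothetical project; authentication is assumed to come from the environment.
client = bigquery.Client(project="example-project")

query = """
    SELECT event_date, COUNT(*) AS events
    FROM `example-project.analytics.events`
    WHERE event_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
    GROUP BY event_date
    ORDER BY event_date
"""

# Run the query and pull the result into a pandas DataFrame,
# which could then back an extract consumed by Tableau.
df = client.query(query).to_dataframe()
print(df.head())
```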

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

haryana

On-site

As a Data Analyst with expertise in Market Research and Web Scraping, you will be responsible for analyzing large datasets to uncover trends and insights related to market dynamics and competitor performance. Your role will involve conducting thorough market research to track competitor activities, identify emerging trends, and understand customer preferences. Additionally, you will design and implement data scraping solutions to extract competitor data from various online sources while ensuring compliance with legal standards and website terms of service.

Your key responsibilities will include developing dashboards, reports, and visualizations to communicate key insights effectively to stakeholders. You will collaborate with cross-functional teams to align data-driven insights with company objectives and support strategic decision-making in product development and marketing strategies. Furthermore, you will be involved in database management, data cleaning, and maintaining organized databases with accurate and consistent information for easy access and retrieval.

To excel in this role, you should have a Bachelor's degree in Data Science, Computer Science, Statistics, Business Analytics, or a related field. Advanced degrees or certifications in data analytics or market research will be advantageous. Proficiency in SQL, Python, or R for data analysis, along with experience in data visualization tools like Tableau, Power BI, or D3.js, is essential. Strong analytical skills, the ability to interpret data effectively, and knowledge of statistical analysis techniques are key requirements for this position. Experience with data scraping tools such as BeautifulSoup, Scrapy, or Selenium, as well as familiarity with web analytics and SEO tools like Google Analytics or SEMrush, will be beneficial.

Preferred skills include experience with e-commerce data analysis, knowledge of retail or consumer behavior analytics, and an understanding of machine learning techniques for data classification and prediction. Ethical data scraping practices and adherence to data privacy laws are essential considerations for this role.

If you meet these qualifications and are excited about the opportunity to work in a dynamic environment where your analytical skills and market research expertise will be valued, we encourage you to apply by sending your updated resume along with your current salary details to jobs@glansolutions.com. For any inquiries, feel free to contact Satish at 8802749743 or visit our website at www.glansolutions.com to explore more job opportunities. Join us at Glan Solutions and leverage your data analysis skills to drive strategic decisions and contribute to our success in the fashion/garment/apparel industry!

Note: This job was posted on 14th November 2024.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

As an Azure Data Engineer, you will be responsible for designing, implementing, and maintaining data pipelines that enable data analytics and machine learning solutions on the Azure platform. You will work closely with data scientists, analysts, and other stakeholders to understand their data requirements and develop efficient data processing solutions.

Your primary focus will be on building and optimizing data pipelines using Azure data services such as Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure HDInsight. You will also be responsible for integrating data from various sources, ensuring data quality and consistency, and implementing data security and compliance measures.

In this role, you will leverage your expertise in SQL, Python, and other programming languages to transform and analyze large volumes of data. You will also collaborate with cross-functional teams to troubleshoot data issues, optimize performance, and implement best practices for data management and governance.

The ideal candidate for this position has a strong background in data engineering, experience working with cloud-based data technologies, and a passion for driving insights from data. Strong communication skills, problem-solving abilities, and the ability to work in a fast-paced environment are also essential for success in this role.

Posted 1 week ago

Apply

1.0 - 4.0 years

3 - 7 Lacs

Bengaluru

Work from Office

GLOINNT is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey. You will design, develop, and maintain data infrastructure, databases, and data pipelines; develop and implement ETL processes to extract, transform, and load data from various sources; ensure data accuracy, quality, and accessibility, and resolve data-related issues; collaborate with data analysts, data scientists, and other stakeholders to understand data needs and requirements; develop and maintain data models and data dictionaries; design and implement data warehousing solutions to enable efficient and effective data analysis and reporting; and implement and manage data security and access controls to protect data privacy and confidentiality. A strong understanding of data architecture, data modeling, ETL processes, and data warehousing, along with excellent communication and collaboration skills, is required.

Posted 1 week ago

Apply

4.0 - 9.0 years

0 - 0 Lacs

Hyderabad, Chennai

Hybrid

Job Description: Design, develop, and maintain data pipelines and ETL processes using AWS and Snowflake. Implement data transformation workflows using DBT (Data Build Tool). Write efficient, reusable, and reliable code in Python. Optimize and tune data solutions for performance and scalability. Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions. Ensure data quality and integrity through rigorous testing and validation. Stay updated with the latest industry trends and technologies in data engineering.

Requirements: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Proven experience as a Data Engineer or in a similar role. Strong proficiency in AWS and Snowflake. Expertise in DBT and Python programming. Experience with data modeling, ETL processes, and data warehousing. Familiarity with cloud platforms and services. Excellent problem-solving skills and attention to detail. Strong communication and teamwork abilities.
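To make the Snowflake-plus-Python requirement concrete, here is a minimal sketch of a post-load validation step using the snowflake-connector-python package. The connection parameters and table name are hypothetical, and real credentials would come from a secrets manager or environment variables rather than source code.

```python
import snowflake.connector

# Hypothetical connection parameters, for illustration only.
conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="example_password",
    warehouse="EXAMPLE_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # A simple data-quality check of the kind a pipeline might run after a load.
    cur.execute("SELECT COUNT(*) FROM raw_orders WHERE order_id IS NULL")
    null_ids = cur.fetchone()[0]
    if null_ids:
        raise ValueError(f"{null_ids} rows with NULL order_id - failing the load")
finally:
    conn.close()
```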

Posted 1 week ago

Apply

5.0 - 10.0 years

18 - 33 Lacs

Noida

Work from Office

Senior Data Engineer. Experience: 5+ years. Location: Noida, 5 days work from office. Shift: 1pm to 10pm.

Job Summary: We are seeking a highly skilled Senior Data Engineer / BI Developer with deep expertise in SQL Server database development and performance tuning, along with experience in ETL pipelines (SSIS), cloud-based data engineering (Azure Databricks), and data visualization (Power BI/Sigma). This role is critical in designing, optimizing, and maintaining enterprise-grade data solutions that power analytics and business intelligence across the organization.

Key Responsibilities: Design, develop, and optimize SQL Server databases in Azure Cloud, including schema design, indexing strategies, and stored procedures. Perform advanced SQL performance tuning, query optimization, and troubleshooting of slow-running queries. Develop and maintain SSIS packages for complex ETL workflows, including error handling and logging. Build scalable data pipelines in Azure Databricks. Create and maintain Power BI and Sigma dashboards, ensuring data accuracy, usability, and performance. Implement and enforce data governance, security, and compliance best practices. Collaborate with cross-functional teams including data analysts, data scientists, and business stakeholders. Participate in code reviews, data modeling, and architecture planning for new and existing systems. Experience with backup and recovery strategies, high availability, and disaster recovery.

Required Skills & Experience: 5 to 8 years of hands-on experience with Microsoft SQL Server (2016/2022 or later). Strong expertise in T-SQL, stored procedures, functions, views, indexing, and query optimization. Proven experience with SSIS for ETL development and deployment. Experience with Azure Databricks, Spark, and Delta Lake for big data processing. Proficiency in Power BI and/or Sigma for data visualization and reporting. Solid understanding of data warehousing, star/snowflake schemas, and dimensional modeling. Familiarity with CI/CD pipelines, Git, and DevOps for data. Strong communication and documentation skills.

Preferred Qualifications: Experience with Azure Data Factory, Synapse Analytics, or Azure SQL Database. Knowledge of NoSQL databases (e.g., MongoDB, Cosmos DB) is a plus. Familiarity with data lake architecture and cloud storage (e.g., ADLS Gen2). Experience in agile environments and working with JIRA or Azure DevOps.

Posted 1 week ago

Apply

2.0 - 6.0 years

12 - 24 Lacs

Jaipur

Work from Office

Responsibilities:
  • Develop data pipelines using PySpark and SQL.
  • Collaborate with cross-functional teams on ML projects.
  • Optimize database performance through data modeling and visualization.

Posted 1 week ago

Apply

4.0 - 8.0 years

25 - 30 Lacs

Pune

Hybrid

Hi, Greetings!!!

Role: Data Engineer. Experience: 4+ years. Location: Pune (Hybrid, 3 days in office per week). Main skills: Data Engineer with Java, ETL, Apache, SQL.

Key Responsibilities: Design, implement, and optimize ETL/ELT pipelines using DBT for data modeling and transformation. Develop backend components and data processing logic using Java. Build and maintain DAGs in Apache Airflow for orchestration and automation of data workflows. Ensure the reliability, scalability, and efficiency of data pipelines for ingestion, transformation, and storage. Work with cross-functional teams to understand data needs and deliver high-quality solutions. Troubleshoot and resolve data pipeline issues in production environments. Apply data quality and governance best practices, including validation, logging, and monitoring. Collaborate on CI/CD deployment pipelines for data infrastructure.

Required Skills & Qualifications: 4+ years of hands-on experience in data engineering roles. Strong experience with DBT for modular, testable, and version-controlled data transformation. Proficient in Java, especially for building custom data connectors or processing frameworks. Deep understanding of Apache Airflow and the ability to design and manage complex DAGs. Solid SQL skills and familiarity with data warehouse platforms (e.g., Snowflake, Redshift, BigQuery). Familiarity with version control tools (Git), CI/CD pipelines, and Agile methodologies. Exposure to cloud environments like AWS, GCP, or Azure.

Posted 1 week ago

Apply

Exploring Data Engineer Jobs in India

The data engineer job market in India is rapidly growing as organizations across various industries are increasingly relying on data-driven insights to make informed decisions. Data engineers play a crucial role in designing, building, and maintaining data pipelines to ensure that data is accessible, reliable, and secure for analysis.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi/NCR
  4. Hyderabad
  5. Pune

Average Salary Range

The average salary range for data engineer professionals in India varies based on experience and location. Entry-level data engineers can expect to earn anywhere between INR 4-6 lakhs per annum, while experienced professionals with 5+ years of experience can earn upwards of INR 15 lakhs per annum.

Career Path

The typical career progression for a data engineer in India may include roles such as Junior Data Engineer, Data Engineer, Senior Data Engineer, Lead Data Engineer, and eventually Chief Data Engineer. As professionals gain more experience and expertise in handling complex data infrastructure, they may move into management roles such as Data Engineering Manager.

Related Skills

In addition to strong technical skills in data engineering, professionals in this field are often expected to have knowledge of programming languages such as Python, SQL, and Java. Familiarity with cloud platforms like AWS, GCP, or Azure, as well as proficiency in data warehousing technologies, is also beneficial for data engineers.

Interview Questions

  • What is the difference between ETL and ELT? (medium)
  • Explain the CAP theorem and its implications in distributed systems. (advanced)
  • How would you optimize a data pipeline for performance and scalability? (medium)
  • What is your experience with data modeling and schema design? (basic)
  • Describe a time when you had to troubleshoot a data pipeline failure. (medium)
  • How do you ensure data quality and consistency in a data pipeline? (basic)
  • Can you explain the concept of data partitioning in distributed databases? (medium)
  • What are the benefits of using columnar storage for analytical workloads? (medium)
  • How would you handle data security and privacy concerns in a data engineering project? (medium)
  • What tools and technologies have you worked with for data processing and transformation? (basic)
  • Explain the difference between batch processing and stream processing. (basic; see the sketch after this list)
  • How do you stay updated with the latest trends in data engineering and technology? (basic)
  • Describe a challenging data engineering project you worked on and how you overcame obstacles. (medium)
  • What is your experience with data orchestration tools like Apache Airflow or Apache NiFi? (medium)
  • How would you design a data pipeline for real-time analytics? (advanced)
  • What is your approach to optimizing data storage and retrieval for cost efficiency? (medium)
  • Can you explain the concept of data lineage and its importance in data governance? (medium)
  • How do you handle schema evolution in a data warehouse environment? (medium)
  • Describe a time when you had to collaborate with cross-functional teams on a data project. (basic)
  • What are some common challenges you have faced in data engineering projects, and how did you address them? (medium)
  • How would you troubleshoot a slow-performing SQL query in a data warehouse? (medium)
  • What are your thoughts on the future of data engineering and its impact on business operations? (basic)
  • Explain the process of data ingestion and its role in data pipelines. (basic)
  • How do you ensure data integrity and consistency across distributed systems? (medium)
  • Describe a data migration project you worked on and the challenges you encountered. (medium)
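To make the batch-versus-stream question above concrete, the sketch below contrasts the two modes in PySpark: a bounded batch read versus an unbounded Kafka stream with a running aggregation. The input path, broker address, and topic are hypothetical, and the streaming read assumes the Spark Kafka connector package is available on the cluster.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("batch-vs-stream-example").getOrCreate()

# Batch: read a bounded dataset once, compute a result, and finish.
batch_df = spark.read.json("/data/events/2024-01-01/")        # hypothetical path
batch_counts = batch_df.groupBy("event_type").count()
batch_counts.show()

# Streaming: read an unbounded source and keep the aggregation running.
stream_df = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")          # hypothetical broker
    .option("subscribe", "events")                              # hypothetical topic
    .load()
)
stream_counts = (
    stream_df
    .select(F.col("value").cast("string").alias("payload"))
    .groupBy("payload")
    .count()
)
query = (
    stream_counts.writeStream
    .outputMode("complete")     # emit the full updated aggregate each trigger
    .format("console")
    .start()
)
query.awaitTermination()
```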

Closing Remark

As you explore data engineer jobs in India, remember to showcase your technical skills, problem-solving abilities, and experience in handling large-scale data projects during interviews. Stay updated with the latest trends in data engineering and continuously upskill to stand out in this competitive job market. Prepare thoroughly, apply confidently, and seize the opportunities that come your way!
