
322 Data Ingestion Jobs - Page 11

Set up a Job Alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

4.0 - 9.0 years

5 - 9 Lacs

Bengaluru

Work from Office

We are seeking a highly skilled Snowflake Developer to join our team in Bangalore. The ideal candidate will have extensive experience in designing, implementing, and managing Snowflake-based data solutions. This role involves developing data architectures and ensuring the effective use of Snowflake to drive business insights and innovation.

Key Responsibilities:
- Design and implement scalable, efficient, and secure Snowflake solutions to meet business requirements.
- Develop data architecture frameworks, standards, and principles, including modeling, metadata, security, and reference data.
- Implement Snowflake-based data warehouses, data lakes, and data integration solutions.
- Manage data ingestion, transformation, and loading processes to ensure data quality and performance.
- Collaborate with business stakeholders and IT teams to develop data strategies and ensure alignment with business goals.
- Drive continuous improvement by leveraging the latest Snowflake features and industry trends.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- 4+ years of experience in data architecture, data engineering, or a related field.
- Extensive experience with Snowflake, including designing and implementing Snowflake-based solutions.
- Strong SQL skills.
- Proven track record of contributing to data projects and working in complex environments.
- Familiarity with cloud platforms (e.g., AWS, GCP) and their data services.
- Snowflake certification (e.g., SnowPro Core, SnowPro Advanced) is a plus.
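For context on the day-to-day ingestion work such a role involves, here is a minimal, hypothetical sketch of loading staged files into a Snowflake table from Python using the snowflake-connector-python package (the account, credentials, stage, and table names are placeholders, not details from the posting):

```python
# Hypothetical sketch: bulk-load staged CSV files into a Snowflake table.
# Connection details, stage, and table names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # placeholder Snowflake account identifier
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # COPY INTO performs the bulk ingestion from an existing internal stage.
    cur.execute("""
        COPY INTO STAGING.SALES_RAW
        FROM @sales_stage
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
    print(cur.fetchall())  # per-file load results returned by COPY INTO
finally:
    conn.close()
```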

Posted 1 month ago

Apply

5.0 - 8.0 years

2 - 5 Lacs

Chennai

Work from Office

Job Information:
- Job Opening ID: ZR_2168_JOB
- Date Opened: 10/04/2024
- Industry: Technology
- Work Experience: 5-8 years
- Job Title: AWS Data Engineer
- City: Chennai
- Province: Tamil Nadu
- Country: India
- Postal Code: 600002
- Number of Positions: 4

Mandatory Skills: AWS, Python, SQL, Spark, Airflow, Snowflake

Responsibilities:
- Create and manage cloud resources in AWS.
- Ingest data from different data sources that expose data using different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems.
- Implement data ingestion and processing with the help of Big Data technologies.
- Process and transform data using technologies such as Spark and cloud services.
- Understand your part of the business logic and implement it using the language supported by the base data platform.
- Develop automated data quality checks to make sure the right data enters the platform, and verify the results of the calculations.
- Develop an infrastructure to collect, transform, combine, and publish/distribute customer data.
- Define process improvement opportunities to optimize data collection, insights, and displays.
- Ensure data and results are accessible, scalable, efficient, accurate, complete, and flexible.
- Identify and interpret trends and patterns from complex data sets.
- Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders.
- Be a key participant in regular Scrum ceremonies with the agile teams.
- Be proficient at developing queries, writing reports, and presenting findings.
- Mentor junior members and bring best industry practices.
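Since Airflow is listed among the mandatory skills, a minimal, hypothetical sketch of a daily ingestion DAG may help set expectations (the DAG id, schedule, and task logic are illustrative placeholders, not taken from the posting):

```python
# Hypothetical Airflow 2.x DAG sketch for a daily ingestion job.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_source_data(**context):
    # Placeholder: pull data from a source system and land it in staging.
    print("ingesting data for", context["ds"])


with DAG(
    dag_id="daily_source_ingestion",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(
        task_id="ingest_source_data",
        python_callable=ingest_source_data,
    )
```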

Posted 1 month ago

Apply

5.0 - 8.0 years

2 - 6 Lacs

Mumbai

Work from Office

Job Information:
- Job Opening ID: ZR_1963_JOB
- Date Opened: 17/05/2023
- Industry: Technology
- Work Experience: 5-8 years
- Job Title: Neo4j GraphDB Developer
- City: Mumbai
- Province: Maharashtra
- Country: India
- Postal Code: 400001
- Number of Positions: 5

A Graph Data Engineer is required for a complex supply chain project.

Key Required Skills:
- Graph data modelling: experience with graph data models (LPG, RDF) and graph language (Cypher); exposure to various graph data modelling techniques.
- Experience with Neo4j Aura and optimizing complex queries.
- Experience with GCP stacks such as BigQuery, GCS, and Dataproc.
- Experience in PySpark and SparkSQL is desirable.
- Experience in exposing graph data to visualisation tools such as NeoDash, Tableau, and Power BI.

The Expertise You Have:
- Bachelor's or Master's degree in a technology-related field (e.g., Engineering, Computer Science).
- Demonstrable experience in implementing data solutions in the graph database space.
- Hands-on experience with graph databases (Neo4j preferred, or any other).
- Experience tuning graph databases.
- Understanding of graph data model paradigms (LPG, RDF) and graph languages; hands-on experience with Cypher is required.
- Solid understanding of graph data modelling, graph schema development, and graph data design.
- Relational database experience; hands-on SQL experience is required.

Desirable (Optional) Skills:
- Data ingestion technologies (ETL/ELT), messaging/streaming technologies (GCP Data Fusion, Kinesis/Kafka), API and in-memory technologies.
- Understanding of developing highly scalable distributed systems using open-source technologies.
- Experience with supply chain data is desirable but not essential.

Location: Pune, Mumbai, Chennai, Bangalore, Hyderabad
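To give a flavour of the LPG/Cypher work described above, here is a minimal sketch using the official neo4j Python driver (the URI, credentials, and the supplier/part model are illustrative assumptions, not details from the posting):

```python
# Hypothetical sketch: create and query a tiny supplier graph with Cypher.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "***"))

with driver.session() as session:
    # Model a supplier shipping a part as two nodes and a relationship (LPG style).
    session.run(
        "MERGE (s:Supplier {name: $supplier}) "
        "MERGE (p:Part {sku: $sku}) "
        "MERGE (s)-[:SUPPLIES]->(p)",
        supplier="Acme Metals", sku="P-1001",
    )
    # Query which parts a supplier provides.
    result = session.run(
        "MATCH (s:Supplier {name: $supplier})-[:SUPPLIES]->(p:Part) RETURN p.sku AS sku",
        supplier="Acme Metals",
    )
    print([record["sku"] for record in result])

driver.close()
```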

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Hyderabad

Hybrid

Urgent requirement for Grafana. Employment: C2H. Notice Period: Immediate.

We are seeking a skilled Database Specialist with strong expertise in time-series databases, specifically Loki for logs, and InfluxDB and Splunk for metrics. The ideal candidate will have a solid background in query languages, Grafana, Alertmanager, and Prometheus. This role involves managing and optimizing time-series databases, ensuring efficient data storage, retrieval, and visualization.

Key Responsibilities:
- Design, implement, and maintain time-series databases using Loki, InfluxDB, and Splunk to store and manage high-velocity time-series data.
- Develop efficient data ingestion pipelines for time-series data from various sources (e.g., IoT devices, application logs, metrics).
- Optimize database performance for high write and read throughput, ensuring low latency and high availability.
- Implement and manage retention policies, downsampling, and data compression strategies to optimize storage and query performance.
- Collaborate with DevOps and infrastructure teams to deploy and scale time-series databases in cloud or on-premise environments.
- Build and maintain dashboards and visualization tools (e.g., Grafana) for monitoring and analyzing time-series data.
- Troubleshoot and resolve issues related to data ingestion, storage, and query performance.
- Work with development teams to integrate time-series databases into applications and services.
- Ensure data security, backup, and disaster recovery mechanisms are in place for time-series databases.
- Stay updated with the latest advancements in time-series database technologies and recommend improvements to existing systems.

Key Skills:
- Strong expertise in time-series databases, with Loki (for logs), InfluxDB, and Splunk (for metrics).
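As a small illustration of the metrics side of this kind of role, here is a hypothetical write-and-query round trip with the influxdb-client Python package for InfluxDB 2.x (the URL, token, org, bucket, and measurement names are placeholders):

```python
# Hypothetical sketch: write one metric point to InfluxDB 2.x and query it back.
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086", token="***", org="my-org")

# Write a single CPU usage point tagged by host.
write_api = client.write_api(write_options=SYNCHRONOUS)
point = Point("cpu_usage").tag("host", "server01").field("percent", 64.2)
write_api.write(bucket="metrics", record=point)

# Flux query for the last hour of cpu_usage measurements.
query_api = client.query_api()
tables = query_api.query(
    'from(bucket: "metrics") |> range(start: -1h) '
    '|> filter(fn: (r) => r._measurement == "cpu_usage")'
)
for table in tables:
    for record in table.records:
        print(record.get_time(), record.get_value())

client.close()
```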

Posted 1 month ago

Apply

8.0 - 13.0 years

30 - 45 Lacs

Bengaluru

Hybrid

Job Title: Enterprise Data Architect | Immediate Joiner
Experience: 8-15 Years
Location: Bengaluru (Onsite/Hybrid)
Joining Time: Immediate Joiners Only (0-15 Days)

Job Description
We are looking for an experienced Enterprise Data Architect to join our dynamic team in Bengaluru. This is an exciting opportunity to shape modern data architecture across the finance and colleague (HR) domains using the latest technologies and design patterns.

Key Responsibilities
- Design and implement conceptual and logical data models for the finance and colleague domains.
- Define complex as-is and to-be data architectures, including transition states.
- Develop and maintain data standards, principles, and architecture artifacts.
- Build scalable solutions using data lakes, data warehouses, and data governance platforms.
- Ensure data lineage, quality, and consistency across platforms.
- Translate business requirements into technical solutions for data acquisition, storage, transformation, and governance.
- Collaborate with cross-functional teams on data solution design and delivery.

Required Skills
- Strong communication and stakeholder engagement.
- Hands-on experience with Kimball dimensional modeling and/or Snowflake modeling.
- Expertise in modern cloud data platforms and architecture (AWS, Azure, or GCP).
- Proficient in building solutions for web, mobile, and tablet platforms.
- Background in Finance and/or Colleague Technology (HR systems) is a strong plus.

Preferred Qualifications
- Bachelor's/Master's degree in Computer Science, Engineering, or a related field.
- 8-15 years of experience in data architecture and solution design.

Important Notes
- Immediate joiners only (notice period of at most 15 days).
- Do not apply if you've recently applied or are currently in the Xebia interview process.
- Location: Bengaluru; candidates must be based there or open to relocating immediately.

To Apply
Send your updated resume to vijay.s@xebia.com with the following details: Full Name, Total Experience, Current CTC, Expected CTC, Current Location, Preferred Location, Notice Period / Last Working Day (if serving notice), Primary Skill Set, LinkedIn URL.

Apply now and be part of our exciting transformation journey at Xebia!

Posted 2 months ago

Apply

5.0 - 9.0 years

0 - 3 Lacs

Hyderabad, Pune, Chennai

Work from Office

Position: Azure Data Engineer
Locations: Bangalore, Pune, Hyderabad, Chennai & Coimbatore
Key Skills: Azure Databricks, Azure Data Factory, Hadoop
Relevant Experience: ADF, ADLF, Databricks - 4 years; Hadoop - 3 or 3.5 years
Total Experience: 5 years

Must-have skills:
• Cloud certified in one of these categories: Azure Data Engineer, or Azure Data Factory and Azure Databricks
• Spark (PySpark or Scala), SQL, data ingestion, curation
• Semantic modelling / optimization of the data model to work within Rahona
• Experience in Azure ingestion from on-prem sources, e.g. mainframe, SQL Server, Oracle
• Experience in Sqoop / Hadoop
• Microsoft Excel (for metadata files with requirements for ingestion)
• Any other certificate in Azure/AWS/GCP and hands-on data engineering experience in the cloud
• Strong programming skills with at least one of Python, Scala, or Java
• Strong SQL skills (T-SQL or PL/SQL)
• Data file movement via mailbox
• Source-code versioning/promotion tools, e.g. Git/Jenkins
• Orchestration tools, e.g. Autosys, Oozie
• Source-code versioning with Git

Nice-to-have skills:
• Experience working with mainframe files
• Experience in an Agile environment with JIRA/Confluence tools
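For illustration of the ingestion and curation work listed above, here is a minimal PySpark sketch as it might run in an Azure Databricks notebook (the `spark` session is assumed to be provided by the Databricks runtime; the ADLS account, containers, paths, and column names are placeholders, not from the posting):

```python
# Hypothetical Databricks/PySpark sketch: ingest raw CSV from ADLS Gen2 and
# curate it into a Delta table. Paths and column names are placeholders.
from pyspark.sql import functions as F

raw_path = "abfss://raw@mystorageaccount.dfs.core.windows.net/sales/2024/"

df = (
    spark.read                       # `spark` is provided by the Databricks notebook
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(raw_path)
)

curated = (
    df.withColumn("ingest_date", F.current_date())
      .dropDuplicates(["order_id"])           # simple curation/quality step
      .filter(F.col("amount").isNotNull())
)

curated.write.format("delta").mode("overwrite").save(
    "abfss://curated@mystorageaccount.dfs.core.windows.net/sales/"
)
```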

Posted 2 months ago

Apply

5.0 - 9.0 years

8 - 14 Lacs

Kolkata

Work from Office

Key Responsibilities:
- Splunk ITSI Implementation: Develop and configure IT Service Intelligence (ITSI) modules, including KPI creation, service trees, and notable event aggregation.
- SIEM Development: Design, implement, and optimize Splunk SIEM solutions for threat detection, security monitoring, and log analysis.
- Dashboard & Visualization: Create advanced dashboards, reports, and visualizations using Splunk SPL (Search Processing Language).
- Data Ingestion & Parsing: Develop data onboarding, parsing, and field extractions from various log sources, including cloud and on-prem infrastructure.
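As a small, hypothetical illustration of programmatic Splunk work of this kind (the host, credentials, index, and SPL search string are placeholders; the splunk-sdk Python package is an assumption, not something specified in the posting):

```python
# Hypothetical sketch: run a one-shot SPL search via the Splunk Python SDK.
import splunklib.client as client
import splunklib.results as results

service = client.connect(
    host="localhost", port=8089, username="admin", password="***"
)

# Simple SPL aggregation: event counts by sourcetype over the last hour.
stream = service.jobs.oneshot(
    "search index=main earliest=-1h | stats count by sourcetype"
)

for row in results.ResultsReader(stream):
    print(row)
```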

Posted 2 months ago

Apply

5.0 - 9.0 years

8 - 14 Lacs

Ludhiana

Work from Office

Key Responsibilities:
- Splunk ITSI Implementation: Develop and configure IT Service Intelligence (ITSI) modules, including KPI creation, service trees, and notable event aggregation.
- SIEM Development: Design, implement, and optimize Splunk SIEM solutions for threat detection, security monitoring, and log analysis.
- Dashboard & Visualization: Create advanced dashboards, reports, and visualizations using Splunk SPL (Search Processing Language).
- Data Ingestion & Parsing: Develop data onboarding, parsing, and field extractions from various log sources, including cloud and on-prem infrastructure.

Posted 2 months ago

Apply

8.0 - 10.0 years

27 - 42 Lacs

Chennai

Work from Office

Job Summary: We are seeking a skilled and motivated Backend/Data Engineer with hands-on experience in MongoDB and Neo4j to design and implement data-driven applications. The ideal candidate will be responsible for building robust database systems, integrating complex graph and document-based data models, and collaborating with cross-functional teams.

Experience: 6-12 years

Key Responsibilities:
• Design, implement, and optimize document-based databases using MongoDB.
• Model and manage connected data using Neo4j (Cypher query language).
• Develop RESTful APIs and data services to serve and manipulate data stored in MongoDB and Neo4j.
• Implement data pipelines for data ingestion, transformation, and storage.
• Optimize database performance and ensure data integrity and security.
• Collaborate with frontend developers, data scientists, and product managers.
• Maintain documentation and support for database solutions.

Required Skills:
• Strong proficiency in MongoDB: schema design, indexing, aggregation framework.
• Solid experience with Neo4j: graph modeling, Cypher queries, performance tuning.
• Programming proficiency in Python, Node.js, or Java.
• Familiarity with REST APIs, GraphQL, or gRPC.
• Experience with data modeling (both document and graph models).
• Knowledge of data security, backup, and recovery techniques.

Preferred Skills:
• Experience with Mongoose, Spring Data MongoDB, or Neo4j-OGM.
• Familiarity with data visualization tools (e.g., Neo4j Bloom).
• Experience with Docker, Kubernetes, or other DevOps tools.
• Exposure to other databases (e.g., PostgreSQL, Redis).
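By way of illustration for the MongoDB aggregation-framework skills mentioned above, here is a small hypothetical pipeline sketch with pymongo (the connection string, database, collection, and field names are placeholders):

```python
# Hypothetical sketch: group order documents by customer with a MongoDB
# aggregation pipeline. Database, collection, and field names are placeholders.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
orders = client["shop"]["orders"]

pipeline = [
    {"$match": {"status": "completed"}},                                  # filter
    {"$group": {"_id": "$customer_id", "total_spent": {"$sum": "$amount"}}},
    {"$sort": {"total_spent": -1}},
    {"$limit": 10},
]

for doc in orders.aggregate(pipeline):
    print(doc["_id"], doc["total_spent"])
```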

Posted 2 months ago

Apply

4.0 - 9.0 years

16 - 27 Lacs

Hyderabad, Bengaluru

Work from Office

Preferred candidate profile:
- Strong knowledge of and experience with the Power BI ecosystem (Power BI Desktop, Power Query, DAX, Power BI Service, etc.).
- Able to propose and design high-performing Power BI data models to cater to the client's various functional areas.
- Able to design and develop detailed Power BI reports and use various visualizations to build summary and detailed reports.
- Very strong with DAX functions; able to create complex DAX queries and M queries.
- Able to provide quick solutions to issues raised by business users.
- Proven capability in using Power BI features such as bookmarks, drill-through, and query merge.
- 4+ years of experience in data analysis, developing business intelligence and reporting in a medium to large corporate environment, with detailed knowledge of BI functions such as analytics, data modeling and data mining, reporting, report conversion, and data cleansing.
- Strong knowledge and experience in analyzing and designing data models to meet reporting requirements.
- Experience designing and developing visually engaging and informative dashboards and KPIs (Key Performance Indicators) using Power BI and MS SSRS.
- Experience with Microsoft Azure data analytics tools such as Azure Data Factory, Azure Synapse, Azure Data Lake, and DevOps.
- Able to understand and document requirements from business users.
- Able to optimize large datasets and reports built on top of an Oracle database.
- Able to suggest architectural changes to the client to reduce the load on the Power BI service.
- Aware of the Power BI licensing mechanism and able to work out the optimal licensing strategy to reduce cost for the client.
- Strong with Oracle queries and able to suggest database table optimizations for faster retrieval of records.
- Knowledge of Tableau is an added advantage.
- Knowledge of the ETL process (Informatica is an added advantage).
- Knowledge of SAP BW is an added advantage.

Posted 2 months ago

Apply

4.0 - 8.0 years

10 - 20 Lacs

Kolkata, Gurugram, Bengaluru

Work from Office

Job Opportunity for GCP Data Engineer

Role: Data Engineer
Location: Gurugram / Bangalore / Kolkata (5 days work from office)
Experience: 4+ years

Key Skills:
- Data Analysis / Data Preparation - Expert
- Dataset Creation / Data Visualization - Expert
- Data Quality Management - Advanced
- Data Engineering - Advanced
- Programming / Scripting - Intermediate
- Data Storytelling - Intermediate
- Business Analysis / Requirements Analysis - Intermediate
- Data Dashboards - Foundation
- Business Intelligence Reporting - Foundation
- Database Systems - Foundation
- Agile Methodologies / Decision Support - Foundation

Technical Skills:
• Cloud - GCP - Expert
• Database systems (SQL and NoSQL / BigQuery / DBMS) - Expert
• Data warehousing solutions - Advanced
• ETL tools - Advanced
• Data APIs - Advanced
• Python, Java, Scala, etc. - Intermediate
• Basic understanding of distributed systems - Foundation
• Some knowledge of algorithms and optimal data structures for analytics - Foundation
• Soft skills and time management skills - Foundation
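As an illustration of the GCP/BigQuery side of this skill set (the project, dataset, table, and columns are placeholders, not from the posting), here is a minimal query sketch with the google-cloud-bigquery client:

```python
# Hypothetical sketch: run a BigQuery query and iterate over the result rows.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

query = """
    SELECT country, COUNT(*) AS orders
    FROM `my-project.sales_dataset.orders`   -- placeholder table
    WHERE order_date >= '2024-01-01'
    GROUP BY country
    ORDER BY orders DESC
    LIMIT 10
"""

query_job = client.query(query)        # starts the query job
for row in query_job.result():         # waits for completion and streams rows
    print(row["country"], row["orders"])
```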

Posted 2 months ago

Apply

4.0 - 8.0 years

10 - 20 Lacs

Pune, Delhi / NCR, Mumbai (All Areas)

Hybrid

Job Title: Data Engineer - Ingestion, Storage & Streaming (Confluent Kafka)

Job Summary: As a Data Engineer specializing in ingestion, storage, and streaming, you will design, implement, and maintain robust, scalable, and high-performance data pipelines for the efficient flow of data through our systems. You will work with Confluent Kafka to build real-time data streaming platforms, ensuring high availability and fault tolerance. You will also ensure that data is ingested, stored, and processed efficiently and in real time to provide immediate insights.

Key Responsibilities:

Kafka-Based Streaming Solutions:
- Design, implement, and manage scalable and fault-tolerant data streaming platforms using Confluent Kafka.
- Develop real-time data streaming applications to support business-critical processes.
- Implement Kafka producers and consumers for ingesting data from various sources.
- Handle message brokering, processing, and event streaming within the platform.

Ingestion & Data Integration:
- Build efficient data ingestion pipelines to bring real-time and batch data from various data sources into Kafka.
- Ensure smooth data integration across Kafka topics and handle multi-source data feeds.
- Develop and optimize connectors for data ingestion from diverse systems (e.g., databases, external APIs, cloud storage).

Data Storage and Management:
- Manage and optimize data storage solutions in conjunction with Kafka, including topics, partitions, retention policies, and data compression.
- Work with distributed storage technologies to store large volumes of structured and unstructured data, ensuring accessibility and compliance.
- Implement strategies for schema management, data versioning, and data governance.

Data Streaming & Processing:
- Leverage Kafka Streams and other stream processing frameworks (e.g., Apache Flink, ksqlDB) to process real-time data and provide immediate analytics.
- Build and optimize data processing pipelines to transform, filter, aggregate, and enrich streaming data.

Monitoring, Optimization, and Security:
- Set up and manage monitoring tools to track the performance of Kafka clusters, ingestion, and streaming pipelines.
- Troubleshoot and resolve issues related to data flows, latency, and failures.
- Ensure data security and compliance by enforcing appropriate data access policies and encryption techniques.

Collaboration and Documentation:
- Collaborate with data scientists, analysts, and other engineers to align data systems with business objectives.
- Document streaming architecture, pipeline workflows, and data governance processes to ensure system reliability and scalability.
- Provide regular updates to stakeholders on streaming and data ingestion pipeline performance and improvements.

Required Skills & Qualifications:

Experience:
- 3+ years of experience in data engineering, with a strong focus on Kafka, data streaming, ingestion, and storage solutions.
- Hands-on experience with Confluent Kafka, Kafka Streams, and related Kafka ecosystem tools.
- Experience with stream processing and real-time analytics frameworks (e.g., ksqlDB, Apache Flink).

Technical Skills:
- Expertise in Kafka Connect, Kafka Streams, and Kafka producer/consumer APIs.
- Proficient in data ingestion and integration techniques from diverse sources (databases, APIs, etc.).
- Strong knowledge of cloud data storage and distributed systems.
- Experience with programming languages like Java, Scala, or Python for Kafka integration and stream processing.
- Familiarity with tools such as Apache Spark, Flink, Hadoop, or other data processing frameworks.
- Experience with containerization and orchestration tools such as Docker and Kubernetes.
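To make the producer/consumer responsibilities above concrete, here is a minimal hypothetical sketch with the confluent-kafka Python client (the broker address, topic name, consumer group, and payload are placeholders, not from the posting):

```python
# Hypothetical sketch: produce and consume JSON events with confluent-kafka.
import json
from confluent_kafka import Producer, Consumer

BROKER = "localhost:9092"   # placeholder bootstrap server
TOPIC = "orders.raw"        # placeholder topic name

# Producer: publish one event to the topic.
producer = Producer({"bootstrap.servers": BROKER})
producer.produce(TOPIC, key="order-1", value=json.dumps({"order_id": 1, "amount": 99.5}))
producer.flush()

# Consumer: read events from the beginning of the topic.
consumer = Consumer({
    "bootstrap.servers": BROKER,
    "group.id": "ingestion-demo",       # placeholder consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])

msg = consumer.poll(timeout=10.0)
if msg is not None and msg.error() is None:
    print(msg.key(), json.loads(msg.value()))
consumer.close()
```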

Posted 2 months ago

Apply

5.0 - 8.0 years

9 - 14 Lacs

Bengaluru

Work from Office

ETL Data Engineer - Tech Lead
Bangalore, India | Information Technology | 16748

Overview
We are seeking a skilled and experienced Data Engineer who will play a vital role in supporting data discovery, creating design documents, data ingestion/migration, creating data pipelines, creating data marts, and managing and monitoring data using a tech stack of Azure, SQL, Python, PySpark, Airflow, and Snowflake.

Responsibilities
1. Data Discovery: Collaborate with source teams, gather complete details of data sources, and create a design diagram.
2. Data Ingestion/Migration: Collaborate with cross-functional teams to ingest/migrate data from various sources to the staging area. Develop and implement efficient data migration strategies, ensuring data integrity and security throughout the process.
3. Data Pipeline Development: Design, develop, and maintain robust data pipelines that extract, transform, and load (ETL) data from different sources into GCP. Implement data quality checks and ensure scalability, reliability, and performance of the pipelines.
4. Data Management: Build and maintain data models and schemas, ensuring optimal storage, organization, and accessibility of data. Collaborate with the requirements team to understand their data requirements and provide solutions by creating data marts to meet their needs.
5. Performance Optimization: Identify and resolve performance bottlenecks within the data pipelines and data services. Optimize queries, job configurations, and data processing techniques to improve overall system efficiency.
6. Data Governance and Security: Implement data governance policies, access controls, and data security measures to ensure compliance with regulatory requirements and protect sensitive data. Monitor and troubleshoot data-related issues, ensuring high availability and reliability of data systems.
7. Documentation and Collaboration: Create comprehensive technical documentation, including data flow diagrams, system architecture, and standard operating procedures. Collaborate with cross-functional teams, analysts, and software engineers to understand their requirements and provide technical expertise.

Requirements
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- Proven experience as a Data Engineer technical lead, with an output-driven focus.
- Strong knowledge and hands-on experience with Azure, SQL, Python, PySpark, Airflow, Snowflake, and related tools.
- Proficiency in data processing and pipeline development.
- Solid understanding of data modeling, database design, and ETL principles.
- Experience with data migration projects, including data extraction, transformation, and loading.
- Familiarity with data governance, security, and compliance practices.
- Strong problem-solving skills and ability to work in a fast-paced, collaborative environment.
- Excellent communication and interpersonal skills, with the ability to articulate technical concepts to non-technical stakeholders.
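As a small illustration of the automated data-quality checks mentioned in the responsibilities (the staging path, column names, and threshold are hypothetical), a PySpark sketch that counts nulls per column before a load is allowed to proceed might look like this:

```python
# Hypothetical sketch: simple null-count data-quality gate in PySpark.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-check").getOrCreate()
df = spark.read.parquet("/staging/sales/")   # placeholder staging path

# Count nulls for every column in a single pass over the data.
null_counts = df.select(
    [F.count(F.when(F.col(c).isNull(), c)).alias(c) for c in df.columns]
).collect()[0].asDict()

bad_columns = {c: n for c, n in null_counts.items() if n > 0}
if bad_columns:
    raise ValueError(f"Null values found before load: {bad_columns}")
```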

Posted 2 months ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Bengaluru

Work from Office

At BCE Global Tech, immerse yourself in exciting projects that are shaping the future of both consumer and enterprise telecommunications. This involves building innovative mobile apps to enhance user experiences and enable seamless connectivity on the go. Thrive in diverse roles like Full Stack Developer, Backend Developer, UI/UX Designer, DevOps Engineer, Cloud Engineer, Data Science Engineer, and Scrum Master, at a workplace that encourages you to freely share your bold and different ideas. If you are passionate about technology and eager to make a difference, we want to hear from you! Apply now to join our dynamic team in Bengaluru.

ETL DataStage Specialist
Join our dynamic team as an ETL DataStage Specialist. In this role, you'll design, develop, and maintain robust ETL processes to ensure seamless data integration and transformation. Your expertise will drive data quality, performance optimization, and innovation within our data infrastructure. Be a key player in delivering accurate, timely, and valuable insights to support informed business decisions.

- 5+ years of experience as an ETL Developer using ETL tools
- 5+ years of experience working with relational databases
- 3+ years of exposure to BI tools (e.g., Power BI, MicroStrategy)
- 3+ years of experience working with high-volume data ingestion
- Exposure to a fourth-generation programming language such as Python
- Capable of working as an individual contributor and as part of an agile team
- Motivated to drive ETL best practices

Required Skills
- ETL tools (IBM DataStage an asset) - 4+ years
- Good knowledge of relational databases - 4+ years
- Knowledge of public cloud - 2+ years
- Knowledge of BI tools - 3+ years
- Knowledge of 4GL programming languages - 1+ years

Education Background: Computer Science or Engineering degree/diploma, or equivalent experience
Working Hours: 8:30 AM - 5:00 PM EST

What We Offer
- Competitive salaries and comprehensive health benefits
- Flexible work hours and remote work options
- Professional development and training opportunities
- A supportive and inclusive work environment

Posted 2 months ago

Apply

5.0 - 9.0 years

8 - 14 Lacs

Nagpur

Work from Office

Key Responsibilities:
- Splunk ITSI Implementation: Develop and configure IT Service Intelligence (ITSI) modules, including KPI creation, service trees, and notable event aggregation.
- SIEM Development: Design, implement, and optimize Splunk SIEM solutions for threat detection, security monitoring, and log analysis.
- Dashboard & Visualization: Create advanced dashboards, reports, and visualizations using Splunk SPL (Search Processing Language).
- Data Ingestion & Parsing: Develop data onboarding, parsing, and field extractions from various log sources, including cloud and on-prem infrastructure.

Posted 2 months ago

Apply

5.0 - 9.0 years

8 - 14 Lacs

Bengaluru

Work from Office

Key Responsibilities:
- Splunk ITSI Implementation: Develop and configure IT Service Intelligence (ITSI) modules, including KPI creation, service trees, and notable event aggregation.
- SIEM Development: Design, implement, and optimize Splunk SIEM solutions for threat detection, security monitoring, and log analysis.
- Dashboard & Visualization: Create advanced dashboards, reports, and visualizations using Splunk SPL (Search Processing Language).
- Data Ingestion & Parsing: Develop data onboarding, parsing, and field extractions from various log sources, including cloud and on-prem infrastructure.

Posted 2 months ago

Apply

5.0 - 9.0 years

8 - 14 Lacs

Lucknow

Work from Office

Key Responsibilities:
- Splunk ITSI Implementation: Develop and configure IT Service Intelligence (ITSI) modules, including KPI creation, service trees, and notable event aggregation.
- SIEM Development: Design, implement, and optimize Splunk SIEM solutions for threat detection, security monitoring, and log analysis.
- Dashboard & Visualization: Create advanced dashboards, reports, and visualizations using Splunk SPL (Search Processing Language).
- Data Ingestion & Parsing: Develop data onboarding, parsing, and field extractions from various log sources, including cloud and on-prem infrastructure.

Posted 2 months ago

Apply

3.0 - 7.0 years

10 - 20 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid

Salary: 8 to 24 LPA
Experience: 3 to 7 years
Location: Gurgaon (Hybrid)
Notice Period: Immediate to 30 days

Job Title: Senior Data Engineer

Job Summary: We are looking for an experienced Senior Data Engineer with 5+ years of hands-on experience in cloud data engineering platforms, specifically AWS, Databricks, and Azure. The ideal candidate will play a critical role in designing, building, and maintaining scalable data pipelines and infrastructure to support our analytics and business intelligence initiatives.

Key Responsibilities:
- Design, develop, and optimize scalable data pipelines using AWS services (e.g., S3, Glue, Redshift, Lambda).
- Build and maintain ETL/ELT workflows leveraging Databricks and Apache Spark for processing large datasets.
- Work extensively with Azure data services such as Azure Data Lake, Azure Synapse, Azure Data Factory, and Azure Databricks.
- Collaborate with data scientists, analysts, and stakeholders to understand data requirements and deliver high-quality data solutions.
- Ensure data quality, reliability, and security across multiple cloud platforms.
- Monitor and troubleshoot data pipelines, implement performance tuning, and optimize resource usage.
- Implement best practices for data governance, metadata management, and documentation.
- Stay current with emerging cloud data technologies and industry trends to recommend improvements.

Required Qualifications:
- 5+ years of experience in data engineering with strong expertise in AWS, Databricks, and Azure cloud platforms.
- Hands-on experience with big data processing frameworks, particularly Apache Spark.
- Proficient in building complex ETL/ELT pipelines and managing data workflows.
- Strong programming skills in Python, Scala, or Java.
- Experience working with structured and unstructured data in cloud storage solutions.
- Knowledge of SQL and experience with relational and NoSQL databases.
- Familiarity with CI/CD pipelines and DevOps practices in cloud environments.
- Strong analytical and problem-solving skills with the ability to work independently and in teams.

Preferred Skills:
- Experience with containerization and orchestration tools (Docker, Kubernetes).
- Familiarity with machine learning pipelines and tools.
- Knowledge of data modeling, data warehousing, and analytics architecture.
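As a small illustration of the AWS side of such pipelines (the bucket and prefix are placeholders, not from the posting), here is a boto3 sketch that enumerates newly landed raw files in S3 before handing them to Glue or Spark:

```python
# Hypothetical sketch: list raw files landed in S3 for downstream ETL processing.
import boto3

s3 = boto3.client("s3")

paginator = s3.get_paginator("list_objects_v2")
pages = paginator.paginate(
    Bucket="my-data-lake",          # placeholder bucket
    Prefix="raw/sales/2024/",       # placeholder landing prefix
)

for page in pages:
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])   # candidates for the next ETL run
```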

Posted 2 months ago

Apply

3.0 - 7.0 years

10 - 20 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid

Salary: 8 to 24 LPA
Experience: 3 to 7 years
Location: Gurgaon (Hybrid)
Notice Period: Immediate to 30 days

Job Profile: Experienced Data Engineer with a strong foundation in designing, building, and maintaining scalable data pipelines and architectures. Skilled in transforming raw data into clean, structured formats for analytics and business intelligence. Proficient in modern data tools and technologies such as SQL, T-SQL, Python, Databricks, and cloud platforms (Azure). Adept at data wrangling, modeling, ETL/ELT development, and ensuring data quality, integrity, and security. A collaborative team player with a track record of enabling data-driven decision-making across business units.

As a Data Engineer, the candidate will work on assignments for one of our utilities clients. Collaborating with cross-functional teams and stakeholders involves gathering data requirements, aligning business goals, and translating them into scalable data solutions. The role includes working closely with data analysts, scientists, and business users to understand needs, designing robust data pipelines, and ensuring data is accessible, reliable, and well-documented. Regular communication, iterative feedback, and joint problem-solving are key to delivering high-impact, data-driven outcomes that support organizational objectives. This position requires a proven track record of transforming processes and driving customer value and cost savings, with experience running end-to-end analytics for large-scale organizations.

Responsibilities:
- Design, build, and maintain scalable data pipelines to support analytics, reporting, and advanced modeling needs.
- Collaborate with consultants, analysts, and clients to understand data requirements and translate them into effective data solutions.
- Ensure data accuracy, quality, and integrity through validation, cleansing, and transformation processes.
- Develop and optimize data models, ETL workflows, and database architectures across cloud and on-premises environments.
- Support data-driven decision-making by delivering reliable, well-structured datasets and enabling self-service analytics.
- Provide seamless integration with cloud platforms (Azure), making it easy to build and deploy end-to-end data pipelines in the cloud.
- Use scalable Databricks clusters to handle large datasets and complex computations, optimizing performance and managing cost.

Must have:
- Client engagement experience and collaboration with cross-functional teams.
- Data engineering background in Databricks.
- Capable of working effectively as an individual contributor or in collaborative team environments.
- Effective communication and thought leadership with a proven record.

Candidate Profile:
- Bachelor's/Master's degree in economics, mathematics, computer science/engineering, operations research, or related analytics areas.
- 3+ years of experience, which must be in data engineering.
- Hands-on experience with SQL, Python, Databricks, and cloud platforms such as Azure.
- Prior experience managing and delivering end-to-end projects.
- Outstanding written and verbal communication skills.
- Able to work in a fast-paced, continuously evolving environment and ready to take on uphill challenges.
- Able to understand cross-cultural differences and work with clients across the globe.

Posted 2 months ago

Apply

5.0 - 9.0 years

8 - 14 Lacs

Jaipur

Work from Office

Key Responsibilities:
- Splunk ITSI Implementation: Develop and configure IT Service Intelligence (ITSI) modules, including KPI creation, service trees, and notable event aggregation.
- SIEM Development: Design, implement, and optimize Splunk SIEM solutions for threat detection, security monitoring, and log analysis.
- Dashboard & Visualization: Create advanced dashboards, reports, and visualizations using Splunk SPL (Search Processing Language).
- Data Ingestion & Parsing: Develop data onboarding, parsing, and field extractions from various log sources, including cloud and on-prem infrastructure.

Posted 2 months ago

Apply

5.0 - 9.0 years

8 - 14 Lacs

Surat

Work from Office

Key Responsibilities:
- Splunk ITSI Implementation: Develop and configure IT Service Intelligence (ITSI) modules, including KPI creation, service trees, and notable event aggregation.
- SIEM Development: Design, implement, and optimize Splunk SIEM solutions for threat detection, security monitoring, and log analysis.
- Dashboard & Visualization: Create advanced dashboards, reports, and visualizations using Splunk SPL (Search Processing Language).
- Data Ingestion & Parsing: Develop data onboarding, parsing, and field extractions from various log sources, including cloud and on-prem infrastructure.

Posted 2 months ago

Apply

5.0 - 9.0 years

8 - 14 Lacs

Chandigarh

Work from Office

Key Responsibilities:
- Splunk ITSI Implementation: Develop and configure IT Service Intelligence (ITSI) modules, including KPI creation, service trees, and notable event aggregation.
- SIEM Development: Design, implement, and optimize Splunk SIEM solutions for threat detection, security monitoring, and log analysis.
- Dashboard & Visualization: Create advanced dashboards, reports, and visualizations using Splunk SPL (Search Processing Language).
- Data Ingestion & Parsing: Develop data onboarding, parsing, and field extractions from various log sources, including cloud and on-prem infrastructure.

Posted 2 months ago

Apply

5.0 - 9.0 years

8 - 14 Lacs

Hyderabad

Work from Office

Key Responsibilities:
- Splunk ITSI Implementation: Develop and configure IT Service Intelligence (ITSI) modules, including KPI creation, service trees, and notable event aggregation.
- SIEM Development: Design, implement, and optimize Splunk SIEM solutions for threat detection, security monitoring, and log analysis.
- Dashboard & Visualization: Create advanced dashboards, reports, and visualizations using Splunk SPL (Search Processing Language).
- Data Ingestion & Parsing: Develop data onboarding, parsing, and field extractions from various log sources, including cloud and on-prem infrastructure.

Posted 2 months ago

Apply

5.0 - 9.0 years

8 - 14 Lacs

Coimbatore

Work from Office

Key Responsibilities:
- Splunk ITSI Implementation: Develop and configure IT Service Intelligence (ITSI) modules, including KPI creation, service trees, and notable event aggregation.
- SIEM Development: Design, implement, and optimize Splunk SIEM solutions for threat detection, security monitoring, and log analysis.
- Dashboard & Visualization: Create advanced dashboards, reports, and visualizations using Splunk SPL (Search Processing Language).
- Data Ingestion & Parsing: Develop data onboarding, parsing, and field extractions from various log sources, including cloud and on-prem infrastructure.

Posted 2 months ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Azure Data Services
Good-to-have skills: NA
Minimum Experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your typical day will involve collaborating with teams, making team decisions, and providing solutions to problems for your immediate team and across multiple teams. You will engage with multiple teams and contribute to key decisions, ensuring the successful performance of your team and delivering high-quality applications.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Manage and prioritize application development tasks.
- Ensure applications meet business process and application requirements.
- Perform code reviews and provide feedback to team members.

Professional & Technical Skills:
- Must-have skills: proficiency in Microsoft Azure Data Services.
- Experience with cloud-based application development.
- Strong understanding of data storage and management in Azure.
- Hands-on experience with Azure data services such as Azure SQL Database, Azure Cosmos DB, and Azure Data Lake Storage.
- Experience with data integration and ETL processes in Azure.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure Data Services.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Qualifications: 15 years of full-time education
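For a concrete feel of one of the Azure data services named above, here is a minimal, hypothetical azure-cosmos sketch that upserts and queries a document (the endpoint, key, database, container, and fields are placeholders, not from the posting):

```python
# Hypothetical sketch: upsert and query an item in Azure Cosmos DB (SQL API).
from azure.cosmos import CosmosClient

client = CosmosClient(
    "https://my-account.documents.azure.com:443/",   # placeholder endpoint
    credential="***",                                 # placeholder account key
)
container = client.get_database_client("appdb").get_container_client("orders")

# Upsert a small order document.
container.upsert_item({"id": "order-1", "customerId": "c-42", "amount": 99.5})

# Query orders for one customer across partitions.
items = container.query_items(
    query="SELECT * FROM c WHERE c.customerId = @cid",
    parameters=[{"name": "@cid", "value": "c-42"}],
    enable_cross_partition_query=True,
)
for item in items:
    print(item["id"], item["amount"])
```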

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies