Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
5.0 - 7.0 years
13 - 15 Lacs
Pune
Work from Office
About us: We are building a modern, scalable, fully automated on-premise data platform designed to handle complex data workflows, including data ingestion, ETL processes, physics-based calculations, and machine learning predictions. Orchestrated with Dagster, our platform integrates with multiple data sources, edge devices, and storage systems. A core principle of our architecture is self-service: granting data scientists, analysts, and engineers granular control over the entire journey of their data assets, as well as empowering teams to modify and extend their data pipelines with minimal friction. We're looking for a hands-on Data Engineer to help develop, maintain, and optimize this platform.

Role & responsibilities:
- Design, develop, and maintain robust data pipelines using Dagster for orchestration
- Build and manage ETL pipelines with Python and SQL
- Optimize performance and reliability of the platform within on-premise infrastructure constraints
- Develop solutions for processing and aggregating data on edge devices, including data filtering, compression, and secure transmission
- Maintain metadata and data lineage; ensure data quality, consistency, and compliance with governance and security policies
- Implement CI/CD workflows for the platform on a local Kubernetes cluster
- Architect the platform with a self-service mindset, including clear abstractions, reusable components, and documentation
- Collaborate with data scientists, analysts, and frontend developers to understand evolving data needs
- Define and maintain clear contracts/interfaces with source systems, ensuring resilience to upstream changes

Preferred candidate profile:
- 5-7 years of experience in database-driven projects or related fields
- 1-2 years of experience with data platforms, orchestration, and big data management
- Proven experience as a Data Engineer or similar role, with a focus on backend data processing and infrastructure
- Hands-on experience with Dagster or similar data orchestration tools (e.g., Airflow, Prefect, Luigi, Databricks)
- Proficiency with SQL and Python
- Strong understanding of data modeling, ETL/ELT best practices, and batch/stream processing
- Familiarity with on-premises deployments and their challenges (e.g., network latency, storage constraints, resource management)
- Experience with version control (Git) and CI/CD practices for data workflows
- Understanding of data governance, access control, and data cataloging
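The posting's core idea, an orchestrator that runs ETL steps in dependency order, can be sketched in plain Python. This is a minimal, hypothetical illustration of the asset-graph model that tools like Dagster implement; the step names and the filtering rule are invented for the example, not taken from the posting.

```python
from graphlib import TopologicalSorter

# Hypothetical three-step ETL pipeline. Each step declares its upstream
# dependencies, mirroring (in miniature) the dependency graph an
# orchestrator such as Dagster resolves before running assets.
def extract():
    # Stand-in for reading from a source system or edge device.
    return [{"sensor": "a", "value": 1.5}, {"sensor": "b", "value": -2.0}]

def transform(rows):
    # Illustrative cleaning rule only: drop negative readings.
    return [r for r in rows if r["value"] >= 0]

def load(rows):
    # Stand-in for a database write: report how many rows were loaded.
    return len(rows)

def run_pipeline():
    # The dependency graph: transform needs extract, load needs transform.
    deps = {"extract": set(), "transform": {"extract"}, "load": {"transform"}}
    results = {}
    # static_order() yields steps in a valid topological order.
    for step in TopologicalSorter(deps).static_order():
        if step == "extract":
            results[step] = extract()
        elif step == "transform":
            results[step] = transform(results["extract"])
        else:
            results[step] = load(results["transform"])
    return results

results = run_pipeline()
```

A real orchestrator adds scheduling, retries, lineage, and observability on top of this ordering, which is why the posting treats Dagster experience as distinct from plain Python scripting.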
Posted 2 weeks ago
6 - 11 years
2 - 2 Lacs
Chennai, Bengaluru, Hyderabad
Work from Office
We are looking for a skilled Data Lead to design, implement, and manage data pipelines and real-time data processing solutions. The ideal candidate will have hands-on experience with cloud platforms, data technologies, and tools like Snowflake, Apache Airflow, Kafka, and real-time streaming technologies.

Key Responsibilities:
- Build and manage scalable data pipelines and real-time streaming data solutions
- Work with cross-functional teams to ensure data is accessible for analytics and business intelligence
- Optimize data workflows for high performance and reliability
- Implement cloud-based solutions using AWS, Azure, or GCP
- Lead and mentor a team of data engineers, ensuring best practices are followed
- Troubleshoot data pipeline issues and improve data system performance

Must-Have Skills:
- Programming: Python (preferred), Java, or Scala
- Cloud: AWS (preferred), Azure, or GCP
- Data Warehousing: Snowflake
- Data Orchestration: Apache Airflow (preferred), Prefect, or Dagster
- Messaging/Streaming: Kafka (preferred), AWS SQS, or Google Cloud Pub/Sub
- Real-Time Processing: Apache Flink (preferred), Apache Spark Streaming, or Kafka Streams
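The real-time processing skills listed above center on windowed aggregation over an event stream. Below is a toy, single-process sketch of a tumbling-window count, the kind of operation engines like Flink or Kafka Streams perform over live streams; the event data and window size are invented for illustration.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs):
    """Count events per key within fixed-size, non-overlapping time windows.

    A minimal stand-in for the tumbling-window aggregations that streaming
    engines (Flink, Kafka Streams, Spark Structured Streaming) run
    continuously over unbounded input.
    """
    counts = defaultdict(int)
    for ts, key in events:
        # Align each event timestamp to the start of its window.
        window_start = ts - (ts % window_secs)
        counts[(window_start, key)] += 1
    return dict(counts)

# Hypothetical (timestamp_seconds, event_type) pairs.
events = [(0, "click"), (3, "click"), (7, "view"), (12, "click")]
counts = tumbling_window_counts(events, window_secs=10)
# Window [0, 10) holds two clicks and one view; window [10, 20) holds one click.
```

Production systems add what this sketch omits: out-of-order events, watermarks, and incremental state stores, which is where the listed tools earn their place.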
Posted 3 months ago
4 - 9 years
6 - 15 Lacs
Bengaluru
Work from Office
Job Purpose and Impact
As a Data Engineer at Cargill you work across the full stack to design, develop, and operate high-performance, data-centric solutions using our comprehensive and modern data capabilities and platforms. You will play a critical role in enabling analytical insights and process efficiencies for Cargill's diverse and complex business environments. You will work in a small team that shares your passion for building innovative, resilient, and high-quality solutions while sharing, learning, and growing together.

Key Accountabilities
- Collaborate with business stakeholders, product owners, and your team on product or solution designs
- Develop robust, scalable, and sustainable data products or solutions utilizing cloud-based technologies
- Provide moderately complex technical support through all phases of the product or solution life cycle
- Perform data analysis, handle data modeling, and configure and develop data pipelines to move and optimize data assets
- Build moderately complex prototypes to test new concepts, contribute ideas for reusable frameworks, components, and data products or solutions, and help promote adoption of new technologies
- Independently solve moderately complex issues with minimal supervision, escalating more complex issues to appropriate staff
- Other duties as assigned

Qualifications
MINIMUM QUALIFICATIONS
- Bachelor's degree in a related field or equivalent experience
- Minimum of two years of related work experience
- Other minimum qualifications may apply

PREFERRED QUALIFICATIONS
- Experience developing modern data architectures, including data warehouses, data lakes, data meshes, and hubs, with associated capabilities including ingestion, governance, modeling, observability, and more
- Experience with data collection and ingestion capabilities, including AWS Glue, Kafka Connect, and others
- Experience with data storage and management of large, heterogeneous datasets, including formats, structures, and cataloging, with tools such as Iceberg, Parquet, Avro, ORC, S3, HDFS, Hive, Kudu, or others
- Experience with transformation and modeling tools, including SQL-based transformation, orchestration, and quality frameworks such as dbt, Apache NiFi, Talend, AWS Glue, Airflow, Dagster, Great Expectations, Oozie, and others
- Experience working in Big Data environments, including tools such as Hadoop and Spark
- Experience working in cloud platforms including AWS, GCP, or Azure
- Experience with streaming and stream-integration or middleware platforms, tools, and architectures such as Kafka, Flink, JMS, or Kinesis
- Strong programming knowledge of SQL, Python, R, Java, Scala, or equivalent
- Proficiency with engineering tooling including Docker, Git, and container orchestration services
- Strong experience working in DevOps models, with a demonstrable understanding of associated best practices for code management, continuous integration, and deployment strategies
- Experience and knowledge of data governance considerations, including quality, privacy, and security implications for data product development and consumption
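Among the quality frameworks this posting names, the basic pattern is declarative expectations checked against data. Here is a toy, hypothetical sketch of that idea in plain Python; the column names and bounds are invented, and real tools like Great Expectations or dbt tests add suites, reporting, and pipeline integration on top of this core check.

```python
def check_range_expectation(rows, column, min_value, max_value):
    """Verify every value in `column` falls within [min_value, max_value].

    A miniature version of the 'expectation' pattern used by data-quality
    frameworks: the rule is declared as data (column + bounds), and the
    result reports which rows violated it rather than raising immediately.
    """
    failures = [
        i for i, row in enumerate(rows)
        if not (min_value <= row[column] <= max_value)
    ]
    return {"success": not failures, "failed_row_indices": failures}

# Hypothetical ingested rows: temperatures expected between -50 and 60.
rows = [{"temp_c": 21.5}, {"temp_c": 999.0}, {"temp_c": -3.2}]
result = check_range_expectation(rows, "temp_c", -50, 60)
# One out-of-range row (index 1), so the check fails but pinpoints it.
```

Returning a structured result instead of raising lets an orchestrator decide whether a failed expectation should halt the pipeline, quarantine rows, or merely alert.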
Posted 3 months ago
5 - 10 years
12 - 22 Lacs
Bengaluru
Hybrid
Greetings from Tech Mahindra! With reference to your profile on the Naukri portal, we are contacting you about a job opportunity for the role of Data Engineer with our organization, Tech Mahindra.

COMPANY PROFILE: Tech Mahindra is an Indian multinational information technology services and consulting company. Website: www.techmahindra.com

Job Details:
Experience: 5 to 9 years
Education: BE/BTech, MCA
Work timings: Normal shift
Location: Bangalore
Work from office: 3 days per week

Required Skills:
- Must have: Hands-on experience in Spark with Scala or Spark with Java
- Must have: Experience with performance tuning, DAG optimization, memory management, and streaming or batch pipelines
- Must have: Experience with orchestration frameworks (Airflow, Oozie, NiFi, or Dagster)
- Nice to have: Working experience with streaming processors (Flink, Apache Storm)

Interested candidates, kindly forward your updated resume with the details below to PR00815335@TechMahindra.com:
- Total years of experience:
- Relevant experience in Java:
- Relevant experience in Spark:
- Relevant experience in Scala:
- Relevant experience in Airflow, Oozie, NiFi, or Dagster:
- Offer amount (if holding any offer):
- Location of offer:
- Reason for looking for another offer:
- Notice period (if serving, LWD):
- Current location:
- Preferred location:
- CTC:
- Expected CTC:
- When are you available for the interview? (Time/Date):
- How soon can you join?

Best Regards,
Praveena Rajappa
Business Associate | RMG, Tech Mahindra
PR00815335@TechMahindra.com
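The "DAG optimization" skill this posting asks about rests on lazy evaluation: chained transformations are fused and computed in one pass rather than materializing intermediates. A minimal, hypothetical analogue using Python generators (the data and operations are invented for illustration; Spark's actual optimizer does far more):

```python
def lazy_pipeline(data):
    """Chain transformations lazily, evaluating only at the terminal action.

    Each generator below is a recipe, not a computed collection, loosely
    analogous to how Spark fuses narrow transformations (map, filter) into
    a single stage instead of writing out intermediate datasets.
    """
    doubled = (x * 2 for x in data)          # no work happens yet
    positives = (x for x in doubled if x > 0)  # still no work
    return sum(positives)                     # one pass pulls data through

total = lazy_pipeline([-1, 2, 3])
```

The practical payoff is memory: neither `doubled` nor `positives` ever exists as a full list, which is the same pressure Spark's stage fusion and memory management address at cluster scale.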
Posted 3 months ago