4.0 - 9.0 years
3 - 7 Lacs
Pune
Work from Office
Req ID: 324609
We are currently seeking a Data Engineer to join our team in Pune, Maharashtra (IN-MH), India (IN).

Key Responsibilities:
- Design and implement tailored data solutions to meet customer needs and use cases, spanning streaming, data lakes, analytics, and beyond within a dynamically evolving technical stack.
- Provide thought leadership by recommending the most appropriate technologies and solutions for a given use case, covering the entire spectrum from the application layer to infrastructure.
- Demonstrate strong coding skills in languages such as Python, Java, and Scala to move solutions into production efficiently while prioritizing performance, security, scalability, and robust data integrations.
- Collaborate across diverse technical stacks, including Cloudera, Databricks, Snowflake, and AWS.
- Develop and deliver detailed presentations to communicate complex technical concepts effectively.
- Produce comprehensive solution documentation, including sequence diagrams, class hierarchies, and logical system views.
- Follow Agile practices throughout the solution development process.
- Design, build, and deploy databases and data stores to support organizational requirements.

Basic Qualifications:
- 4+ years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects.
- 2+ years of experience leading a team on data-related projects to develop end-to-end technical solutions.
- Experience with Informatica, Python, Databricks, and Azure data engineering.
- Ability to travel at least 25%.

Preferred Skills:
- Production experience with core data platforms such as Snowflake, Databricks, AWS, Azure, GCP, and Hadoop.
- Hands-on knowledge of cloud and distributed data storage, including HDFS, S3, ADLS, GCS, Kudu, ElasticSearch/Solr, Cassandra, or other NoSQL storage systems.
- Strong understanding of data integration technologies, including Informatica, Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Data Migration Services, Azure Data Factory, and Google Dataproc.
- Professional written and verbal communication skills to convey complex technical concepts effectively.
- Undergraduate or graduate degree preferred.
Posted 6 days ago
4.0 - 9.0 years
3 - 7 Lacs
Pune
Work from Office
Req ID: 324653
We are currently seeking a Data Engineer to join our team in Pune, Maharashtra (IN-MH), India (IN).

Key Responsibilities:
- Design and implement tailored data solutions to meet customer needs and use cases, spanning streaming, data lakes, analytics, and beyond within a dynamically evolving technical stack.
- Provide thought leadership by recommending the most appropriate technologies and solutions for a given use case, covering the entire spectrum from the application layer to infrastructure.
- Demonstrate strong coding skills in languages such as Python, Java, and Scala to move solutions into production efficiently while prioritizing performance, security, scalability, and robust data integrations.
- Collaborate across diverse technical stacks, including Cloudera, Databricks, Snowflake, and AWS.
- Develop and deliver detailed presentations to communicate complex technical concepts effectively.
- Produce comprehensive solution documentation, including sequence diagrams, class hierarchies, and logical system views.
- Follow Agile practices throughout the solution development process.
- Design, build, and deploy databases and data stores to support organizational requirements.

Basic Qualifications:
- 4+ years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects.
- 2+ years of experience leading a team on data-related projects to develop end-to-end technical solutions.
- Experience with Informatica, Python, Databricks, and Azure data engineering.
- Ability to travel at least 25%.

Preferred Skills:
- Production experience with core data platforms such as Snowflake, Databricks, AWS, Azure, GCP, and Hadoop.
- Hands-on knowledge of cloud and distributed data storage, including HDFS, S3, ADLS, GCS, Kudu, ElasticSearch/Solr, Cassandra, or other NoSQL storage systems.
- Strong understanding of data integration technologies, including Informatica, Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Data Migration Services, Azure Data Factory, and Google Dataproc.
- Professional written and verbal communication skills to convey complex technical concepts effectively.
- Undergraduate or graduate degree preferred.
Posted 6 days ago
4.0 - 9.0 years
3 - 7 Lacs
Chennai
Work from Office
Req ID: 324631
We are currently seeking a Data Engineer to join our team in Chennai, Tamil Nadu (IN-TN), India (IN).

Key Responsibilities:
- Design and implement tailored data solutions to meet customer needs and use cases, spanning streaming, data lakes, analytics, and beyond within a dynamically evolving technical stack.
- Provide thought leadership by recommending the most appropriate technologies and solutions for a given use case, covering the entire spectrum from the application layer to infrastructure.
- Demonstrate strong coding skills in languages such as Python, Java, and Scala to move solutions into production efficiently while prioritizing performance, security, scalability, and robust data integrations.
- Collaborate across diverse technical stacks, including Cloudera, Databricks, Snowflake, and AWS.
- Develop and deliver detailed presentations to communicate complex technical concepts effectively.
- Produce comprehensive solution documentation, including sequence diagrams, class hierarchies, and logical system views.
- Follow Agile practices throughout the solution development process.
- Design, build, and deploy databases and data stores to support organizational requirements.

Basic Qualifications:
- 4+ years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects.
- 2+ years of experience leading a team on data-related projects to develop end-to-end technical solutions.
- Experience with Informatica, Python, Databricks, and Azure data engineering.
- Ability to travel at least 25%.

Preferred Skills:
- Production experience with core data platforms such as Snowflake, Databricks, AWS, Azure, GCP, and Hadoop.
- Hands-on knowledge of cloud and distributed data storage, including HDFS, S3, ADLS, GCS, Kudu, ElasticSearch/Solr, Cassandra, or other NoSQL storage systems.
- Strong understanding of data integration technologies, including Informatica, Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Data Migration Services, Azure Data Factory, and Google Dataproc.
- Professional written and verbal communication skills to convey complex technical concepts effectively.
- Undergraduate or graduate degree preferred.
Posted 6 days ago
4.0 - 9.0 years
3 - 7 Lacs
Chennai
Work from Office
Req ID: 324632
We are currently seeking a Data Engineer to join our team in Chennai, Tamil Nadu (IN-TN), India (IN).

Key Responsibilities:
- Design and implement tailored data solutions to meet customer needs and use cases, spanning streaming, data lakes, analytics, and beyond within a dynamically evolving technical stack.
- Provide thought leadership by recommending the most appropriate technologies and solutions for a given use case, covering the entire spectrum from the application layer to infrastructure.
- Demonstrate strong coding skills in languages such as Python, Java, and Scala to move solutions into production efficiently while prioritizing performance, security, scalability, and robust data integrations.
- Collaborate across diverse technical stacks, including Cloudera, Databricks, Snowflake, and AWS.
- Develop and deliver detailed presentations to communicate complex technical concepts effectively.
- Produce comprehensive solution documentation, including sequence diagrams, class hierarchies, and logical system views.
- Follow Agile practices throughout the solution development process.
- Design, build, and deploy databases and data stores to support organizational requirements.

Basic Qualifications:
- 4+ years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects.
- 2+ years of experience leading a team on data-related projects to develop end-to-end technical solutions.
- Experience with Informatica, Python, Databricks, and Azure data engineering.
- Ability to travel at least 25%.

Preferred Skills:
- Production experience with core data platforms such as Snowflake, Databricks, AWS, Azure, GCP, and Hadoop.
- Hands-on knowledge of cloud and distributed data storage, including HDFS, S3, ADLS, GCS, Kudu, ElasticSearch/Solr, Cassandra, or other NoSQL storage systems.
- Strong understanding of data integration technologies, including Informatica, Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Data Migration Services, Azure Data Factory, and Google Dataproc.
- Professional written and verbal communication skills to convey complex technical concepts effectively.
- Undergraduate or graduate degree preferred.
Posted 6 days ago
8.0 - 13.0 years
35 - 50 Lacs
Mumbai
Work from Office
Hiring a Big Data Lead with 8+ years of experience for the US shift.

Must Have:
- Big Data: Spark, Hadoop, Kafka, Hive, Flink
- Backend: Python, Scala
- NoSQL: MongoDB, Cassandra
- Cloud: AWS/Azure/GCP, Snowflake, Databricks
- Docker, Kubernetes, CI/CD

Required Candidate Profile:
- Excellent at mentoring/training in Big Data: HDFS, YARN, Airflow, Hive, MapReduce, HBase, Kafka, ETL/ELT, real-time streaming, and data modeling (a minimal streaming sketch follows this posting)
- Immediate joiner is a plus
- Excellent communication skills
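As a hedged illustration of the real-time streaming skills listed above, the sketch below shows one possible shape of a Spark Structured Streaming job that consumes a Kafka topic and lands parsed events in a data lake. The broker address, topic name, schema, and paths are hypothetical placeholders; the posting does not prescribe this exact approach.

```python
# Minimal Spark Structured Streaming sketch: Kafka -> data lake (illustrative only).
# Broker, topic, schema, and paths are hypothetical placeholders.
# Requires the spark-sql-kafka connector package on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-streaming-sketch").getOrCreate()

# Assumed JSON payload schema for the example events.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read a stream of events from a (hypothetical) Kafka topic.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "example-events")
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Write parsed events to Parquet with checkpointing for fault tolerance.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://example-bucket/streaming/events/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/events/")
    .outputMode("append")
    .start()
)

query.awaitTermination()
```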
Posted 1 week ago
8 - 13 years
25 - 30 Lacs
Chennai, Pune, Delhi
Work from Office
Join phData, a dynamic and innovative leader in the modern data stack. We partner with major cloud data platforms like Snowflake, AWS, Azure, GCP, Fivetran, Pinecone, Glean, and dbt to deliver cutting-edge services and solutions. We're committed to helping global enterprises overcome their toughest data challenges.

phData is a remote-first global company with employees based in the United States, Latin America, and India. We celebrate the culture of each of our team members and foster a community of technological curiosity, ownership, and trust. Even though we're growing extremely fast, we maintain a casual, exciting work environment. We hire top performers and give you the autonomy to deliver results.

- 5x Snowflake Partner of the Year (2020, 2021, 2022, 2023, 2024)
- Fivetran, dbt, Alation, and Matillion Partner of the Year
- #1 Partner in Snowflake Advanced Certifications
- 600+ Expert Cloud Certifications (Sigma, AWS, Azure, Dataiku, etc.)
- Recognized as an award-winning workplace in the US, India, and LATAM

Required Experience:
- At least 4+ years of experience as a Software Engineer, Data Engineer, or Data Analyst
- Ability to develop end-to-end technical solutions into production and to help ensure performance, security, scalability, and robust data integration
- Programming expertise in Java, Python, and/or Scala
- Core cloud data platforms, including Snowflake, AWS, Azure, Databricks, and GCP
- SQL and the ability to write, debug, and optimize SQL queries
- Client-facing written and verbal communication skills and experience
- Ability to create and deliver detailed presentations
- Detailed solution documentation (e.g., POCs and roadmaps, sequence diagrams, class hierarchies, logical system views, etc.)
- 4-year Bachelor's degree in Computer Science or a related field

Preferred (any of the following):
- Production experience in core data platforms: Snowflake, AWS, Azure, GCP, Hadoop, Databricks
- Cloud and distributed data storage: S3, ADLS, HDFS, GCS, Kudu, ElasticSearch/Solr, Cassandra, or other NoSQL storage systems
- Data integration technologies: Spark, Kafka, event/streaming, StreamSets, Matillion, Fivetran, NiFi, AWS Data Migration Services, Azure Data Factory, Informatica Intelligent Cloud Services (IICS), Google Dataproc, or other data integration technologies
- Multiple data sources (e.g., queues, relational databases, files, search, APIs)
- Complete software development lifecycle experience, including design, documentation, implementation, testing, and deployment
- Automated data transformation and data curation: dbt, Spark, Spark Streaming, automated pipelines
- Workflow management and orchestration: Airflow, AWS Managed Airflow, Luigi, NiFi (a minimal orchestration sketch follows this listing)

Why phData? We Offer:
- Remote-first workplace
- Medical insurance for self and family
- Medical insurance for parents
- Term life and personal accident cover
- Wellness allowance
- Broadband reimbursement
- Continuous learning and growth opportunities to enhance your skills and expertise
- Other benefits, including paid certifications, a professional development allowance, and bonuses for creating company-approved content

phData celebrates diversity and is committed to creating an inclusive environment for all employees. Our approach helps us build a winning team that represents a variety of backgrounds, perspectives, and abilities. So, regardless of how your diversity expresses itself, you can find a home here at phData. We are proud to be an equal opportunity employer. We prohibit discrimination and harassment of any kind based on race, color, religion, national origin, sex (including pregnancy), sexual orientation, gender identity, gender expression, age, veteran status, genetic information, disability, or other applicable legally protected characteristics. If you would like to request an accommodation due to a disability, please contact us at People Operations.
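As a hedged sketch of the workflow orchestration skills in the preferred list above (Airflow coordinating automated transformation steps such as Spark jobs or dbt runs), the example below defines a minimal Airflow DAG with two placeholder tasks, assuming Airflow 2.x. The DAG id, schedule, and shell commands are hypothetical and not taken from the posting.

```python
# Minimal Airflow 2.x DAG sketch (illustrative only).
# DAG id, schedule, and commands are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Placeholder ingestion step, e.g. a spark-submit or extraction script.
    ingest = BashOperator(
        task_id="ingest_raw_data",
        bash_command="echo 'spark-submit ingest_job.py (placeholder)'",
    )

    # Placeholder transformation step, e.g. a dbt run against the warehouse.
    transform = BashOperator(
        task_id="run_dbt_models",
        bash_command="echo 'dbt run (placeholder)'",
    )

    # Run transformation only after ingestion completes.
    ingest >> transform
```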
Posted 3 months ago
5 - 10 years
10 - 20 Lacs
Chennai
Work from Office
- Minimum 5-8 years of experience in Hadoop/big data technologies.
- Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, Hive, Pig, Impala, Spark, Kafka, Kudu, Solr).
- Hands-on experience with Python/PySpark.
- Design, develop, and optimize ETL pipelines using Python and PySpark to process and transform large-scale datasets, ensuring performance and scalability on big data platforms (a minimal sketch follows this listing).
- Implement big data solutions for retail banking use cases such as risk analysis, management reporting (time series, vintage curves, executive summaries), and regulatory reporting, while maintaining data accuracy and compliance standards.
- Collaborate with cross-functional teams to integrate data from various sources, troubleshoot production issues, and ensure efficient, reliable data processing operations.
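To illustrate the kind of PySpark ETL work described above, here is a minimal, hedged sketch of a batch pipeline that reads raw data, applies basic cleansing and aggregation, and writes a curated output. The paths, column names, and business rules are hypothetical placeholders, not part of the actual role.

```python
# Minimal PySpark batch ETL sketch (illustrative only).
# Paths, column names, and business rules are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("retail-banking-etl-sketch")
    .getOrCreate()
)

# Extract: read raw transaction data from a (hypothetical) data lake path.
raw = spark.read.parquet("s3a://example-bucket/raw/transactions/")

# Transform: basic cleansing plus a simple monthly aggregation,
# the kind of step a risk or management report might build on.
curated = (
    raw.dropDuplicates(["transaction_id"])
       .filter(F.col("amount").isNotNull())
       .withColumn("txn_month", F.date_trunc("month", F.col("txn_date")))
       .groupBy("account_id", "txn_month")
       .agg(
           F.sum("amount").alias("total_amount"),
           F.count("*").alias("txn_count"),
       )
)

# Load: write the curated output partitioned by month for downstream reporting.
(
    curated.write
    .mode("overwrite")
    .partitionBy("txn_month")
    .parquet("s3a://example-bucket/curated/monthly_account_summary/")
)

spark.stop()
```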
Posted 3 months ago