6.0 - 8.0 years
10 - 15 Lacs
Hyderabad
Hybrid
Mega Walk-in Drive for Lead Software Engineer/Sr Software Engineer - Data Engineer - Python & Hadoop

Your future duties and responsibilities:

Job Overview:
CGI is looking for a talented and motivated Data Engineer with strong expertise in Python, Apache Spark, HDFS, and MongoDB to build and manage scalable, efficient, and reliable data pipelines and infrastructure. You'll play a key role in transforming raw data into actionable insights, working closely with data scientists, analysts, and business teams.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Python and Spark.
- Ingest, process, and transform large datasets from various sources into usable formats.
- Manage and optimize data storage using HDFS and MongoDB.
- Ensure high availability and performance of data infrastructure.
- Implement data quality checks, validations, and monitoring processes.
- Collaborate with cross-functional teams to understand data needs and deliver solutions.
- Write reusable and maintainable code with strong documentation practices.
- Optimize performance of data workflows and troubleshoot bottlenecks.
- Maintain data governance, privacy, and security best practices.

Required qualifications to be successful in this role:
- Minimum 6 years of experience as a Data Engineer or in a similar role.
- Strong proficiency in Python for data manipulation and pipeline development.
- Hands-on experience with Apache Spark for large-scale data processing.
- Experience with HDFS and distributed data storage systems.
- Strong understanding of data architecture, data modeling, and performance tuning.
- Familiarity with version control tools such as Git.
- Experience with workflow orchestration tools (e.g., Airflow, Luigi) is a plus.
- Knowledge of cloud services (AWS, GCP, or Azure) is preferred.
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.

Preferred Skills:
- Experience with containerization (Docker, Kubernetes).
- Knowledge of real-time data streaming tools such as Kafka.
- Familiarity with data visualization tools (e.g., Power BI, Tableau).
- Exposure to Agile/Scrum methodologies.

Skills: Hadoop, Hive, Python, SQL, English

Notice Period: 0-45 days
Prerequisites: copies of Aadhaar card, PAN card, and UAN.
Disclaimer: Selected candidates will initially be required to work from the office for 8 weeks before transitioning to a hybrid model with 2 days of work from the office each week.
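The "data quality checks, validations, and monitoring" responsibility above can be illustrated with a minimal plain-Python sketch. The function and rule names are hypothetical; in practice such rules would run inside a Spark job or a framework like Great Expectations:

```python
# Minimal data-quality validation sketch: each rule returns the failing
# records, and a pipeline stage would quarantine or reject them.

def check_not_null(records, field):
    """Return records where the given field is missing or None."""
    return [r for r in records if r.get(field) is None]

def check_in_range(records, field, lo, hi):
    """Return records whose numeric field falls outside [lo, hi]."""
    return [r for r in records
            if r.get(field) is not None and not (lo <= r[field] <= hi)]

def validate(records):
    """Run all rules; return a dict of rule name -> failing records."""
    return {
        "order_id_not_null": check_not_null(records, "order_id"),
        "amount_in_range": check_in_range(records, "amount", 0, 10_000),
    }

rows = [
    {"order_id": 1, "amount": 250},
    {"order_id": None, "amount": 99},
    {"order_id": 3, "amount": -5},
]
failures = validate(rows)
print(failures["order_id_not_null"])  # the record with a missing order_id
print(failures["amount_in_range"])    # the record with a negative amount
```

The same rule-per-function shape scales to a distributed setting: each check becomes a filter over a DataFrame, and the failing-record counts feed the monitoring process the listing mentions.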
Posted 1 week ago
6.0 - 8.0 years
7 - 17 Lacs
Hyderabad
Work from Office
Lead Analyst/Senior Software Engineer - Data Engineer with Python, Apache Spark, HDFS

Job Overview:
CGI is looking for a talented and motivated Data Engineer with strong expertise in Python, Apache Spark, HDFS, and MongoDB to build and manage scalable, efficient, and reliable data pipelines and infrastructure. You'll play a key role in transforming raw data into actionable insights, working closely with data scientists, analysts, and business teams.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Python and Spark.
- Ingest, process, and transform large datasets from various sources into usable formats.
- Manage and optimize data storage using HDFS and MongoDB.
- Ensure high availability and performance of data infrastructure.
- Implement data quality checks, validations, and monitoring processes.
- Collaborate with cross-functional teams to understand data needs and deliver solutions.
- Write reusable and maintainable code with strong documentation practices.
- Optimize performance of data workflows and troubleshoot bottlenecks.
- Maintain data governance, privacy, and security best practices.

Required qualifications to be successful in this role:
- Minimum 6 years of experience as a Data Engineer or in a similar role.
- Strong proficiency in Python for data manipulation and pipeline development.
- Hands-on experience with Apache Spark for large-scale data processing.
- Experience with HDFS and distributed data storage systems.
- Strong understanding of data architecture, data modeling, and performance tuning.
- Familiarity with version control tools such as Git.
- Experience with workflow orchestration tools (e.g., Airflow, Luigi) is a plus.
- Knowledge of cloud services (AWS, GCP, or Azure) is preferred.
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.

Preferred Skills:
- Experience with containerization (Docker, Kubernetes).
- Knowledge of real-time data streaming tools such as Kafka.
- Familiarity with data visualization tools (e.g., Power BI, Tableau).
- Exposure to Agile/Scrum methodologies.

Skills: Hadoop, Hive, Python, SQL, English

Note: This role requires 8 weeks of in-office work after joining, after which it transitions to a hybrid working model with 2 days per week in the office.
Mode of interview: face-to-face. Registration window: 9 am to 12.30 pm. Shortlisted candidates will be required to stay throughout the day for subsequent rounds of interviews.
Notice Period: 0-45 days
Posted 2 weeks ago
5 - 7 years
7 - 9 Lacs
Pune
Work from Office
Role & responsibilities:
- P&ID, HFD, layout, and equipment outline design: lead the creation of detailed P&IDs, HFDs, layouts, and equipment outlines for complex industrial processes.
- Ensure that these engineering documents are precise, comprehensive, and compliant with industry codes and standards.
- Extensive experience in the design and development of P&IDs, HFDs, layouts, and equipment outlines, especially for complex industrial processes.
- Professional Engineer (PE) license preferred.
- Proficiency in P&ID software, computer-aided design (CAD) tools, and process simulation software.
- Strong knowledge of industry codes and standards (e.g., ASME, API, ISA).

Preferred candidate profile: should have experience of preparing P&ID, HFD, layout, and equipment outline drawings. Qualification: ITI / Diploma.
Posted 3 months ago
4 - 9 years
6 - 15 Lacs
Bengaluru
Work from Office
Job Purpose and Impact:
As a Data Engineer at Cargill you work across the full stack to design, develop, and operate high-performance, data-centric solutions using our comprehensive and modern data capabilities and platforms. You will play a critical role in enabling analytical insights and process efficiencies for Cargill's diverse and complex business environments. You will work in a small team that shares your passion for building innovative, resilient, and high-quality solutions while sharing, learning, and growing together.

Key Accountabilities:
- Collaborate with business stakeholders, product owners, and your team on product or solution designs.
- Develop robust, scalable, and sustainable data products or solutions utilizing cloud-based technologies.
- Provide moderately complex technical support through all phases of the product or solution life cycle.
- Perform data analysis, handle data modeling, and configure and develop data pipelines to move and optimize data assets.
- Build moderately complex prototypes to test new concepts, provide ideas on reusable frameworks, components, and data products or solutions, and help promote adoption of new technologies.
- Independently solve moderately complex issues with minimal supervision, while escalating more complex issues to appropriate staff.
- Other duties as assigned.

Qualifications

Minimum Qualifications:
- Bachelor's degree in a related field or equivalent experience.
- Minimum of two years of related work experience.
- Other minimum qualifications may apply.

Preferred Qualifications:
- Experience developing modern data architectures, including data warehouses, data lakes, data meshes, and hubs, and associated capabilities including ingestion, governance, modeling, observability, and more.
- Experience with data collection and ingestion capabilities, including AWS Glue, Kafka Connect, and others.
- Experience with data storage and management of large, heterogeneous datasets, including formats, structures, and cataloging, with tools such as Iceberg, Parquet, Avro, ORC, S3, HDFS, Hive, Kudu, or others.
- Experience with transformation and modeling tools, including SQL-based transformation, orchestration, and quality frameworks such as dbt, Apache NiFi, Talend, AWS Glue, Airflow, Dagster, Great Expectations, Oozie, and others.
- Experience working in big data environments with tools such as Hadoop and Spark.
- Experience working in cloud platforms including AWS, GCP, or Azure.
- Experience with streaming and stream integration or middleware platforms, tools, and architectures such as Kafka, Flink, JMS, or Kinesis.
- Strong programming knowledge of SQL, Python, R, Java, Scala, or equivalent.
- Proficiency in engineering tooling including Docker, Git, and container orchestration services.
- Strong experience working in DevOps models with demonstrable understanding of associated best practices for code management, continuous integration, and deployment strategies.
- Experience and knowledge of data governance considerations, including quality, privacy, and security implications for data product development and consumption.
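The streaming experience listed above (Kafka, Flink, Kinesis) centers on operations like windowed aggregation over an unbounded event log. A minimal plain-Python sketch of a tumbling-window count (event shapes and names are illustrative, standing in for what a streaming engine performs at scale):

```python
# Tumbling-window aggregation sketch: count events per key in fixed,
# non-overlapping time windows -- the core of many streaming pipelines.
from collections import defaultdict

def tumbling_window_counts(events, window_sec):
    """events: iterable of (timestamp_sec, key) pairs.
    Returns {window_start: {key: count}} for windows of width window_sec."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = (ts // window_sec) * window_sec  # bucket the event
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in windows.items()}

stream = [(0, "click"), (3, "view"), (7, "click"), (12, "click")]
print(tumbling_window_counts(stream, 10))
# {0: {'click': 2, 'view': 1}, 10: {'click': 1}}
```

A real engine adds the hard parts this sketch omits: out-of-order events, watermarks, and fault-tolerant state, which is precisely why tools like Flink or Kafka Streams are listed as qualifications.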
Posted 3 months ago