Posted: 3 weeks ago
Work from Office
Full Time
About the Client: Hiring for one of the most prestigious multinational corporations.

Job Title: Big Data Engineer (Spark, Scala)
Experience: 4 to 10 years

Key Responsibilities:
- Data Engineering: Design, develop, and maintain large-scale distributed data processing pipelines and data solutions using Apache Spark with Scala and/or Python.
- Data Integration: Integrate batch and real-time data sources, in both structured and unstructured formats, into big data platforms such as Hadoop, AWS EMR, or Azure HDInsight.
- Performance Optimization: Optimize Spark jobs for better performance, managing large datasets and ensuring efficient resource usage.
- Architecture Design: Participate in the design and implementation of data pipelines and data lakes that support analytics, reporting, and machine learning applications.
- Collaboration: Work with data scientists, analysts, and business stakeholders to understand data requirements and ensure solutions align with business goals.
- Big Data Tools: Implement and manage big data technologies such as Hadoop, Kafka, HBase, Hive, and Presto.
- Automation: Automate repetitive tasks using scripting and monitoring solutions for continuous data pipeline management.
- Troubleshooting: Identify and resolve data pipeline issues and ensure data integrity.
- Cloud Platforms: Work with cloud-based services and platforms such as AWS, Azure, or Google Cloud for data storage, compute, and deployment.
- Code Quality: Ensure high code quality through best practices, code reviews, and unit and integration tests.

Technical Skills:
- Experience: 6-9 years of hands-on experience in Big Data Engineering with a focus on Apache Spark (preferably with Scala and/or Python).
- Languages: Proficiency in Scala and/or Python for building scalable data processing applications; knowledge of Java is a plus.
- Big Data Frameworks: Strong experience with Apache Spark, Hadoop, Hive, HBase, Kafka, and other big data tools.
- Data Processing: Strong understanding of batch and real-time data processing and workflows.
- Cloud Experience: Proficient in cloud platforms such as AWS, Azure, or Google Cloud Platform for deploying and managing big data solutions.
- SQL/NoSQL: Experience working with SQL and NoSQL databases, particularly Hive, HBase, or Cassandra.
- Data Integration: Strong skills in integrating and processing diverse data sources, including data lakes and data warehouses.
- Performance Tuning: Hands-on experience in performance tuning and optimization of Spark jobs and jobs running on Hadoop clusters.
- Data Pipelines: Strong background in designing, building, and maintaining robust data pipelines for large-scale data processing.
- Version Control: Familiarity with Git or other version control systems.
- DevOps & Automation: Knowledge of automation tools and CI/CD pipelines for data workflows (Jenkins, Docker, Kubernetes).
- Analytical Skills: Strong problem-solving skills and a deep understanding of data modeling, data structures, and algorithms.

Notice Period: Immediate joiners
Location: Pune
Mode of Work: WFO (Work From Office)

Thanks & Regards,
SWETHA
Black and White Business Solutions Pvt. Ltd.
Bangalore, Karnataka, INDIA
Contact Number: 8067432433
rathy@blackwhite.in | www.blackwhite.in
Salary: 3.0 - 8.0 Lacs P.A.