Posted: 1 week ago | Platform: Indeed


Work Mode: On-site

Job Type: Full Time

Job Description

We are looking for an experienced Big Data Developer with a strong background in PySpark, Python/Scala, Spark, SQL, and the Hadoop ecosystem. The ideal candidate should have over 4 years of experience and be ready to join immediately; only immediate joiners will be considered. This role requires hands-on expertise in big data technologies and the ability to design and implement robust data processing solutions.

Key Responsibilities:
- Design, develop, and optimize large-scale data processing pipelines using PySpark.
- Work with Apache tools and frameworks (Hadoop, Hive, HDFS, etc.) to ingest, transform, and manage large datasets.
- Ensure high performance and reliability of ETL jobs in production.
- Collaborate with Data Scientists, Analysts, and other stakeholders to understand data needs and deliver robust data solutions.
- Implement data quality checks and data lineage tracking for transparency and auditability.
- Work on data ingestion, transformation, and integration from multiple structured and unstructured sources.
- Leverage Apache NiFi for automated and repeatable data flow management (if applicable).
- Write clean, efficient, and maintainable code in Python and Java.
- Contribute to architectural decisions, performance tuning, and scalability planning.

Required Skills:
- 5–7 years of experience.
- Strong hands-on experience with PySpark for distributed data processing.
- Deep understanding of the Apache ecosystem (Hadoop, Hive, Spark, HDFS, etc.).
- Solid grasp of data warehousing, ETL principles, and data modeling.
- Experience working with large-scale datasets and performance optimization.
- Familiarity with SQL and NoSQL databases.
- Proficiency in Python and basic-to-intermediate knowledge of Java.
- Experience with version control tools such as Git and with CI/CD pipelines.

Nice-to-Have Skills:
- Working experience with Apache NiFi for data flow orchestration.
- Experience building real-time streaming data pipelines.
- Knowledge of cloud platforms such as AWS, Azure, or GCP.
- Familiarity with containerization tools such as Docker or orchestration tools such as Kubernetes.

Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.
- Self-driven, with the ability to work independently and as part of a team.

Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.

Job Type: Full-time

Pay: ₹1,000,000.00 to ₹1,700,000.00 per year

Benefits: Health insurance

Schedule: Day shift

Supplemental Pay: Performance bonus, yearly bonus

Ability to commute/relocate: Basavanagudi, Bengaluru, Karnataka: reliably commute or plan to relocate before starting work (Preferred)

Application Questions:
- Are you ready to join within 15 days?
- What is your current CTC?

Experience:
- Python: 4 years (Preferred)
- PySpark: 4 years (Required)
- Data warehouse: 4 years (Required)

Work Location: In person

Application Deadline: 12/06/2025


