Bengaluru, Karnataka, India
Not disclosed
On-site
Full Time
We are looking for an experienced Big Data Developer with a strong background in Kafka, PySpark, Python/Scala, Spark, SQL, and the Hadoop ecosystem. The ideal candidate has over 5 years of experience and is ready to join immediately. This role requires hands-on expertise in big data technologies and the ability to design and implement robust data processing solutions.

Responsibilities:
- Design, develop, and maintain scalable data processing pipelines using Kafka, PySpark, Python/Scala, and Spark.
- Work extensively with the Kafka and Hadoop ecosystem, including HDFS, Hive, and related technologies.
- Write efficient SQL queries for data extraction, transformation, and analysis.
- Implement and manage Kafka streams for real-time data processing.
- Use scheduling tools to automate data workflows and processes.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Ensure data quality and integrity by implementing robust data validation processes.
- Optimize existing data processes for performance and scalability.

Requirements:
- Experience with GCP.
- Knowledge of data warehousing concepts and best practices.
- Familiarity with machine learning and data analysis tools.
- Understanding of data governance and compliance standards.

This job was posted by Arun Kumar K from krtrimaIQ Cognitive Solutions.
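To illustrate the kind of Kafka-to-Spark pipeline work this role describes, here is a minimal PySpark Structured Streaming sketch. The broker address, topic name, event schema, and HDFS paths are hypothetical, and the Kafka connector package must be available on the Spark classpath.

```python
# Minimal sketch of a Kafka -> Spark Structured Streaming pipeline.
# Requires the spark-sql-kafka connector on the classpath; all names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = (SparkSession.builder
         .appName("kafka-events-pipeline")
         .getOrCreate())

# Example event schema (assumed for illustration).
schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read a stream of JSON events from a Kafka topic.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
       .option("subscribe", "events")                       # hypothetical topic
       .load())

# Parse the Kafka value payload and apply a simple validation filter.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), schema).alias("e"))
          .select("e.*")
          .filter(col("amount") > 0))

# Write the cleaned stream to HDFS as Parquet, with checkpointing for recovery.
query = (events.writeStream
         .format("parquet")
         .option("path", "hdfs:///data/events/clean")         # hypothetical path
         .option("checkpointLocation", "hdfs:///chk/events")   # hypothetical path
         .outputMode("append")
         .start())

query.awaitTermination()
```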
India
INR 10.0 - 17.0 Lacs P.A.
On-site
Full Time
We are looking for an experienced Big Data Developer (immediate joiners only) with a strong background in PySpark, Python/Scala, Spark, SQL, and the Hadoop ecosystem. The ideal candidate has over 4 years of experience and is ready to join immediately. This role requires hands-on expertise in big data technologies and the ability to design and implement robust data processing solutions.

Key Responsibilities:
- Design, develop, and optimize large-scale data processing pipelines using PySpark.
- Work with Apache tools and frameworks (Hadoop, Hive, HDFS, etc.) to ingest, transform, and manage large datasets.
- Ensure high performance and reliability of ETL jobs in production.
- Collaborate with data scientists, analysts, and other stakeholders to understand data needs and deliver robust data solutions.
- Implement data quality checks and data lineage tracking for transparency and auditability.
- Work on data ingestion, transformation, and integration from multiple structured and unstructured sources.
- Leverage Apache NiFi for automated and repeatable data flow management (if applicable).
- Write clean, efficient, and maintainable code in Python and Java.
- Contribute to architectural decisions, performance tuning, and scalability planning.

Required Skills:
- 5–7 years of experience.
- Strong hands-on experience with PySpark for distributed data processing.
- Deep understanding of the Apache ecosystem (Hadoop, Hive, Spark, HDFS, etc.).
- Solid grasp of data warehousing, ETL principles, and data modeling.
- Experience working with large-scale datasets and performance optimization.
- Familiarity with SQL and NoSQL databases.
- Proficiency in Python and basic to intermediate knowledge of Java.
- Experience with version control tools such as Git and with CI/CD pipelines.

Nice-to-Have Skills:
- Working experience with Apache NiFi for data flow orchestration.
- Experience building real-time streaming data pipelines.
- Knowledge of cloud platforms such as AWS, Azure, or GCP.
- Familiarity with containerization tools such as Docker or orchestration tools such as Kubernetes.

Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.
- Self-driven, with the ability to work independently and as part of a team.

Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.

Job Type: Full-time
Pay: ₹1,000,000.00 - ₹1,700,000.00 per year
Benefits: Health insurance
Schedule: Day shift
Supplemental Pay: Performance bonus, yearly bonus
Ability to commute/relocate: Basavanagudi, Bengaluru, Karnataka: reliably commute or planning to relocate before starting work (Preferred)
Application Question(s):
- Are you ready to join within 15 days?
- What is your current CTC?
Experience:
- Python: 4 years (Preferred)
- PySpark: 4 years (Required)
- Data warehouse: 4 years (Required)
Work Location: In person
Application Deadline: 12/06/2025
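To illustrate the ETL and data-quality responsibilities listed above, here is a minimal batch PySpark sketch. The Hive table name, column names, rejection threshold, and output path are assumptions chosen for illustration.

```python
# Minimal sketch of a batch PySpark ETL job with a simple data-quality gate.
# Table names, columns, threshold, and paths below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, trim, to_date

spark = (SparkSession.builder
         .appName("orders-etl")
         .enableHiveSupport()
         .getOrCreate())

# Ingest a raw Hive table (hypothetical name).
raw = spark.table("raw_db.orders")

# Transform: normalize columns, parse dates, drop records failing basic checks.
clean = (raw
         .withColumn("customer_id", trim(col("customer_id")))
         .withColumn("order_date", to_date(col("order_date"), "yyyy-MM-dd"))
         .filter(col("order_id").isNotNull() & (col("amount") >= 0)))

# Data-quality check: fail the job if too many rows were rejected.
total, kept = raw.count(), clean.count()
if total > 0 and kept / total < 0.95:   # 95% retention threshold is an assumed policy
    raise ValueError(f"Data quality check failed: kept {kept}/{total} rows")

# Load: write partitioned Parquet for downstream consumers.
(clean.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("hdfs:///warehouse/clean/orders"))  # hypothetical output path
```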