
3 Job openings at CBNsense
Data Engineer – Python / PySpark | Pune, Maharashtra, India | 3–10 years | Salary not disclosed | On-site | Contractual

Job Title: Data Engineer – Python / PySpark
Number of Positions: 4
Experience: 3 to 10 years
Location: Pune (Client - USA)
Work Model: Full-time, hybrid; 3 days/week in office (client location)
Job Type: Contract-to-Hire
Client Domain: Utilities / Energy

About the Role
We are seeking a highly skilled Data Engineer with hands-on expertise in Python, PySpark, and big data. The ideal candidate will have experience working on large-scale data processing, transformation, and analytics solutions within the utilities or related domains. You will collaborate closely with business stakeholders, solution architects, and data analysts to design and deliver efficient, scalable, high-quality data pipelines.

Key Responsibilities
- Design, build, and maintain scalable ETL pipelines using Python and PySpark.
- Ensure data quality, reliability, and governance across systems and pipelines.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Perform performance tuning and troubleshooting of big data applications.
- Work with large datasets from multiple sources (structured/unstructured) and prepare them for analytics and reporting.
- Follow best practices in coding, testing, and deployment for enterprise-grade data applications.

Required Skills & Qualifications
- 3–10 years of professional experience in data engineering, preferably in the utilities, energy, or similar industries.
- Strong proficiency in Python programming.
- Hands-on experience with PySpark for big data processing.
- Good understanding and working knowledge of Palantir Foundry (pipelines, ontology, data modeling, transforms, code repositories, data integration and transformation).
- Experience with SQL and handling large datasets.
- Familiarity with data governance, data security, and compliance requirements in enterprise environments.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration skills.

Nice-to-Have Skills
- Experience with AWS, Azure, or GCP cloud data services.
- Knowledge of utilities domain data models and workflows.
- Exposure to DevOps / CI-CD pipelines for data engineering solutions.
- Knowledge of visualization tools such as Tableau, Power BI, or Palantir dashboards.
- Experience in Palantir Foundry.

Why Join Us
- Work on cutting-edge Palantir Foundry-based solutions in the utilities sector.
- Be part of a dynamic, collaborative, and innovation-driven team.
- Opportunity to grow your technical expertise across modern data platforms.
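The responsibilities above centre on building ETL pipelines. As a minimal, hypothetical sketch of the extract-transform-load pattern the role involves (plain Python stands in for the PySpark DataFrame API so the example runs without a Spark cluster; all data and field names are illustrative):

```python
import csv
import io
import json

# Toy ETL for utility meter readings: extract from CSV text, transform
# (drop invalid rows, derive a column), load as JSON lines. In a real
# PySpark pipeline these steps would use spark.read.csv,
# DataFrame.filter / withColumn, and DataFrame.write instead.

RAW_CSV = """meter_id,kwh,ts
M001,12.5,2024-01-01T00:00:00
M002,-1.0,2024-01-01T00:00:00
M003,7.25,2024-01-01T01:00:00
"""

def extract(csv_text):
    """Parse CSV text into a list of row dicts (the 'extract' step)."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Drop readings with negative kWh and add a watt-hours column."""
    out = []
    for row in rows:
        kwh = float(row["kwh"])
        if kwh < 0:  # basic data-quality rule: no negative consumption
            continue
        row["wh"] = kwh * 1000
        out.append(row)
    return out

def load(rows):
    """Serialise each row as a JSON line (the 'load' step)."""
    return "\n".join(json.dumps(r, sort_keys=True) for r in rows)

pipeline_output = load(transform(extract(RAW_CSV)))
```

The same three-stage shape scales up directly: each function becomes a Spark transformation over a distributed DataFrame rather than a Python list.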

Data Engineer – Python / PySpark | Hyderabad, Telangana, India | 3–10 years | Salary not disclosed | On-site | Full Time

Job Title: Data Engineer – Python / PySpark
Number of Positions: 10
Experience: 3 to 10 years
Location: Hyderabad (Client - USA)
Work Model: Full-time, in-office. This is an offshore opportunity supporting a US client; the role requires working from our Hyderabad client location daily.
Job Type: Contract-to-Hire
Client Domain: Utilities / Energy

About the Role
We are seeking a highly skilled Data Engineer with hands-on expertise in Python, PySpark, and Palantir Foundry. The ideal candidate will have experience working on large-scale data processing, transformation, and analytics solutions within the utilities or related domains. You will collaborate closely with business stakeholders, solution architects, and data analysts to design and deliver efficient, scalable, high-quality data pipelines.

Key Responsibilities
- Design, build, and maintain scalable ETL pipelines using Python and PySpark.
- Ensure data quality, reliability, and governance across systems and pipelines.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Perform performance tuning and troubleshooting of big data applications.
- Work with large datasets from multiple sources (structured/unstructured) and prepare them for analytics and reporting.
- Follow best practices in coding, testing, and deployment for enterprise-grade data applications.

Required Skills & Qualifications
- 3–10 years of professional experience in data engineering, preferably in the utilities, energy, or similar industries.
- Strong proficiency in Python programming.
- Hands-on experience with PySpark for big data processing.
- Good understanding and working knowledge of Palantir Foundry (pipelines, ontology, data modeling, transforms, code repositories, data integration and transformation).
- Experience with SQL and handling large datasets.
- Familiarity with data governance, data security, and compliance requirements in enterprise environments.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration skills.

Nice-to-Have Skills
- Experience with AWS, Azure, or GCP cloud data services.
- Knowledge of utilities domain data models and workflows.
- Exposure to DevOps / CI-CD pipelines for data engineering solutions.
- Knowledge of visualization tools such as Tableau, Power BI, or Palantir dashboards.
- Experience in Palantir Foundry.

Why Join Us
- Work on cutting-edge Palantir Foundry-based solutions in the utilities sector.
- Be part of a dynamic, collaborative, and innovation-driven team.
- Opportunity to grow your technical expertise across modern data platforms.

Data Engineer | Hyderabad, Telangana, India | 3–10 years | Salary (INR) not disclosed | On-site | Full Time

As a Data Engineer specializing in Python and PySpark, you will be responsible for designing, building, and maintaining scalable ETL pipelines. Your key responsibilities will include:

- Designing, building, and maintaining scalable ETL pipelines using Python and PySpark.
- Ensuring data quality, reliability, and governance across systems and pipelines.
- Collaborating with cross-functional teams to understand business requirements and translate them into technical solutions.
- Performing performance tuning and troubleshooting of big data applications.
- Working with large datasets from multiple sources and preparing them for analytics and reporting.
- Following best practices in coding, testing, and deployment for enterprise-grade data applications.

To qualify for this role, you should have:

- 3-10 years of professional experience in data engineering, preferably in the utility, energy, or related industries.
- Strong proficiency in Python programming.
- Hands-on experience with PySpark for big data processing.
- Good understanding and working knowledge of Palantir Foundry.
- Experience with SQL and handling large datasets.
- Familiarity with data governance, data security, and compliance requirements in enterprise environments.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration skills.

Nice-to-have skills include experience with cloud data services (AWS, Azure, GCP), knowledge of utilities domain data models and workflows, exposure to DevOps/CI-CD pipelines, familiarity with visualization tools such as Tableau, Power BI, or Palantir dashboards, and experience in Palantir Foundry.

In this role, you will have the opportunity to work on cutting-edge Palantir Foundry-based solutions in the utilities sector, be part of a dynamic, collaborative, and innovation-driven team, and grow your technical expertise across modern data platforms.
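One responsibility named throughout these postings is ensuring data quality and governance across pipelines. As a small, hypothetical sketch of what such a quality gate might compute (plain Python over row dicts, standing in for the equivalent PySpark aggregations such as `df.filter(col("kwh").isNull()).count()`; the fields and bounds are illustrative):

```python
# Hypothetical data-quality gate: summarise missing values and
# range violations per field before a dataset is promoted downstream.

def quality_report(rows, required_fields, ranges):
    """Return per-field counts of missing and out-of-range values.

    rows:            list of dicts, one per record
    required_fields: fields that must be present and non-empty
    ranges:          {field: (lo, hi)} inclusive bounds for numeric fields
    """
    report = {f: {"missing": 0, "out_of_range": 0} for f in required_fields}
    for row in rows:
        for f in required_fields:
            value = row.get(f)
            if value is None or value == "":
                report[f]["missing"] += 1
                continue
            if f in ranges:
                lo, hi = ranges[f]
                if not (lo <= float(value) <= hi):
                    report[f]["out_of_range"] += 1
    return report

# Toy utility-meter records: one blank meter_id, one negative reading.
readings = [
    {"meter_id": "M001", "kwh": 12.5},
    {"meter_id": "", "kwh": 7.0},
    {"meter_id": "M003", "kwh": -4.0},
]
report = quality_report(readings, ["meter_id", "kwh"], {"kwh": (0.0, 1000.0)})
```

In practice the report would feed a pipeline check that blocks or flags the dataset when any count exceeds an agreed threshold.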