
23 Deltalake Jobs

JobPe aggregates these listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 10.0 years

12 - 22 Lacs

Hyderabad

Hybrid

We are currently seeking an experienced professional to join our team in the role of Senior Software Engineer. In this role you will need:
- Understanding of Spark core concepts such as RDDs, DataFrames, Datasets, Spark SQL, and Spark Streaming.
- Experience with Spark optimization techniques.
- Deep knowledge of Delta Lake features such as time travel, schema evolution, and data partitioning.
- Ability to design and implement data pipelines using Spark with Delta Lake as the data storage layer.
- Proficiency in Python/Scala/Java for Spark development and integration with ETL processes.
- Knowledge of data ingestion techniques from various sources (flat files, CSV, APIs, databases).
- Understanding of data quality best practices and data validation techniques.

To be successful in this role, you should meet the following requirements:
- Understanding of data warehouse concepts and data modelling techniques.
- Expertise in Git for code management.
- Familiarity with CI/CD pipelines and containerization technologies.

Nice to have: experience with data integration tools such as DataStage, Prophecy, Informatica, or Ab Initio.
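For illustration, here is a minimal PySpark sketch of the Delta Lake features this role calls out (partitioned writes, schema evolution, time travel). It assumes a Spark session with Delta Lake enabled (for example a Databricks cluster); the table path and column names are hypothetical.

```python
# Minimal sketch, assuming a Spark session with the Delta Lake extension available
# (e.g. on Databricks). The path below is hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

path = "/tmp/demo/events_delta"  # hypothetical location

# Write a partitioned Delta table.
df = spark.createDataFrame(
    [(1, "IN", "2024-01-01"), (2, "US", "2024-01-02")],
    ["id", "country", "event_date"],
)
df.write.format("delta").mode("overwrite").partitionBy("country").save(path)

# Schema evolution: append a frame with an extra column and let Delta merge the schema.
df2 = spark.createDataFrame(
    [(3, "IN", "2024-01-03", "mobile")],
    ["id", "country", "event_date", "channel"],
)
df2.write.format("delta").mode("append").option("mergeSchema", "true").save(path)

# Time travel: read the table as of an earlier version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
v0.show()
```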

Posted 1 week ago

Apply

8.0 - 12.0 years

25 - 40 Lacs

Hyderabad

Work from Office

Key Responsibilities:
- Design and develop the migration strategies and processes.
- Collaborate with stakeholders to understand business requirements and technical challenges.
- Analyze current data and identify scope for optimization during the migration process.
- Define the architecture and roadmap for cloud-based data and analytics transformation on Databricks.
- Design, implement, and optimize scalable, high-performance data architectures using Databricks.
- Build and manage data pipelines and workflows within Databricks.
- Ensure that best practices for security, scalability, and performance are followed.
- Implement Databricks solutions that enable machine learning, business intelligence, and data science workloads.
- Oversee the technical aspects of the migration process, from planning through to execution.
- Work closely with engineering and data teams to ensure proper migration of ETL processes, data models, and analytics workloads.
- Troubleshoot and resolve issues related to migration, data quality, and performance.
- Create documentation of the architecture, migration processes, and solutions.
- Provide training and support to teams post-migration to ensure they can leverage Databricks.

Experience:
- 7+ years of experience in data engineering, cloud architecture, or related fields.
- 3+ years of hands-on experience with Databricks, including implementation of data engineering solutions, migration projects, and workload optimization.
- Strong experience with cloud platforms (e.g. AWS, Azure, GCP) and their integration with Databricks.
- Experience in end-to-end data migration projects involving large-scale data infrastructure.
- Familiarity with ETL tools, data lakes, and data warehousing solutions.

Skills:
- Expertise in Databricks architecture and best practices for data processing.
- Strong knowledge of Spark, Delta Lake, DLT, Lakehouse architecture, and other current Databricks components.
- Proficiency in Databricks Asset Bundles.
- Expertise in the design and development of migration frameworks using Databricks.
- Proficiency in Python, Scala, SQL, or similar languages for data engineering tasks.
- Familiarity with data governance, security, and compliance in cloud environments.
- Solid understanding of cloud-native data solutions and services.
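As a rough illustration of the migration work described above, the sketch below shows an incremental upsert (MERGE) into a Delta table, a common pattern when moving ETL workloads onto Databricks. The paths and key column are hypothetical, and a Spark session with Delta Lake is assumed.

```python
# Minimal sketch of an incremental upsert into a Delta table. Assumes a Databricks/Spark
# session with Delta Lake; table paths and the join key are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

target_path = "/mnt/lake/silver/customers"                                  # hypothetical
updates_df = spark.read.format("parquet").load("/mnt/landing/customers/")   # hypothetical source

if DeltaTable.isDeltaTable(spark, target_path):
    target = DeltaTable.forPath(spark, target_path)
    (target.alias("t")
        .merge(updates_df.alias("s"), "t.customer_id = s.customer_id")
        .whenMatchedUpdateAll()
        .whenNotMatchedInsertAll()
        .execute())
else:
    # First load: create the target Delta table from the initial extract.
    updates_df.write.format("delta").mode("overwrite").save(target_path)
```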

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job description

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Sr. Associate Director, Software Engineering. In this role you will:
- Provide the technical expertise for the Risk Data Platform and the various software components that supplement it, for its transformation and uplift.
- Implement standards around development, DevSecOps, orchestration, segregation, and containerization.
- Act as a technical expert on the design and implementation of technology solutions that meet the needs of the Data & Enterprise Reporting function on a tactical and strategic basis.
- Be accountable for ensuring compliance of the products and services with mandatory and regulatory requirements, control objectives in the risk and control framework, and technical currency (in line with published standards and guidelines) and, with the architecture function, implementation of the business imperatives.
- Work with the IT communities of practice to maximize automation, increase efficiency, and ensure that best practice and the latest tools, techniques, and processes have been adopted.

Requirements

To be successful in this role, you should meet the following requirements:
- Must have experience in CI/CD (Ansible, Jenkins).
- Must have experience in operating a container orchestration cluster (Kubernetes, Docker).
- Proficient knowledge of integrating the Spark framework with Delta Lake.
- Must have knowledge of working on distributed compute platforms such as Spark, Hadoop, or Trino.
- Must have experience in Python/PySpark.
- Must have knowledge of code review, code optimization, and enforcing best-in-class coding standards.
- Must have experience with multi-tenant applications/platforms.
- Must have knowledge of access management, segregation of duties, change management processes, and DevSecOps.
- Preferred: knowledge of the Apache ecosystem, e.g. Spark and Airflow.
- Preferred: experience with a relational database (e.g. Postgres).
- Experience with UNIX and the Spark UI.
- Experience with ZooKeeper or similar coordination services.
- Experience in using CI/CD automation tools (Git, Jenkins) and configuration/deployment tools (Puppet, Chef, Ansible).
- Significant experience with Linux operating system environments.
- Proficient understanding of the code versioning tool Git.
- Understanding of accessibility and security compliance.
- Knowledge of user authentication and authorization between multiple systems, servers, and environments.
- Strong unit testing, integration testing, and debugging skills.
- Excellent problem-solving, log analysis, and troubleshooting skills, including with SPLUNK and Acceldata.
- Experience with infrastructure scripting solutions such as Python/shell scripting.
- Experience with the scheduling tool Control-M.
- Experience with the log monitoring tool SPLUNK.
- Experience with HashiCorp Vault.
- Expertise in Python coding.

You'll achieve more when you join HSBC.
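By way of illustration, the sketch below shows one way to wire the open-source Delta Lake package into a plain (non-Databricks) Spark session, the kind of Spark/Delta Lake integration the requirements mention. It assumes the delta-spark package is installed alongside PySpark; the application name and path are hypothetical.

```python
# Minimal sketch of configuring Delta Lake on a plain Spark session using the
# open-source delta-spark helper. Assumes `pip install pyspark delta-spark`;
# the app name and smoke-test path are hypothetical.
import pyspark
from delta import configure_spark_with_delta_pip

builder = (
    pyspark.sql.SparkSession.builder.appName("risk-data-platform-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Quick round trip to confirm the Delta integration works.
spark.range(5).write.format("delta").mode("overwrite").save("/tmp/delta_smoke_test")
spark.read.format("delta").load("/tmp/delta_smoke_test").show()
```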
www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and where opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by HSBC Software Development India

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

As a Software Developer at our organization, you will be responsible for independently designing components, developing code, and testing case scenarios using Talend on Big Data, while adhering to relevant software craftsmanship principles and meeting acceptance criteria. Your role will also involve completing the assigned learning path, participating in team ceremonies such as agile practices, and delivering on all aspects of the Software Development Lifecycle (SDLC) in line with Agile and IT craftsmanship principles.

You will be expected to deliver high-quality, clean code and designs that can be re-used, collaborate with other development teams to define and implement data pipelines, and ensure timely communication with counterparts, stakeholders, and partners. Additionally, you will assess production improvement areas, manage and address production loads, and provide suggestions for automating repetitive production activities. Your responsibilities will also include performing bug-free release validations; producing metrics, tests, and defect reports; assisting in developing guidelines; and increasing the coverage of data models, the data dictionary, data pipeline standards, and the storage of source, process, and consumer metadata. Strong communication skills and an understanding of the Agile/Scrum development cycle are essential for this role.

To be successful in this position, you should have 4 to 5 years of experience in Databricks, Azure Data Factory (ADF), Azure Databricks (ADB), Hive, PySpark, Spark SQL, Data Lake, Data Lakehouse, Delta Lake, Azure SQL, Logic Apps, Key Vault, Log Analytics and Metrics, and ETL/ELT concepts. Hands-on experience with Azure DevOps and an understanding of build and release pipelines will be advantageous. You should be able to extract data from source systems using Data Factory pipelines and workflows in Azure Databricks, and have knowledge of error handling and root cause analysis.

Furthermore, you are expected to standardize integration and migration flows, develop scalable and reusable frameworks for ingesting and enhancing datasets, possess good analytical and troubleshooting skills, ensure data quality and accuracy through testing and validation, and work effectively in a team within a cross-cultural environment. Effective verbal and written communication skills are crucial for collaborating with all counterparties.

If you are looking for a stimulating and caring environment where you can make a positive impact on the future and grow both personally and professionally, we welcome you to join our team. At our organization, we value diversity and inclusion, and we believe that everyone's initiatives play a significant role in shaping the world of tomorrow.
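As a small, hedged illustration of the extract-and-validate work described above, the sketch below reads a CSV extract from ADLS Gen2 in a Databricks notebook, applies a basic data-quality check, and lands the result as a Delta table. The storage account, container, and column names are hypothetical, and storage authentication (for example via a Key Vault-backed secret scope) is assumed to be configured.

```python
# Minimal sketch for a Databricks notebook, where `spark` is provided by the runtime.
# Storage account, container, and column names below are hypothetical.
from pyspark.sql import functions as F

source_path = "abfss://landing@examplestorage.dfs.core.windows.net/sales/2024/"   # hypothetical
target_path = "abfss://curated@examplestorage.dfs.core.windows.net/delta/sales/"  # hypothetical

raw = spark.read.option("header", "true").csv(source_path)

# Basic data-quality gate: reject the batch if mandatory keys are missing.
bad_rows = raw.filter(F.col("order_id").isNull()).count()
if bad_rows > 0:
    raise ValueError(f"Validation failed: {bad_rows} rows with null order_id")

raw.write.format("delta").mode("append").save(target_path)
```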

Posted 1 month ago

Apply

8.0 - 13.0 years

20 - 35 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Hybrid

Data Warehouse Database Architect - Immediate hiring.

We are currently looking for a Data Warehouse Database Architect for our client, who are in Fintech solutions. Please let us know your interest and availability.

Experience: 10 plus years of experience
Locations: Hybrid - any Accion office in India preferred (Bangalore/Pune/Mumbai)
Notice Period: Immediate; 0-15 days joiners are preferred

Required skills (tools and technologies):
Cloud Platform: Azure (Databricks, DevOps, Data Factory, Azure Synapse Analytics, Azure SQL, Blob Storage, Databricks Delta Lake)
Languages: Python/PL/SQL/SQL/C/C++/Java
Databases: Snowflake/MS SQL Server/Oracle
Design Tools: Erwin & MS Visio
Data warehouse tools: SSIS, SSRS, SSAS, Power BI, DBT, Talend Stitch, PowerApps, Informatica 9, Cognos 8, OBIEE. Any cloud experience is good to have.

Let's connect for more details. Please write to me at mary.priscilina@accionlabs.com along with your CV and the best contact details to get connected for a quick discussion.

Regards,
Mary Priscilina

Posted 1 month ago

Apply

10.0 - 14.0 years

0 Lacs

Karnataka

On-site

Thoucentric, the consulting arm of Xoriant, a leading digital engineering services company with 5,000 employees, is currently seeking a highly experienced Infrastructure Architect with deep DevOps expertise to lead cloud infrastructure planning and deployment for a global o9 supply chain platform rollout.

As the Infrastructure Architect, you will play a crucial role in designing and implementing cloud-native architecture for the o9 platform rollout, ensuring robust, scalable, and future-proof infrastructure across Azure and/or AWS environments. Your responsibilities will include collaborating closely with o9's DevOps team to deploy and manage Kubernetes clusters, overseeing Git-based CI/CD workflows, implementing monitoring and alerting frameworks, and acting as a strategic liaison between the client's IT organization and o9's DevOps team to align platform requirements and deployment timelines. Additionally, you will be responsible for ensuring high availability, low latency, and high throughput to support supply chain operations, while anticipating future growth and scalability requirements.

To be successful in this role, you should have at least 10 years of experience in Infrastructure Architecture/DevOps, ideally within CPG or large enterprise SaaS/supply chain platforms. You must have proven expertise in deploying and scaling platforms on Azure and/or AWS, along with hands-on experience with Kubernetes, Karpenter, Git-based CI/CD, Data Lake/Delta Lake architectures, enterprise security, identity and access management, and networking in cloud environments. Experience with infrastructure-as-code tools such as Terraform or Helm is also required, along with excellent stakeholder management and collaboration skills.

Joining Thoucentric's consulting team will offer you the opportunity to define your career path independently, work with Fortune 500 companies and startups, and be part of a dynamic yet supportive working environment that encourages personal development. Additionally, you will have the chance to bond with your colleagues beyond work through sports, get-togethers, and other common interests, contributing to an enriching environment with an open culture, flat organization, and an excellent peer group.

If you are passionate about infrastructure architecture, DevOps, and cloud technologies, and are looking for a challenging leadership role in a global consulting environment, we encourage you to apply for this position based in Bangalore, India. Don't miss the opportunity to be part of the exciting growth story of Thoucentric!

Posted 1 month ago

Apply

10.0 - 20.0 years

50 - 75 Lacs

Bengaluru

Work from Office

A leading player in cloud-based enterprise solutions is expanding its analytics leadership team in Bangalore. This pivotal role calls for a seasoned professional to drive the evolution of data products and analytics capabilities across international markets. The ideal candidate will possess the strategic vision, technical expertise, and stakeholder savvy to lead in a fast-paced, innovation-driven environment.

Key Responsibilities:
- Lead and mentor a dynamic team of product managers to scale enterprise-grade data lake and analytics platforms.
- Drive program execution and delivery with a focus on performance, prioritization, and business alignment.
- Define and execute the roadmap for an analytical data platform, ensuring alignment with strategic and user-centric goals.
- Collaborate cross-functionally with engineering, design, and commercial teams to launch impactful BI solutions.
- Translate complex business needs into scalable data models and actionable product requirement documents for multi-tenant SaaS products.
- Champion AI-enabled analytics experiences to deliver smart, context-aware data workflows.
- Maintain high standards in performance, usability, trust, and documentation of data products.
- Ensure seamless execution of global data strategies through on-the-ground leadership in India.
- Promote agile methodologies, metadata governance, and product-led thinking across teams.

Ideal Candidate Profile:
- 10+ years in product leadership roles focused on data products, BI, or analytics in SaaS environments.
- Deep understanding of modern data architectures, including dimensional modeling and cloud-native analytics tools.
- Proven expertise in building multi-tenant data platforms serving external customer use cases.
- Skilled in simplifying complex inputs into clear, scalable requirements and deliverables.
- Familiarity with platforms like Delta Lake, dbt, ThoughtSpot, and similar tools.
- Strong communicator with demonstrated stakeholder management and team leadership capabilities.
- Experience launching customer-facing analytics products is a definite plus.
- A passion for intuitive, scalable, and intelligent user experiences powered by data.

Posted 2 months ago

Apply

8.0 - 12.0 years

25 - 40 Lacs

Hyderabad

Work from Office

Key Responsibilities:
- Design and develop the migration strategies and processes.
- Collaborate with stakeholders to understand business requirements and technical challenges.
- Analyze current data and identify scope for optimization during the migration process.
- Define the architecture and roadmap for cloud-based data and analytics transformation on Databricks.
- Design, implement, and optimize scalable, high-performance data architectures using Databricks.
- Build and manage data pipelines and workflows within Databricks.
- Ensure that best practices for security, scalability, and performance are followed.
- Implement Databricks solutions that enable machine learning, business intelligence, and data science workloads.
- Oversee the technical aspects of the migration process, from planning through to execution.
- Work closely with engineering and data teams to ensure proper migration of ETL processes, data models, and analytics workloads.
- Troubleshoot and resolve issues related to migration, data quality, and performance.
- Create documentation of the architecture, migration processes, and solutions.
- Provide training and support to teams post-migration to ensure they can leverage Databricks.

Experience:
- 7+ years of experience in data engineering, cloud architecture, or related fields.
- 3+ years of hands-on experience with Databricks, including implementation of data engineering solutions, migration projects, and workload optimization.
- Strong experience with cloud platforms (e.g. AWS, Azure, GCP) and their integration with Databricks.
- Experience in end-to-end data migration projects involving large-scale data infrastructure.
- Familiarity with ETL tools, data lakes, and data warehousing solutions.

Skills:
- Expertise in Databricks architecture and best practices for data processing.
- Strong knowledge of Spark, Delta Lake, DLT, Lakehouse architecture, and other current Databricks components.
- Proficiency in Databricks Asset Bundles.
- Expertise in the design and development of migration frameworks using Databricks.
- Proficiency in Python, Scala, SQL, or similar languages for data engineering tasks.
- Familiarity with data governance, security, and compliance in cloud environments.
- Solid understanding of cloud-native data solutions and services.
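Since this role also calls out DLT (Delta Live Tables), here is a minimal, hedged DLT sketch of a bronze-to-silver flow with a data-quality expectation. It only runs inside a Databricks DLT pipeline, where the dlt module and the Spark session are provided by the runtime; the landing path and table names are hypothetical.

```python
# Minimal Delta Live Tables sketch. Runs only within a Databricks DLT pipeline,
# where `dlt` and `spark` are available. Paths and table names are hypothetical.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders ingested from the landing zone")
def bronze_orders():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/orders/")  # hypothetical path
    )

@dlt.table(comment="Cleaned orders")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
def silver_orders():
    # Drop rows failing the expectation and stamp an ingestion timestamp.
    return dlt.read_stream("bronze_orders").withColumn("ingested_at", F.current_timestamp())
```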

Posted 2 months ago

Apply

7.0 - 10.0 years

12 - 18 Lacs

Hyderabad

Work from Office

Roles and Responsibilities:
- Design, develop, test, deploy, and maintain complex Cognos TM1 solutions using tools such as MDX, SQL, Excel, Visual Basic, DevOps, Ansible, Jenkins, Git, and configuration management.
- Collaborate with cross-functional teams to gather requirements and deliver high-quality solutions on time.
- Troubleshoot issues related to TM1 cubes, dimensions, business rules, and security settings using the TM1 development environment.
- Ensure seamless integration of TM1 applications with other systems through REST APIs.

Desired Candidate Profile:
- 7-10 years of experience in Cognos TM1 development, with expertise in TM1 cubes, dimensions, and business rules.
- Strong understanding of SQL for querying databases such as PostgreSQL, Oracle, SQL Server, or DB2.
- Experience with scripting and automation tools such as Python, shell scripting, Puppet, or Chef.
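As a hedged illustration of the REST-based integration mentioned above, the sketch below lists cubes from a TM1 server through its OData REST API using Python's requests library. The host, port, and credentials are hypothetical, and the exact authentication mode depends on how the TM1 instance is configured.

```python
# Minimal sketch against the TM1 REST API (OData). Host, port, and credentials are
# hypothetical placeholders; real instances may use CAM or other authentication modes.
import requests

BASE_URL = "https://tm1-host.example.com:8010/api/v1"  # hypothetical instance

session = requests.Session()
session.auth = ("admin", "secret")  # placeholder credentials (native auth assumed)
session.verify = False              # only if the instance uses a self-signed certificate

# List the cubes defined on the server.
resp = session.get(f"{BASE_URL}/Cubes?$select=Name")
resp.raise_for_status()
for cube in resp.json().get("value", []):
    print(cube["Name"])
```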

Posted 3 months ago

Apply

10 - 18 years

35 - 55 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Hybrid

Warm greetings from SP Staffing Services Private Limited!!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 8 - 18 years
Location: Pan India

Job Description:
- Experience in Synapse with PySpark.
- Knowledge of Big Data pipelines and data engineering.
- Working knowledge of the MSBI stack on Azure.
- Working knowledge of Azure Data Factory, Azure Data Lake, and Azure Data Lake Storage.
- Hands-on experience in visualization tools such as Power BI.
- Implement end-to-end data pipelines using Cosmos / Azure Data Factory.
- Good analytical thinking and problem solving.
- Good communication and coordination skills; able to work as an individual contributor.
- Requirement analysis; create, maintain, and enhance Big Data pipelines.
- Daily status reporting and interaction with leads.
- Version control (ADO, Git) and CI/CD.
- Marketing campaign experience; data platform and product telemetry.
- Data validation and data quality checks of new streams.
- Monitoring of data pipelines created in Azure Data Factory.
- Updating the tech spec and wiki page for each pipeline implementation; updating ADO on a daily basis.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment

Posted 4 months ago

Apply

10 - 20 years

35 - 55 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Hybrid

Warm greetings from SP Staffing Services Private Limited!!

We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 8 - 18 years
Location: Pan India

Job Description:
Mandatory skill: Azure ADB (Azure Databricks) with Azure Data Lake. Lead the architecture design and implementation of advanced analytics solutions using Azure Databricks / Fabric. The ideal candidate will have a deep understanding of big data technologies, data engineering, and cloud computing, with a strong focus on Azure Databricks along with strong SQL.

Responsibilities:
- Work closely with business stakeholders and other IT teams to understand requirements and deliver effective solutions.
- Oversee the end-to-end implementation of data solutions, ensuring alignment with business requirements and best practices.
- Lead the development of data pipelines and ETL processes using Azure Databricks, PySpark, and other relevant tools.
- Integrate Azure Databricks with other Azure services (e.g. Azure Data Lake, Azure Synapse, Azure Data Factory) and on-premise systems.
- Provide technical leadership and mentorship to the data engineering team, fostering a culture of continuous learning and improvement.
- Ensure proper documentation of architecture, processes, and data flows, while ensuring compliance with security and governance standards.
- Ensure best practices are followed in terms of code quality, data security, and scalability.
- Stay updated with the latest developments in Databricks and associated technologies to drive innovation.

Essential skills:
- Strong experience with Azure Databricks, including cluster management, notebook development, and Delta Lake.
- Proficiency in big data technologies (e.g. Hadoop, Spark) and data processing frameworks (e.g. PySpark).
- Deep understanding of Azure services such as Azure Data Lake, Azure Synapse, and Azure Data Factory.
- Experience with ETL/ELT processes, data warehousing, and building data lakes.
- Strong SQL skills and familiarity with NoSQL databases.
- Experience with CI/CD pipelines and version control systems like Git.
- Knowledge of cloud security best practices.

Soft skills:
- Excellent communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.
- Strong problem-solving skills and a proactive approach to identifying and resolving issues.
- Leadership skills, with the ability to manage and mentor a team of data engineers.

Experience:
- Demonstrated expertise of 8 years in developing data ingestion and transformation pipelines using Databricks/Synapse notebooks and Azure Data Factory.
- Solid understanding and hands-on experience with Delta tables, Delta Lake, and Azure Data Lake Storage Gen2.
- Experience in efficiently using Auto Loader and Delta Live Tables for seamless data ingestion and transformation.
- Proficiency in building and optimizing query layers using Databricks SQL.
- Demonstrated experience integrating Databricks with Azure Synapse, ADLS Gen2, and Power BI for end-to-end analytics solutions.
- Prior experience in developing, optimizing, and deploying Power BI reports.
- Familiarity with modern CI/CD practices, especially in the context of Databricks and cloud-native solutions.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment
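To illustrate the Auto Loader ingestion pattern this description highlights, here is a minimal, hedged sketch that streams files from ADLS Gen2 into a Delta table on Databricks. Paths, checkpoint and schema locations, and the target table name are hypothetical; the notebook-provided Spark session is assumed.

```python
# Minimal Auto Loader sketch for a Databricks notebook, where `spark` is provided.
# All paths, locations, and the target table name below are hypothetical.
query = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "csv")
    .option("cloudFiles.schemaLocation", "/mnt/lake/_schemas/sales")            # hypothetical
    .load("abfss://landing@examplestorage.dfs.core.windows.net/sales/")          # hypothetical
    .writeStream.format("delta")
    .option("checkpointLocation", "/mnt/lake/_checkpoints/sales")                # hypothetical
    .trigger(availableNow=True)        # process available files, then stop
    .toTable("lakehouse.bronze_sales")  # hypothetical target Delta table
)
```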

Posted 4 months ago

Apply

5.0 - 7.0 years

8 - 10 Lacs

Chennai

Work from Office

An academic degree in Statistics, Data Analytics, Computer Science, or equivalent work experience. A minimum of five (5) years of relevant professional experience with either geospatial datasets or high-volume time-series datasets.

Required candidate profile: a minimum of three (3) years of experience with an open-source analytical stack (Superset, Trino, QGIS, Kepler, Mapbox, Carto, PostgreSQL, and Dagster), and a minimum of five (5) years of experience with Python.
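As a small, hedged illustration of the open-source stack this profile lists, the sketch below defines a Dagster asset that pulls a daily time-series aggregate from PostgreSQL. The connection string, table, and column names are hypothetical.

```python
# Minimal Dagster asset sketch: aggregate a high-volume time-series table in PostgreSQL
# to daily granularity. Connection string, table, and columns are hypothetical.
import pandas as pd
from dagster import asset
from sqlalchemy import create_engine

PG_URL = "postgresql+psycopg2://analytics:secret@localhost:5432/telemetry"  # hypothetical

@asset
def daily_sensor_readings() -> pd.DataFrame:
    """Aggregate raw sensor readings to daily averages."""
    engine = create_engine(PG_URL)
    query = """
        SELECT sensor_id, date_trunc('day', observed_at) AS day, avg(value) AS avg_value
        FROM sensor_readings
        GROUP BY sensor_id, date_trunc('day', observed_at)
    """
    return pd.read_sql(query, engine)
```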

Posted Date not available

Apply

5.0 - 7.0 years

8 - 10 Lacs

Pune

Work from Office

An academic degree in Statistics, Data Analytics, Computer Science, or equivalent work experience. A minimum of five (5) years of relevant professional experience with either geospatial datasets or high-volume time-series datasets.

Required candidate profile: a minimum of three (3) years of experience with an open-source analytical stack (Superset, Trino, QGIS, Kepler, Mapbox, Carto, PostgreSQL, and Dagster), and a minimum of five (5) years of experience with Python.

Posted Date not available

Apply

5.0 - 7.0 years

8 - 10 Lacs

Surat

Work from Office

An academic degree in Statistics, Data Analytics, Computer Science, or equivalent work experience. A minimum of five (5) years of relevant professional experience with either geospatial datasets or high-volume time-series datasets.

Required candidate profile: a minimum of three (3) years of experience with an open-source analytical stack (Superset, Trino, QGIS, Kepler, Mapbox, Carto, PostgreSQL, and Dagster), and a minimum of five (5) years of experience with Python.

Posted Date not available

Apply

5.0 - 7.0 years

8 - 10 Lacs

Ahmedabad

Work from Office

An academic degree in Statistics, Data Analytics, Computer Science, or equivalent work experience. A minimum of five (5) years of relevant professional experience with either geospatial datasets or high-volume time-series datasets.

Required candidate profile: a minimum of three (3) years of experience with an open-source analytical stack (Superset, Trino, QGIS, Kepler, Mapbox, Carto, PostgreSQL, and Dagster), and a minimum of five (5) years of experience with Python.

Posted Date not available

Apply

5.0 - 7.0 years

8 - 10 Lacs

Jaipur

Work from Office

An academic degree in Statistics, Data Analytics, Computer Science, or equivalent work experience. A minimum of five (5) years of relevant professional experience with either geospatial datasets or high-volume time-series datasets.

Required candidate profile: a minimum of three (3) years of experience with an open-source analytical stack (Superset, Trino, QGIS, Kepler, Mapbox, Carto, PostgreSQL, and Dagster), and a minimum of five (5) years of experience with Python.

Posted Date not available

Apply

5.0 - 7.0 years

8 - 10 Lacs

Bengaluru

Work from Office

An academic degree in Statistics, Data Analytics, Computer Science, or equivalent work experience. A minimum of five (5) years of relevant professional experience with either geospatial datasets or high-volume time-series datasets.

Required candidate profile: a minimum of three (3) years of experience with an open-source analytical stack (Superset, Trino, QGIS, Kepler, Mapbox, Carto, PostgreSQL, and Dagster), and a minimum of five (5) years of experience with Python.

Posted Date not available

Apply

5.0 - 7.0 years

8 - 10 Lacs

Visakhapatnam

Work from Office

An academic degree in Statistics, Data Analytics, Computer Science, or equivalent work experience. A minimum of five (5) years of relevant professional experience with either geospatial datasets or high-volume time-series datasets.

Required candidate profile: a minimum of three (3) years of experience with an open-source analytical stack (Superset, Trino, QGIS, Kepler, Mapbox, Carto, PostgreSQL, and Dagster), and a minimum of five (5) years of experience with Python.

Posted Date not available

Apply

5.0 - 7.0 years

8 - 10 Lacs

Kolkata

Work from Office

An academic degree in Statistics, Data Analytics, Computer Science, or equivalent work experience. A minimum of five (5) years of relevant professional experience with either geospatial datasets or high-volume time-series datasets.

Required candidate profile: a minimum of three (3) years of experience with an open-source analytical stack (Superset, Trino, QGIS, Kepler, Mapbox, Carto, PostgreSQL, and Dagster), and a minimum of five (5) years of experience with Python.

Posted Date not available

Apply

5.0 - 7.0 years

8 - 10 Lacs

Patna

Work from Office

An academic degree in Statistics, Data Analytics, Computer Science, or equivalent work experience. A minimum of five (5) years of relevant professional experience with either geospatial datasets or high-volume time-series datasets.

Required candidate profile: a minimum of three (3) years of experience with an open-source analytical stack (Superset, Trino, QGIS, Kepler, Mapbox, Carto, PostgreSQL, and Dagster), and a minimum of five (5) years of experience with Python.

Posted Date not available

Apply

5.0 - 7.0 years

8 - 10 Lacs

Hyderabad

Work from Office

An academic degree in Statistics, Data Analytics, Computer Science, or equivalent work experience. A minimum of five (5) years of relevant professional experience with either geospatial datasets or high-volume time-series datasets.

Required candidate profile: a minimum of three (3) years of experience with an open-source analytical stack (Superset, Trino, QGIS, Kepler, Mapbox, Carto, PostgreSQL, and Dagster), and a minimum of five (5) years of experience with Python.

Posted Date not available

Apply

5.0 - 7.0 years

8 - 10 Lacs

Nagpur

Work from Office

An academic degree in Statistics, Data Analytics, Computer Science, or equivalent work experience. A minimum of five (5) years of relevant professional experience with either geospatial datasets or high-volume time-series datasets.

Required candidate profile: a minimum of three (3) years of experience with an open-source analytical stack (Superset, Trino, QGIS, Kepler, Mapbox, Carto, PostgreSQL, and Dagster), and a minimum of five (5) years of experience with Python.

Posted Date not available

Apply

5.0 - 7.0 years

8 - 10 Lacs

Mumbai

Work from Office

An academic degree in Statistics, Data Analytics, Computer Science, or equivalent work experience. A minimum of five (5) years of relevant professional experience with either geospatial datasets or high-volume time-series datasets.

Required candidate profile: a minimum of three (3) years of experience with an open-source analytical stack (Superset, Trino, QGIS, Kepler, Mapbox, Carto, PostgreSQL, and Dagster), and a minimum of five (5) years of experience with Python.

Posted Date not available

Apply