5 Job openings at 7dxperts
About 7dxperts

We are excited about the launch of 7Dxperts, part of our team's ongoing commitment to driving growth and innovation in data, analytics, ML and geospatial. To ensure our continued growth and focus, we made the strategic decision to spin the analytics business out of zsah ltd. This move will enable us to invest more in our propositions and our staff while pushing the boundaries of what's possible in the realm of data. We firmly believe that targeted solutions designed for specific use cases hold more power than generic ones. At the core of the business, therefore, is bringing together people who care about customers, have a passion for solving problems, and have the expertise to build targeted accelerators and solutions for industry-specific problems. 📌 Visit our website to get to know us better.

Data Engineer

Bengaluru, Bangalore Rural

5 - 8 years

INR 30.0 - 35.0 Lacs P.A.

Hybrid

Full Time

Role & responsibilities
• 3+ years of experience in Spark, Databricks, Hadoop, and data and ML engineering.
• 3+ years of experience designing architectures using AWS cloud services and Databricks.
• Architect, design and build a big data platform (data lake / data warehouse / lakehouse) using Databricks services, integrating with the wider AWS cloud services.
• Knowledge of and experience with infrastructure as code and CI/CD pipelines to build and deploy the data platform tech stack and solutions.
• Hands-on Spark experience supporting and developing data engineering (ETL/ELT) and machine learning (ML) solutions using Python, Spark, Scala or R.
• Distributed systems fundamentals and optimising Spark distributed computing.
• Experience setting up batch and streaming data pipelines using Databricks DLT, jobs and streams.
• Understand the concepts and principles of data modelling, databases and tables, and produce, maintain and update relevant data models across multiple subject areas.
• Design, build and test medium- to large-scale data pipelines (ETL/ELT) based on feeds from multiple systems, using a range of storage technologies and/or access methods; implement data quality validation and create repeatable, reusable pipelines (a short illustrative sketch follows this listing).
• Experience designing metadata repositories, understanding the range of metadata tools and technologies used to implement them, and working with metadata.
• Understand build automation concepts and implement automation pipelines to build, test and deploy changes to higher environments.
• Define and execute test cases and scripts, and understand the role of testing and how it works.

Preferred candidate profile
• Big data technologies: Databricks, Spark, Hadoop, EMR or Hortonworks.
• Solid hands-on experience in Python, Spark, SQL, Spark SQL, Spark Streaming, Hive and Presto.
• Experience with Databricks components and APIs such as notebooks, jobs, DLT, interactive and job clusters, SQL warehouses, policies, secrets, DBFS, Hive Metastore, Glue Metastore, Unity Catalog and MLflow.
• Knowledge of and experience with AWS Lambda, VPC, S3, EC2, API Gateway, IAM users, roles and policies, Cognito, Application Load Balancer, Glue, Redshift, Spectrum, Athena and Kinesis.
• Experience with source control tools such as Git, Bitbucket or AWS CodeCommit, and automation tools such as Jenkins, AWS CodeBuild and CodeDeploy.
• Hands-on experience with Terraform and the Databricks API to automate the infrastructure stack.
• Experience implementing CI/CD and MLOps pipelines using Git, GitHub Actions or Jenkins.
• Experience delivering project artifacts such as design documents, test cases, traceability matrices and low-level design documents.
• Build reference architectures, how-tos and demo applications for customers.
• Ready to complete certifications.

Perks and benefits
• Hands-on AWS, Azure and GCP.
• Support for any certification.
• Build leadership skills.
• Medical insurance coverage for self and family.
• Provident Fund.
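For illustration only, a minimal sketch of the kind of batch ETL pipeline with a simple data quality check described in this listing. The bucket paths, column names and the 1% error threshold are hypothetical, and the Delta writer assumes a Databricks or otherwise Delta-enabled Spark environment.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read a raw feed (hypothetical S3 path).
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Transform: normalise types and derive a partition column.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Data quality validation: reject the batch if too many rows lack a key or amount.
total = orders.count()
bad = orders.filter(F.col("order_id").isNull() | F.col("amount").isNull()).count()
if total == 0 or bad / total > 0.01:  # 1% threshold is an assumption
    raise ValueError(f"Data quality check failed: {bad}/{total} invalid rows")

# Load: write partitioned Delta output for downstream consumers.
(orders.write.format("delta")
       .mode("overwrite")
       .partitionBy("order_date")
       .save("s3://example-bucket/curated/orders/"))
```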

Data Engineer

Bengaluru

6 - 11 years

INR 30.0 - 35.0 Lacs P.A.

Work from Office

Full Time

Role & responsibilities
• Data pipeline management: design, develop and maintain robust data pipelines that facilitate efficient data processing, transformation and loading; optimize these processes for performance and scalability.
• ETL processes: architect and implement extract, transform, load (ETL) processes to integrate and transform raw data from various sources into meaningful, usable formats for data analytics.
• Data quality assurance: implement data quality checks and validation processes to ensure the integrity and consistency of data; identify and resolve data anomalies and discrepancies.
• Scalability and performance: continuously monitor and enhance data processing systems to ensure they meet the growing needs of the organization; optimize data architectures for speed and efficiency.
• Innovation and improvement: stay updated with the latest industry trends and technologies; proactively suggest improvements to data systems and processes, ensuring changes do not impact existing pipelines or other technical processes and do not introduce conflicts.
• Documentation and compliance: maintain comprehensive documentation of data processes, architectures and workflows; ensure compliance with data governance and security policies.

Preferred candidate profile
• Data processing tools: proficiency in PySpark and Pandas for large-scale data manipulation and analysis (a short illustrative sketch follows this listing).
• Databricks: knowledge of Databricks for collaborative data engineering and data processing.
• Automation and templates: experience with Python for templating and automation scripts.
• Cloud platforms: experience with cloud platforms (e.g., AWS, Azure, GCP) for data storage and processing.
• Problem-solving: strong analytical skills with the ability to diagnose issues and develop effective solutions quickly.
• Continuous learning: enthusiastic about learning new technologies and staying updated with industry trends to drive innovation in data engineering practices.
• Adaptability: flexible and adaptable to changing project requirements and priorities; capable of handling multiple tasks and projects simultaneously.
• Team collaboration: ability to work collaboratively in a team environment and contribute to cross-functional projects.
• Communication: excellent verbal and written communication skills to convey technical information to non-technical stakeholders.
• Any graduate with a computer science background.
• Mandatory certifications in Python and SQL; additional certifications in cloud technologies or data engineering are preferred.
• Good to have certifications in Databricks and data engineering concepts.
• 3 to 5 years of experience in data engineering, with a strong focus on data pipeline development, ETL processes, data warehousing and templates.
• Experience working with cloud-based data systems.

Perks and benefits
• Training in Databricks.
• Support on certifications.
• Hands-on AWS, Azure and GCP.
• Support for any cloud certification.
• Build leadership skills.
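A small illustrative sketch, in Pandas, of the kind of data quality check and transformation this listing describes. The column names, rules and sample records are hypothetical; a real pipeline would read from a source system rather than an inline DataFrame.

```python
import pandas as pd

# Hypothetical raw extract; in practice this would come from a source system.
raw = pd.DataFrame({
    "customer_id": [1, 2, 2, None],
    "signup_date": ["2024-01-05", "2024-02-17", "2024-02-17", "2024-03-01"],
    "plan": ["basic", "pro", "pro", "basic"],
})

# Data quality checks: required keys present and no duplicate records.
issues = []
if raw["customer_id"].isna().any():
    issues.append("missing customer_id values")
if raw.duplicated().any():
    issues.append("duplicate rows")
if issues:
    print("Data quality warnings:", "; ".join(issues))

# Transform: drop invalid rows, de-duplicate, and normalise types.
clean = (
    raw.dropna(subset=["customer_id"])
       .drop_duplicates()
       .assign(
           customer_id=lambda df: df["customer_id"].astype(int),
           signup_date=lambda df: pd.to_datetime(df["signup_date"]),
       )
)

print(clean.dtypes)
```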

Map Developer

Bengaluru

5 - 8 years

INR 10.0 - 15.0 Lacs P.A.

Hybrid

Full Time

Role & responsibilities
• Develop interactive maps using libraries and technologies such as Leaflet.js, Mapbox, Google Maps API and OpenLayers.
• Implement H3 indexing for spatial partitioning and optimization to improve data analysis and map rendering performance (a short illustrative sketch follows this listing).
• Manage and optimize geospatial data querying, storage and transformation using Snowflake and Databricks.
• Leverage DuckDB for efficient local geospatial querying and real-time analysis.
• Develop and maintain clean, scalable, type-safe code using TypeScript for frontend and backend geospatial solutions.
• Build spatial queries, conduct geospatial analysis and optimize pipelines for mapping and visualization tasks.
• Collaborate with data engineers and backend developers to integrate geospatial data pipelines into cloud platforms (e.g., Snowflake and Databricks).
• Work with GIS tools (QGIS, ArcGIS) to analyse and visualize large-scale spatial data.
• Integrate mapping tools with cloud platforms and automate data workflows for geospatial analytics.
• Stay up to date with the latest tools and technologies in cloud data platforms, geospatial mapping and spatial data indexing.

Preferred candidate profile
• Bachelor's degree in computer science, geographic information systems (GIS), data engineering or a related field.
• Proficiency in mapping libraries/APIs: Google Maps, Mapbox, Leaflet, OpenLayers or similar.
• Experience with the H3 index for spatial indexing, analysis and partitioning.
• Strong hands-on experience with Snowflake and Databricks for managing, analysing and processing large-scale geospatial data.
• Proficiency with DuckDB for real-time geospatial querying.
• Strong programming skills in TypeScript and modern web technologies (HTML, CSS, JavaScript).
• Experience with geospatial data formats: GeoJSON, KML, Shapefiles and GPX.
• Familiarity with GIS software (QGIS, ArcGIS) for spatial data analysis.
• Solid understanding of SQL and experience optimizing spatial queries.
• Ability to collaborate in a cross-functional team and integrate solutions with cloud services.

Perks and benefits
• Training in Databricks.
• Support on certifications.
• Hands-on AWS, Azure and GCP.
• Support for any cloud certification.
• Build leadership skills.
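For illustration, a minimal Python sketch of H3 spatial indexing combined with local DuckDB aggregation, as mentioned in this listing. The point data is hypothetical; the h3-py v4 API (`latlng_to_cell`) is assumed here, while older h3-py releases expose `geo_to_h3` instead.

```python
import duckdb
import h3   # h3-py; v4 API assumed (older releases use geo_to_h3)
import pandas as pd

# Hypothetical point data; real feeds might come from GeoJSON files or a warehouse table.
points = pd.DataFrame({
    "lat": [51.5074, 51.5080, 48.8566],
    "lng": [-0.1278, -0.1281, 2.3522],
    "value": [10, 4, 7],
})

# Assign each point to an H3 cell at resolution 7 for spatial partitioning.
points["h3_cell"] = [
    h3.latlng_to_cell(lat, lng, 7) for lat, lng in zip(points["lat"], points["lng"])
]

# DuckDB can query the in-memory DataFrame directly for fast local aggregation.
per_cell = duckdb.sql(
    "SELECT h3_cell, COUNT(*) AS points, SUM(value) AS total "
    "FROM points GROUP BY h3_cell ORDER BY total DESC"
).df()

print(per_cell)
```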

Data Warehouse / ETL - Test Lead/Manager

Bengaluru

7 - 12 years

INR 15.0 - 27.5 Lacs P.A.

Hybrid

Full Time

Role & responsibilities
• Lead and manage a team of QA testers for ETL, data warehouse and BI report testing.
• Define and implement test strategies and test plans for DWH projects, covering data pipelines, data integrity and reporting accuracy.
• Develop and execute comprehensive test cases and SQL queries to validate data accuracy, completeness and transformation logic.
• Test ETL workflows for data extraction, transformation and loading, ensuring alignment with business requirements.
• Perform data validation testing and reconciliation across data sources, staging and data warehouses (a short reconciliation sketch follows this listing).
• Test and validate BI reports and dashboards built with tools such as Tableau, Power BI and ThoughtSpot, ensuring data correctness, visual accuracy and performance.
• Collaborate with BI developers and stakeholders to validate KPIs, data visualizations and user interface requirements.
• Perform performance testing for complex BI dashboards, ensuring scalability and optimal query performance.
• Oversee defect management and issue resolution using tools such as Jira or Azure DevOps.
• Conduct root cause analysis for data issues and collaborate with engineering teams on resolution.
• Automate data validation processes using scripting tools such as Python or frameworks such as dbt to optimize testing efficiency.
• Generate test reports and metrics, providing updates on testing progress, issues and resolution status.
• Drive continuous improvement in QA processes, tools and techniques to ensure testing scalability and efficiency.
• Ensure all testing adheres to organizational quality standards and best practices for DWH and BI projects.

Preferred candidate profile
• Bachelor's degree in computer science, information systems or a related field.
• 5+ years of experience in software testing with a strong focus on data warehousing, ETL testing and BI testing.
• Solid understanding of data warehouse concepts (star schema, snowflake schema, OLAP and dimensional modeling).
• Proficiency in writing complex SQL queries for data validation, reconciliation and transformation testing.
• Hands-on experience testing ETL tools such as Informatica, Talend, Apache Airflow or dbt.
• Expertise in testing and validating reports and dashboards built on BI tools: Tableau, Power BI, ThoughtSpot.
• Familiarity with cloud-based DWH platforms such as Snowflake, Databricks, AWS Redshift or Azure Synapse.
• Experience with defect management tools such as Jira, TestRail or Azure DevOps.
• Strong analytical skills with the ability to troubleshoot data quality and performance issues.
• Experience with performance testing and optimization for BI dashboards and large-scale datasets.
• Excellent communication, leadership and stakeholder management skills.
• Hands-on experience automating data validation using scripting languages such as Python or tools such as dbt.
• Familiarity with big data tools such as Apache Spark, Hive or Kafka.
• ISTQB certification or similar QA certifications.
• Working knowledge of CI/CD pipelines and integration of automated data tests.
• Experience with data governance, security and compliance practices.

Perks and benefits
• Training in Databricks.
• Training and certification in ThoughtSpot.
• Support on certifications.
• Hands-on AWS, Azure and GCP.
• Support for any cloud certification.
• Build leadership skills.
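A minimal illustrative sketch of source-to-target reconciliation of the sort this listing describes: comparing row counts and a simple checksum between two databases. The table, column and connections are hypothetical, and sqlite3 stands in for the real source and warehouse connections.

```python
import sqlite3

# Hypothetical connections; in practice these would use the source system's
# and the warehouse's own DB-API drivers.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")

for con in (source, target):
    con.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
    con.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 25.5)])
target.execute("INSERT INTO orders VALUES (3, 5.0)")  # simulate drift in the target

def reconcile(src, tgt, table):
    """Compare row counts and an amount checksum between source and target."""
    q = f"SELECT COUNT(*), COALESCE(SUM(amount), 0) FROM {table}"
    src_count, src_sum = src.execute(q).fetchone()
    tgt_count, tgt_sum = tgt.execute(q).fetchone()
    mismatches = []
    if src_count != tgt_count:
        mismatches.append(f"row count {src_count} != {tgt_count}")
    if round(src_sum, 2) != round(tgt_sum, 2):
        mismatches.append(f"amount sum {src_sum} != {tgt_sum}")
    return mismatches

problems = reconcile(source, target, "orders")
print("Reconciliation issues:" if problems else "Reconciliation passed:", problems)
```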

Data Engineer

Bengaluru

5 - 8 years

INR 15.0 - 20.0 Lacs P.A.

Work from Office

Full Time

Role & responsibilities
• 3+ years of experience in Spark, Databricks, Hadoop, and data and ML engineering.
• 3+ years of experience designing architectures using AWS cloud services and Databricks.
• Architect, design and build a big data platform (data lake / data warehouse / lakehouse) using Databricks services, integrating with the wider AWS cloud services.
• Knowledge of and experience with infrastructure as code and CI/CD pipelines to build and deploy the data platform tech stack and solutions.
• Hands-on Spark experience supporting and developing data engineering (ETL/ELT) and machine learning (ML) solutions using Python, Spark, Scala or R.
• Distributed systems fundamentals and optimising Spark distributed computing.
• Experience setting up batch and streaming data pipelines using Databricks DLT, jobs and streams (a short streaming sketch follows this listing).
• Understand the concepts and principles of data modelling, databases and tables, and produce, maintain and update relevant data models across multiple subject areas.
• Design, build and test medium- to large-scale data pipelines (ETL/ELT) based on feeds from multiple systems, using a range of storage technologies and/or access methods; implement data quality validation and create repeatable, reusable pipelines.
• Experience designing metadata repositories, understanding the range of metadata tools and technologies used to implement them, and working with metadata.
• Understand build automation concepts and implement automation pipelines to build, test and deploy changes to higher environments.
• Define and execute test cases and scripts, and understand the role of testing and how it works.

Preferred candidate profile
• Big data technologies: Databricks, Spark, Hadoop, EMR or Hortonworks.
• Solid hands-on experience in Python, Spark, SQL, Spark SQL, Spark Streaming, Hive and Presto.
• Experience with Databricks components and APIs such as notebooks, jobs, DLT, interactive and job clusters, SQL warehouses, policies, secrets, DBFS, Hive Metastore, Glue Metastore, Unity Catalog and MLflow.
• Knowledge of and experience with AWS Lambda, VPC, S3, EC2, API Gateway, IAM users, roles and policies, Cognito, Application Load Balancer, Glue, Redshift, Spectrum, Athena and Kinesis.
• Experience with source control tools such as Git, Bitbucket or AWS CodeCommit, and automation tools such as Jenkins, AWS CodeBuild and CodeDeploy.
• Hands-on experience with Terraform and the Databricks API to automate the infrastructure stack.
• Experience implementing CI/CD and MLOps pipelines using Git, GitHub Actions or Jenkins.
• Experience delivering project artifacts such as design documents, test cases, traceability matrices and low-level design documents.
• Build reference architectures, how-tos and demo applications for customers.
• Ready to complete certifications.
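For illustration, a minimal Spark Structured Streaming sketch of the streaming side of the batch-and-streaming pipeline pattern this listing mentions. The paths, schema and trigger interval are hypothetical, and a Delta-enabled environment (such as Databricks) is assumed for the sink.

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

# An explicit schema is required for streaming file sources.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_ts", TimestampType()),
    StructField("amount", DoubleType()),
])

# Read newly arriving JSON files as a stream (hypothetical landing path).
events = (
    spark.readStream
         .schema(schema)
         .json("s3://example-bucket/landing/events/")
)

# Write to a Delta table with checkpointing so the stream can recover after restarts.
query = (
    events.writeStream
          .format("delta")
          .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
          .trigger(processingTime="1 minute")
          .outputMode("append")
          .start("s3://example-bucket/curated/events/")
)

query.awaitTermination()
```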

7dxperts

IT Services and IT Consulting

London, England

11-50 Employees

5 Jobs

Key People

• Jane Doe, CEO
• John Smith, CTO



Job Titles Overview

Data Engineer (3)
Map Developer (1)
Data Warehouse / ETL - Test Lead/Manager (1)