About Indxx
Indxx seeks to redefine the global indexing space. We create indices that serve as the basis for financial products and benchmarks. Founded in 2005, with offices in New York and New Delhi, the firm focuses on Index Development, Index Calculation, and Analytics & Research, combining these services in a holistic, customized approach that is unique to the industry and delivers maximum benefit to our clients. In 2009, we launched our first indices serving as underlying benchmarks for ETFs. Since then, we have developed and licensed a multitude of indices to clients globally; we also offer real-time and end-of-day calculation services and provide many fund managers with tailor-made research and analytics.

Location: Gurgaon
Role
We are looking for a Lead Data Engineer to join our team and drive the development of our data warehouse, data pipelines, ETL processes, and data integrations to support our data analytics and business intelligence needs.
Key Responsibilities
In this role, you will design, build, and maintain efficient and reliable data pipelines, transforming raw data into actionable insights. You will work closely with developers, analysts, and business stakeholders to optimize data flow and ensure data accessibility across the organization:
- Write and optimize large, complex SQL queries and SQL Server stored procedures.
- Develop, optimize, and maintain scalable data pipelines and ETL processes to support ongoing data analytics and reporting.
- Build and manage data infrastructure, ensuring the efficient storage, transformation, and retrieval of structured and unstructured data from multiple sources.
- Collaborate with developers, analysts, and other engineers to ensure data accuracy, quality, and availability for analysis and reporting.
- Design, implement, and maintain data architecture, databases, and warehouses, supporting business intelligence and advanced analytics requirements.
- Monitor and troubleshoot data pipeline performance and resolve any issues to ensure data processing efficiency and reliability.
- Develop and enforce data quality standards, data governance policies, and data security protocols across the data lifecycle.
- Leverage cloud platforms (AWS, GCP, Azure) for data engineering tasks, implementing best practices for cloud-native data storage, processing, and pipeline automation.
Ideally, You Should Have
- 5-8 years of experience building data-intensive applications.
- Proficiency in programming languages such as Python or Scala for data processing and manipulation.
- Experience with data warehousing solutions like Snowflake, Redshift, or BigQuery.
- Familiarity with cloud platforms (AWS, GCP, or Azure) and their respective data services (e.g., S3, EMR, BigQuery).
- Strong understanding of relational databases (e.g., MySQL, PostgreSQL) and non-relational databases (e.g., MongoDB, Cassandra).
- Solid understanding of data modelling, data warehousing concepts, and data governance best practices.
- Experience handling common database administration tasks such as upgrades, backup, recovery, and migration.
- Experience with data pipeline tools such as Apache Airflow and dbt for ETL orchestration and automation would be an added advantage.
- Familiarity with CI/CD tools and practices for data engineering.
- Experience with data quality frameworks, monitoring, and validation techniques.
Benefits
- Work on a ground-breaking FinTech product innovation with a global impact.
- Opportunity to work with open-minded, experienced, and talented cross-functional and cross-cultural teams.
- Collaboration with experienced management and an incredible network of board members and investors.