Posted: 1 week ago | Platform: LinkedIn


Work Mode: On-site

Job Type: Full Time

Job Description

MTS 3 (5-8 yrs)



About the Role

We're seeking a Senior Data Engineer to join our team at SMC Global Securities. In this role, you will play a crucial part in building and maintaining the data infrastructure that powers our financial services and trading platforms. You will work with a diverse tech stack to handle massive volumes of real-time and historical financial data, ensuring that our analytics, research, and business teams have access to high-quality, reliable information to support our brokerage, wealth management, and advisory services. You will be responsible for designing, governing, and evolving the core data infrastructure that powers our financial services, trading platforms, and strategic decision-making.


Responsibilities

  • Design, build, and maintain highly efficient and scalable real-time & batch data pipelines to ingest, process, and analyze financial data, including market data, trades, and client information.

  • Own the high-level design and architecture for our Data Lakehouse environments, ensuring they align with business strategy and scalability requirements.
  • Implement and enforce data modeling standards, with a strong understanding of modeling techniques including Data Vault, dimensional, and star/snowflake schema modeling, to create a robust and flexible data architecture for financial data.
  • Build data pipelines using a Medallion Architecture, progressing data through Bronze (raw), Silver (cleansed), and Gold (analytics-ready) layers to ensure data quality, lineage, and auditability for regulatory compliance (a minimal layering sketch appears after this list).
  • Develop and optimize composable data architectures, focusing on creating modular, reusable data components that can be easily combined to support different business needs, such as risk management, algorithmic trading, and research analytics.

  • Develop and optimize data transformation processes for financial data using tools like DBT and data processing frameworks like Apache Spark on EMR.
  • Manage and maintain data storage solutions across our Data Lake (S3), Data Warehouse (Redshift), and Data Lakehouse architectures, including Hive & Iceberg tables, to handle large-scale financial datasets.

  • Write and optimize complex SQL queries and Python scripts for financial data extraction, transformation, and loading (ETL/ELT).
  • Review code and designs from peers, providing constructive feedback to elevate team output, enforce best practices, and ensure system reliability.

  • Implement and orchestrate data workflows for market data ingestion, trade processing, and reporting using DLT, DBT, PySpark, and Apache Airflow.
  • Utilize AWS cloud services such as Lambda, Glue, S3, Athena, and Redshift to build robust data solutions.

  • Collaborate with research analysts, quant traders, and business intelligence teams to understand data needs and build data models that support financial analysis, reporting (e.g., IPO reports), and the development of trading tools.

  • Architect and enforce the implementation of a comprehensive Data Quality framework to ensure the integrity, accuracy, and reliability of all financial data products.
  • Proactively manage stakeholders, translating complex business needs (e.g., for risk management, algorithmic trading, IPO reports) into clear technical specifications and data solutions.

  • Document data pipelines, data models, and processes for maintainability and knowledge sharing.
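
For orientation, here is a minimal sketch of the Bronze/Silver/Gold layering mentioned above, written in PySpark. The bucket paths, column names, and trades schema are hypothetical placeholders for illustration, not SMC's actual pipeline.

```python
# A minimal sketch of Bronze -> Silver -> Gold layering with PySpark.
# Bucket paths, column names, and the trades schema are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw trade events as-is, adding only ingestion metadata.
bronze = (
    spark.read.json("s3://example-bucket/raw/trades/")  # hypothetical source path
    .withColumn("ingested_at", F.current_timestamp())
)
bronze.write.mode("append").parquet("s3://example-bucket/bronze/trades/")

# Silver: cleanse and conform - deduplicate, drop bad rows, standardise types.
silver = (
    bronze.dropDuplicates(["trade_id"])
    .filter(F.col("quantity") > 0)
    .withColumn("trade_date", F.to_date("executed_at"))
)
silver.write.mode("overwrite").parquet("s3://example-bucket/silver/trades/")

# Gold: analytics-ready aggregate, e.g. daily traded value per symbol.
gold = (
    silver.groupBy("trade_date", "symbol")
    .agg(F.sum(F.col("price") * F.col("quantity")).alias("traded_value"))
)
gold.write.mode("overwrite").parquet("s3://example-bucket/gold/daily_traded_value/")
```

In a production Lakehouse the Silver and Gold writes would more likely target Hive or Iceberg tables rather than plain Parquet paths, but the layering idea is the same.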

Requirements

  • Strong proficiency in SQL and Python is a must, with a focus on data manipulation and analysis.
  • Good understanding of data modeling concepts and experience with various modeling techniques in a financial context.
  • Strong understanding of Medallion and composable data architectures.
  • Solid understanding of data architecture concepts, including Data Lake, Data Warehouse, and Data Lakehouse.

  • Hands-on experience with AWS cloud services, including but not limited to S3, Redshift, Athena, Glue, EMR, and Lambda.
  • Experience with open-source data tools like Airflow, DBT, DLT, and Airbyte (a brief orchestration sketch follows this list).


  • Familiarity with Hive & Iceberg tables is essential.

  • Proven experience building and maintaining data pipelines, handling large volumes of financial data.

  • Experience with reporting or business intelligence tools (e.g., Metabase).

  • Excellent problem-solving skills and the ability to work independently or as part of a team.

  • Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.

  • Prior experience in the financial services, trading, or fintech industry is highly preferred.
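
As referenced above, here is a brief sketch of how these tools might be orchestrated together. It assumes Airflow 2.x with the Bash operator; the DAG name, schedule, and task commands are hypothetical placeholders rather than an actual SMC workflow.

```python
# A hedged sketch of an Airflow DAG ordering ingestion, DBT transforms, and
# reporting. Assumes Airflow 2.x; dag_id, schedule, and commands are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="market_data_daily",                # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",                # placeholder schedule
    catchup=False,
) as dag:
    ingest = BashOperator(
        task_id="ingest_market_data",
        bash_command="python ingest_market_data.py",       # placeholder ingestion script
    )
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir ./analytics",  # placeholder dbt project
    )
    report = BashOperator(
        task_id="publish_reports",
        bash_command="python publish_reports.py",          # placeholder reporting step
    )

    # Ingest raw data first, then run DBT transforms, then publish reports.
    ingest >> transform >> report
```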
