5 - 7 years
Posted: 6 days ago
On-site
Full Time
We are scaling an AI/ML-enabled enterprise SaaS solution that helps large enterprises, including multiple Fortune 500 companies, manage cash performance. You will own architecture responsibility during the product's 1-to-10 scaling journey in the FinTech AI space.
Interview Process:
3 Technical Sessions + 1 CTO Round + 1 F2F Managerial Round (mandatory)
Job Role:
Design, build, and optimize data pipelines to ingest, process, transform, and load data from various sources into our data platform
Implement and maintain ETL workflows using tools like Debezium, Kafka, Airflow, and Jenkins to ensure reliable and timely data processing (see the orchestration sketch following this list)
Develop and optimize SQL and NoSQL database schemas, queries, and stored procedures for efficient data retrieval and processing
Work with both relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, DocumentDB) to build scalable data solutions
Design and implement data warehouse solutions that support analytical needs and machine learning applications
Collaborate with data scientists and ML engineers to prepare data for AI/ML models and implement data-driven features
Implement data quality checks, monitoring, and alerting to ensure data accuracy and reliability
Optimize query performance across database systems through indexing, partitioning, and query refactoring (see the indexing sketch following this list)
Develop and maintain documentation for data models, pipelines, and processes
Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs
Stay current with emerging technologies and best practices in data engineering
Ability to perform independent research to understand the product requirements and customer needs
Communicate effectively with project teams and other stakeholders; translate technical details for non-technical audiences.
Expert at creating architectural artifacts for data warehouses.
Team and effort management.
Ability to set expectations with the client and the team, ensuring all deliverables are delivered on time and at the highest quality.
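For illustration, the orchestration responsibilities above often take a shape like the following minimal Airflow DAG. This is a sketch assuming Apache Airflow 2.4+; the dag_id, schedule, and task callables are hypothetical placeholders, and the final task stands in for the data-quality gate mentioned above.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_changes(**context):
    # Placeholder: read change events (e.g. landed via Debezium/Kafka)
    # from a staging location.
    ...


def transform_changes(**context):
    # Placeholder: clean, deduplicate, and conform the extracted rows.
    ...


def load_warehouse(**context):
    # Placeholder: upsert the transformed rows into the warehouse.
    ...


def check_row_counts(**context):
    # Placeholder quality gate: raise if the load looks incomplete,
    # failing the run so alerting can fire.
    ...


with DAG(
    dag_id="orders_etl",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",  # the `schedule` argument requires Airflow 2.4+
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
):
    extract = PythonOperator(task_id="extract", python_callable=extract_changes)
    transform = PythonOperator(task_id="transform", python_callable=transform_changes)
    load = PythonOperator(task_id="load", python_callable=load_warehouse)
    quality = PythonOperator(task_id="quality_check", python_callable=check_row_counts)

    # Linear ETL with a trailing data-quality gate.
    extract >> transform >> load >> quality
```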
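The query-optimization work listed above typically starts by reading a plan, then adding an index that matches the filter columns. A minimal sketch assuming psycopg2 and PostgreSQL; the payments table, its columns, and the connection string are hypothetical.

```python
import psycopg2

# Hypothetical connection string; adjust to the target database.
DSN = "dbname=analytics user=etl host=localhost"

with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
    # 1. Read the plan for a slow query before changing anything;
    #    a sequential scan over a large table is the usual culprit.
    cur.execute(
        "EXPLAIN ANALYZE SELECT * FROM payments "
        "WHERE customer_id = %s AND created_at >= %s",
        (42, "2024-01-01"),
    )
    for (line,) in cur.fetchall():
        print(line)

    # 2. Add a composite index matching the filter columns so the
    #    planner can use an index scan instead.
    cur.execute(
        "CREATE INDEX IF NOT EXISTS idx_payments_customer_created "
        "ON payments (customer_id, created_at)"
    )
```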
5+ years of experience in data engineering or related roles with a proven track record of building data pipelines and infrastructure
Strong proficiency in SQL and experience with relational databases like MySQL and PostgreSQL
Hands-on experience with NoSQL databases such as MongoDB or AWS DocumentDB
Expertise in designing, implementing, and optimizing ETL processes using tools like Kafka, Debezium, Airflow, or similar technologies (a change-data-capture consumer sketch follows this list)
Experience with data warehousing concepts and technologies
Solid understanding of data modeling principles and best practices for both operational and analytical systems
Proven ability to optimize database performance, including query optimization, indexing strategies, and database tuning
Experience with AWS data services such as RDS, Redshift, S3, Glue, and Kinesis, as well as the ELK stack
Proficiency in at least one programming language (Python, Node.js, Java)
Experience with version control systems (Git) and CI/CD pipelines
Bachelor's degree in Computer Science, Engineering, or a related field from premier colleges (IIT / NIT / BITS / REC)
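As a taste of the Debezium/Kafka requirement above, a change-data-capture consumer often looks like the sketch below. It assumes the kafka-python and pymongo client libraries; the topic, broker address, and collection names are hypothetical, and the envelope layout (row state under payload.after) is Debezium's default JSON format.

```python
import json

from kafka import KafkaConsumer
from pymongo import MongoClient

consumer = KafkaConsumer(
    "dbserver1.inventory.customers",  # hypothetical Debezium topic
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v) if v else None,
)
collection = MongoClient("mongodb://localhost:27017")["staging"]["customers"]

for message in consumer:
    event = message.value
    if not event:
        continue  # tombstone record, emitted after deletes
    # Debezium's default JSON envelope carries the new row state
    # under payload.after (None for delete events).
    after = (event.get("payload") or {}).get("after")
    if after is None:
        continue  # delete event; handle separately in real use
    # Upsert the latest row state keyed by the source primary key.
    collection.replace_one({"_id": after["id"]}, after, upsert=True)
```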
Accedepro Private Limited
Hyderabad, Telangana, India
Salary: Not disclosed