Jobs
Interviews

2 AWS DocumentDB Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are scaling an AI/ML-enabled enterprise SaaS solution that helps manage the cash performance of large enterprises, including multiple Fortune 500 companies. You will own architecture responsibility during the 1-to-10 journey of the product in the FinTech AI space.

Senior Data Engineer [Fintech] | 6-9 Y | Hyderabad (Hybrid) | AI Product Company

Preferences: FinTech experience and local candidates; F2F required for the final round at the Hyderabad office. Engineering & CS graduates from premium colleges - IIT / NIT / BITS / REC.

Interview Process: 3 technical sessions + 1 CTO round + 1 F2F managerial round (must).

Job Role:
- Design, build, and optimize data pipelines to ingest, process, transform, and load data from various sources into our data platform
- Implement and maintain ETL workflows using tools like Debezium, Kafka, Airflow, and Jenkins to ensure reliable and timely data processing
- Develop and optimize SQL and NoSQL database schemas, queries, and stored procedures for efficient data retrieval and processing
- Work with both relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, DocumentDB) to build scalable data solutions
- Design and implement data warehouse solutions that support analytical needs and machine learning applications
- Collaborate with data scientists and ML engineers to prepare data for AI/ML models and implement data-driven features
- Implement data quality checks, monitoring, and alerting to ensure data accuracy and reliability
- Optimize query performance across various database systems through indexing, partitioning, and query refactoring
- Develop and maintain documentation for data models, pipelines, and processes
- Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs
- Stay current with emerging technologies and best practices in data engineering
- Perform independent research to understand product requirements and customer needs
- Communicate effectively with project teams and other stakeholders; translate technical details for non-technical audiences
- Create architectural artifacts for the data warehouse
- Manage team and effort; set expectations for the client and the team
- Ensure all deliverables are completed on time at the highest quality

Required Skills:
- 5+ years of experience in data engineering or related roles, with a proven track record of building data pipelines and infrastructure
- Strong proficiency in SQL and experience with relational databases like MySQL and PostgreSQL
- Hands-on experience with NoSQL databases such as MongoDB or AWS DocumentDB
- Expertise in designing, implementing, and optimizing ETL processes using tools like Kafka, Debezium, Airflow, or similar technologies
- Experience with data warehousing concepts and technologies
- Solid understanding of data modeling principles and best practices for both operational and analytical systems
- Proven ability to optimize database performance, including query optimization, indexing strategies, and database tuning
- Experience with AWS data services such as RDS, Redshift, S3, Glue, Kinesis, and the ELK stack
- Proficiency in at least one programming language (Python, Node.js, Java)
- Experience with version control systems (Git) and CI/CD pipelines
- Bachelor's degree in Computer Science, Engineering, or a related field from a premium college - IIT / NIT / BITS / REC

Note: only applicants with a notice period of 0-60 days will be considered. Apply here.

Posted 1 month ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Hyderabad

Work from Office

Responsibilities:
- Design, build, and optimize data pipelines to ingest, process, transform, and load data from various sources into our data platform
- Implement and maintain ETL workflows using tools like Debezium, Kafka, Airflow, and Jenkins to ensure reliable and timely data processing
- Develop and optimize SQL and NoSQL database schemas, queries, and stored procedures for efficient data retrieval and processing
- Work with both relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, DocumentDB) to build scalable data solutions
- Design and implement data warehouse solutions that support analytical needs and machine learning applications
- Collaborate with data scientists and ML engineers to prepare data for AI/ML models and implement data-driven features
- Implement data quality checks, monitoring, and alerting to ensure data accuracy and reliability
- Optimize query performance across various database systems through indexing, partitioning, and query refactoring
- Develop and maintain documentation for data models, pipelines, and processes
- Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs
- Stay current with emerging technologies and best practices in data engineering

Requirements:
- 5+ years of experience in data engineering or related roles, with a proven track record of building data pipelines and infrastructure
- Strong proficiency in SQL and experience with relational databases like MySQL and PostgreSQL
- Hands-on experience with NoSQL databases such as MongoDB or AWS DocumentDB
- Expertise in designing, implementing, and optimizing ETL processes using tools like Kafka, Debezium, Airflow, or similar technologies
- Experience with data warehousing concepts and technologies
- Solid understanding of data modeling principles and best practices for both operational and analytical systems
- Proven ability to optimize database performance, including query optimization, indexing strategies, and database tuning
- Experience with AWS data services such as RDS, Redshift, S3, Glue, Kinesis, and the ELK stack
- Proficiency in at least one programming language (Python, Node.js, Java)
- Experience with version control systems (Git) and CI/CD pipelines

Job Description:
- Experience with graph databases (Neo4j, Amazon Neptune)
- Knowledge of big data technologies such as Hadoop, Spark, Hive, and data lake architectures
- Experience working with streaming data technologies and real-time data processing
- Familiarity with data governance and data security best practices
- Experience with containerization technologies (Docker, Kubernetes)
- Understanding of financial back-office operations and the FinTech domain
- Experience working in a high-growth startup environment

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies