Posted: 1 month ago
Work from Office
Full Time
Required skills:
• Strong hands-on experience in Python programming and PySpark
• Experience using AWS services (Redshift, Glue, EMR, S3, and Lambda)
• Experience working with Apache Spark and the Hadoop ecosystem
• Experience writing and optimizing SQL for data manipulation
• Good exposure to scheduling tools; Airflow is preferred
• Must have data warehouse experience with AWS Redshift or Hive
• Experience implementing security measures for data protection
• Expertise in building and testing complex data pipelines for ETL processes, batch and near real time (see the sketch after this list)
• Readable documentation of all components being developed
• Knowledge of database technologies for OLTP and OLAP workloads
Good to have:
• Good understanding of data warehouses and data lakes
• Familiarity with ETL tools like Netezza or Informatica
• Experience working with NoSQL databases like DynamoDB or MongoDB
• Exposure to AWS services such as Step Functions and Athena
• Exposure to data modelling
• Familiarity with the Investment Banking domain
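Purely as an illustration of the kind of batch ETL pipeline described above, here is a minimal PySpark sketch that reads raw JSON events from S3, cleanses them, and writes partitioned Parquet for a downstream Redshift load. All bucket names, paths, and column names (example-raw-bucket, trade_id, executed_at, and so on) are hypothetical assumptions, not details from this posting.

# Minimal PySpark batch-ETL sketch. Bucket names, paths, and columns
# are hypothetical placeholders; assumes S3 credentials are configured.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-trades-etl").getOrCreate()

# Extract: read raw JSON events landed in S3.
raw = spark.read.json("s3://example-raw-bucket/trades/2024-01-01/")

# Transform: drop duplicate trades, keep positive amounts, derive a date.
clean = (
    raw.dropDuplicates(["trade_id"])
       .filter(F.col("amount") > 0)
       .withColumn("trade_date", F.to_date("executed_at"))
)

# Load: write date-partitioned Parquet that Redshift Spectrum or a
# COPY-based load job can pick up downstream.
(clean.write
      .mode("overwrite")
      .partitionBy("trade_date")
      .parquet("s3://example-curated-bucket/trades/"))

spark.stop()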
Responsibilities:
1. Create and maintain optimal data pipeline architecture for efficient and reliable data processing (a minimal Airflow scheduling sketch follows this list).
2. Assemble large, complex data sets that meet functional and non-functional business requirements.
3. Database management, ETL processes, data modelling, and ensuring data quality and integrity.
4. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
5. Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
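Since the posting calls out Airflow for scheduling, here is a minimal sketch of how the PySpark job above might be orchestrated as a daily DAG. The DAG id, schedule, and spark-submit path are illustrative assumptions, not the employer's actual setup.

# Minimal Airflow 2.x DAG sketch for a daily PySpark ETL job.
# The DAG id, schedule, and script path are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_trades_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    # Submit the batch job to an existing Spark installation.
    run_etl = BashOperator(
        task_id="run_spark_etl",
        bash_command="spark-submit /opt/jobs/daily_trades_etl.py",
    )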
Tata Consultancy Services
Location: Pune, Chennai, Bengaluru
Salary: 15.0 - 30.0 Lacs P.A.