0.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role Summary & Role Description: Technical Manager with specific Oracle, PL/SQL and design, develop, and optimize data workflows on the Databricks platform. The ideal candidate will have deep expertise in Apache Spark, PySpark, Python, job orchestration, and CI/CD integration to support scalable data engineering and analytics solutions. Analyzes, designs, develops and maintains software applications to support business units. Expected to spend 80% of the time on hands-on development, design and architecture and remaining 20% on guiding the team on technology and removing other impediments Capital Markets Projects experience preferred Provides advanced technical expertise in analyzing, designing, estimating, and developing software applications to project schedule. Oversees systems design and implementation of most complex design components. Creates project plans and deliverables and monitors task deadlines. Oversees, maintains and supports existing software applications. Provides subject matter expertise in reviewing, analyzing, and resolving complex issues. Designs and executes end to end system tests of new installations and/or software prior to release to minimize failures and impact to business and end users. Responsible for resolution, communication, and escalation of critical technical issues. Prepares user and systems documentation as needed. Identifies and recommends Industry best practices. Serves as a mentor to junior staff. Acts as a technical lead/mentor for developers in day to day and overall project areas. Ability to lead a team of agile developers. Worked in a complex deadline driven projects with minimal supervision. Ability to architect/design/develop with minimum requirements by effectively coordinating activities between business analysts, scrum leads, developers and managers. Ability to provide agile status notes on day to day project tasks. Key Responsibilities: Data Lakehouse Development: Design and implement scalable data Lakehouse solutions using Databricks and Delta Lake . Develop and maintain Delta Lake tables with ACID transactions and schema evolution. Data Ingestion & Autoloaders: Build ingestion pipelines using Databricks Autoloader for real-time and batch data. Integrate data from various sources including cloud storage (e.g., ADLS, S3), databases, APIs, and streaming platforms. ETL & Data Enrichment: Develop ETL workflows to cleanse, transform, and enrich raw data. Implement business logic and data quality checks to ensure reliability and accuracy. Performance Optimization: Optimize Spark jobs for performance and cost-efficiency. Monitor and troubleshoot data pipelines using Databricks tools and logging frameworks. Data Access & Governance: Enable secure and governed access to data using Unity Catalog or other access control mechanisms. Collaborate with data analysts and scientists to ensure data availability and usability. Collaboration & Documentation: Work closely with cross-functional teams including data architects, analysts, and business stakeholders. Document data models, pipeline designs, and operational procedures. Technical Skills: Design and implement robust ETL pipelines using Databricks notebooks and workflows. Strong proficiency in Apache Spark, PySpark, and Databricks. Hands-on experience with Delta Lake, Autoloader, and Structured Streaming. Proficiency in SQL, Python, and cloud platforms (Azure, AWS, or GCP). Experience with ETL frameworks, data modeling, and data warehousing concepts. 
Technical Skills: Design and implement robust ETL pipelines using Databricks notebooks and workflows. Strong proficiency in Apache Spark, PySpark, and Databricks. Hands-on experience with Delta Lake, Autoloader, and Structured Streaming. Proficiency in SQL, Python, and cloud platforms (Azure, AWS, or GCP). Experience with ETL frameworks, data modeling, and data warehousing concepts. Familiarity with CI/CD, Git, and DevOps practices in data engineering. Knowledge of data governance, security, and compliance standards. Ability to optimize Spark jobs for performance and cost-efficiency. Develops and manages job orchestration strategies using Databricks Jobs and Workflows. Monitors and troubleshoots production jobs, ensuring reliability and data quality. Implements security and governance best practices, including access control and encryption. Strong practical experience using Scrum, agile modelling, and adaptive software development. Ability to understand and grasp the big picture of system components. Experience producing environment, architecture, and design guides and application blueprints. Strong understanding of data modeling, warehousing, and performance tuning. Excellent problem-solving and communication skills.
Core/Must-have skills: Oracle, SQL, PL/SQL, Python, Scala, Apache Spark, Spark Streaming, CI/CD pipelines, AWS cloud experience. Exposure to real-time data processing and event-driven architectures.
Good-to-have skills: Databricks certifications (e.g., Data Engineer Associate/Professional). Experience with Unity Catalog, MLflow, or Delta Live Tables.
Work Schedule: 12 PM IST to 9 PM IST.
Why this role is important to us: Our technology function, Global Technology Services (GTS), is vital to State Street and is the key enabler for our business to deliver data and insights to our clients. We're driving the company's digital transformation and expanding business capabilities using industry best practices and advanced technologies such as cloud, artificial intelligence and robotic process automation. We offer a collaborative environment where technology skills and innovation are valued in a global organization. We're looking for top technical talent to join our team and deliver creative technology solutions that help us become an end-to-end, next-generation financial services company. Join us if you want to grow your technical skills, solve real problems and make your mark on our industry.
About State Street: What we do. State Street is one of the largest custodian banks, asset managers and asset intelligence companies in the world. From technology to product innovation, we're making our mark on the financial services industry. For more than two centuries, we've been helping our clients safeguard and steward the investments of millions of people. We provide investment servicing, data & analytics, investment research & trading and investment management to institutional clients.
Work, Live and Grow. We make all efforts to create a great work environment. Our benefits packages are competitive and comprehensive. Details vary by location, but you may expect generous medical care, insurance and savings plans, among other perks. You'll have access to flexible Work Programs to help you match your needs. And our wealth of development programs and educational support will help you reach your full potential.
Inclusion, Diversity and Social Responsibility. We truly believe our employees' diverse backgrounds, experiences and perspectives are a powerful contributor to creating an inclusive environment where everyone can thrive and reach their maximum potential while adding value to both our organization and our clients. We warmly welcome candidates of diverse origin, background, ability, age, sexual orientation, gender identity and personality.
Another fundamental value at State Street is active engagement with our communities around the world, both as a partner and a leader. You will have tools to help balance your professional and personal life, paid volunteer days, matching gift programs and access to employee networks that help you stay connected to what matters to you. State Street is an equal opportunity and affirmative action employer. Discover more at StateStreet.com/careers
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
Navi Mumbai, Maharashtra
On-site
The ideal candidate will be responsible for designing and implementing streaming data pipelines that integrate Kafka with Databricks using Structured Streaming. You will also be tasked with architecting and maintaining the Medallion Architecture, which consists of well-defined Bronze, Silver, and Gold layers. Additionally, you will need to implement efficient data ingestion processes using Databricks Autoloader for high-throughput data loads. You will work with large volumes of structured and unstructured data to ensure high availability and performance, applying performance tuning techniques like partitioning, caching, and cluster resource optimization. Collaboration with cross-functional teams, including data scientists, analysts, and business users, is essential to build robust data solutions. The role also involves establishing best practices for code versioning, deployment automation, and data governance.
The required technical skills for this position include strong expertise in Azure Databricks and Spark Structured Streaming, along with at least 7 years of experience in Data Engineering. You should be familiar with processing modes (append, update, complete), output modes (append, complete, update), checkpointing, and state management. Experience with Kafka integration for real-time data pipelines, a deep understanding of the Medallion Architecture, proficiency with Databricks Autoloader and schema evolution, and familiarity with Unity Catalog and foreign catalogs are also necessary. Strong knowledge of Spark SQL, Delta Lake, and DataFrames, expertise in performance tuning, data management strategies, governance, access management, data modeling, data warehousing concepts, and Databricks as a platform, as well as a solid understanding of window functions, will be beneficial in this role.
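For illustration only, not taken from the posting: a minimal PySpark Structured Streaming sketch of the Kafka-to-Bronze leg of such a pipeline. The broker address, topic, checkpoint path, and table name are assumptions.

```python
# Hedged sketch: Kafka -> Bronze Delta table via Structured Streaming (append mode).
# Broker, topic, checkpoint path, and table names are hypothetical; `spark` is the
# ambient SparkSession on Databricks.
from pyspark.sql import functions as F

kafka_df = (
    spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker-1:9092")   # assumed broker
        .option("subscribe", "orders")                        # assumed topic
        .option("startingOffsets", "latest")
        .load()
)

# Keep the raw payload plus Kafka metadata in the Bronze layer.
bronze = kafka_df.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "topic", "partition", "offset", "timestamp",
)

(bronze.writeStream
    .outputMode("append")                                     # append output mode
    .option("checkpointLocation", "/mnt/chk/orders_bronze")   # progress and state recovery
    .toTable("lakehouse.bronze.orders"))                      # assumed Bronze table
```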
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Navi Mumbai, Maharashtra
On-site
As a Senior/Lead Data Engineer in our team based in Mumbai, IN, you will be responsible for leveraging your 6-8 years of IT experience, with at least 5+ years specifically in Data Engineering. Your expertise will be utilized in various areas, including Kafka integration with Databricks and Structured Streaming concepts such as processing modes, output modes, and checkpointing. Familiarity with the Medallion Architecture (Bronze, Silver, Gold layers) and Databricks Autoloader will be key aspects of your role. Moreover, your experience in working with large volumes of data and implementing performance optimization techniques like partitioning, caching, and cluster tuning will be crucial for success in this position. Additionally, your ability to engage effectively with clients and your excellent communication skills will play a vital role in delivering high-quality solutions. If you are looking for a challenging opportunity where you can apply your Data Engineering skills to drive impactful results and work in a dynamic environment, we encourage you to apply for this role and be a part of our innovative team.
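As an illustrative aside, not part of the posting: a minimal sketch of a Silver-layer streaming aggregation showing the output-mode, watermarking, and checkpointing concepts mentioned above. The source and target table names, column names, and paths are assumptions.

```python
# Hedged sketch: Bronze -> Silver windowed aggregation with Structured Streaming.
# Table names, the "timestamp" column, and the checkpoint path are hypothetical;
# `spark` is the ambient SparkSession on Databricks.
from pyspark.sql import functions as F

silver_counts = (
    spark.readStream.table("lakehouse.bronze.orders")         # assumed Bronze table
        .withWatermark("timestamp", "10 minutes")             # bounds the streaming state
        .groupBy(F.window("timestamp", "5 minutes"), "topic")
        .count()
)

(silver_counts.writeStream
    .outputMode("append")                                     # finalized windows are appended
                                                              # once the watermark passes them
    .option("checkpointLocation", "/mnt/chk/order_counts")    # required for state recovery
    .trigger(processingTime="1 minute")
    .toTable("lakehouse.silver.order_counts"))                # assumed Silver table
```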
Posted 2 weeks ago
4.0 - 9.0 years
9 - 19 Lacs
Noida, Hyderabad, Pune
Work from Office
Overview: As a Data Engineer, you will work with multiple teams to deliver solutions on the AWS Cloud using core cloud data engineering tools such as Databricks on AWS, AWS Glue, Amazon Redshift, Athena, and other Big Data-related technologies. This role focuses on building the next generation of application-level data platforms and improving recent implementations. Hands-on experience with Apache Spark (PySpark, SparkSQL), Delta Lake, Iceberg, and Databricks is essential.
Responsibilities:
• Define, design, develop, and test software components/applications using AWS-native data services: Databricks on AWS, AWS Glue, Amazon S3, Amazon Redshift, Athena, AWS Lambda, Secrets Manager.
• Build and maintain ETL/ELT pipelines for both batch and streaming data.
• Work with structured and unstructured datasets at scale.
• Apply data modeling principles and advanced SQL techniques.
• Implement and manage pipelines using Apache Spark (PySpark, SparkSQL) and Delta Lake/Iceberg formats (see the sketch after the Key Skills list).
• Collaborate with product teams to understand requirements and deliver optimized data solutions.
• Utilize CI/CD pipelines with DBX and AWS for continuous delivery and deployment of Databricks code.
• Work independently with minimal supervision and strong ownership of deliverables.
Must Have:
• 4+ years of experience in Data Engineering on AWS Cloud.
• Hands-on expertise in:
o Apache Spark (PySpark, SparkSQL)
o Delta Lake / Iceberg formats
o Databricks on AWS
o AWS Glue, Amazon Athena, Amazon Redshift
• Strong SQL skills and performance tuning experience on large datasets.
• Good understanding of CI/CD pipelines, especially using DBX and AWS tools.
• Experience with environment setup, cluster management, user roles, and authentication in Databricks.
• Certified as a Databricks Certified Data Engineer Professional (mandatory).
Good To Have:
• Experience migrating ETL pipelines from on-premise or other clouds to AWS Databricks.
• Experience with Databricks ML or Spark 3.x upgrades.
• Familiarity with Airflow, Step Functions, or other orchestration tools.
• Experience integrating Databricks with AWS services in a secured, production-ready environment.
• Experience with monitoring and cost optimization in AWS.
Key Skills:
• Languages: Python, SQL, PySpark
• Big Data Tools: Apache Spark, Delta Lake, Iceberg
• Databricks on AWS
• AWS Services: AWS Glue, Athena, Redshift, Lambda, S3, Secrets Manager
• Version Control & CI/CD: Git, DBX, AWS CodePipeline/CodeBuild
• Other: Data Modeling, ETL Methodology, Performance Optimization
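For orientation only, not part of the requirements: a minimal PySpark batch sketch of the S3 + Delta Lake pattern described above, as it might run on Databricks on AWS. The bucket, column names, and target table name are assumptions.

```python
# Hedged sketch: batch ETL on Databricks-on-AWS writing a partitioned Delta table
# backed by S3. Bucket, columns, and table names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()   # provided ambiently on Databricks

orders = (
    spark.read.json("s3://example-raw/orders/")        # assumed raw landing zone
        .dropDuplicates(["order_id"])                   # assumed business key
        .withColumn("order_date", F.to_date("order_ts"))  # assumed timestamp column
)

(orders.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")                          # partition pruning for large scans
    .option("overwriteSchema", "true")
    .saveAsTable("analytics.orders"))                   # queryable from SQL and downstream jobs
```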
Posted 1 month ago