
6 Glue Catalog Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

10.0 - 14.0 years

0 Lacs

Pune, Maharashtra

On-site

About the Team

As part of the DoorDash organization, you will be joining a data-driven team that values timely, accurate, and reliable data for informed business and product decisions. Data is the foundation of DoorDash's success, and the Data Engineering team is responsible for building database solutions tailored to use cases such as reporting, product analytics, marketing optimization, and financial reporting. By implementing robust data structures and data warehouse architecture, this team plays a crucial role in facilitating decision-making processes at DoorDash. The team also enhances the developer experience by developing tools that support the organization's high-velocity demands.

About the Role

DoorDash is seeking a dedicated Data Engineering Manager to lead the development of enterprise-scale data solutions. In this role, you will serve as a technical expert on all aspects of data architecture, empowering data engineers, data scientists, and DoorDash partners. Your responsibilities will include fostering a culture of engineering excellence, enabling engineers to deliver reliable and flexible solutions at scale. You will also be instrumental in building and nurturing a high-performing team, driving innovation and success in a dynamic and fast-paced environment.

In this role, you will:
- Lead and manage a team of data engineers, focusing on hiring, building, growing, and nurturing impactful business-focused data teams.
- Drive the technical and strategic vision for embedded pods and foundational enablers to meet current and future scalability and interoperability needs.
- Strive for continuous improvement of data architecture and development processes.
- Balance quick wins with long-term strategy and engineering excellence, breaking down large systems into user-friendly data assets and reusable components.
- Collaborate cross-functionally with stakeholders, external partners, and peer data leaders.
- Use effective planning and execution tools to ensure short-term and long-term team and stakeholder success.
- Prioritize reliability and quality as essential components of data solutions.

Qualifications:
- Bachelor's, Master's, or Ph.D. in Computer Science or an equivalent field.
- Over 10 years of experience in data engineering, data platform, or related domains.
- Minimum of 2 years of hands-on management experience.
- Strong communication and leadership skills, with a track record of hiring and growing teams in a fast-paced environment.
- Proficiency in programming languages such as Python, Kotlin, and SQL.
- Prior experience with technologies like Snowflake, Databricks, Spark, Trino, and Pinot.
- Familiarity with the AWS ecosystem and large-scale batch/real-time ETL orchestration using tools like Airflow, Kafka, and Spark Streaming.
- Knowledge of data lake file formats including Delta Lake, Apache Iceberg, Glue Catalog, and S3.
- Proficiency in system design and experience with AI solutions in the data space.

At DoorDash, we are dedicated to fostering a diverse and inclusive community within our company and beyond. We believe that innovation thrives in an environment where individuals from diverse backgrounds, experiences, and perspectives come together. We are committed to providing equal opportunities for all and creating an inclusive workplace where everyone can excel and contribute to our collective success.
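
This listing's qualifications name Airflow, Kafka, and Spark Streaming together. As a hedged illustration of how those pieces typically fit (the broker address, topic, schema, and S3 paths below are invented placeholders, not DoorDash systems), a minimal Spark Structured Streaming job reading Kafka events and landing Parquet on S3 might look like this:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Expected shape of each Kafka message payload (illustrative fields).
schema = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
    StructField("updated_at", TimestampType()),
])

# Read the raw event stream from Kafka (hypothetical broker and topic).
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "order-events")
    .load()
)

# Parse the JSON payload into typed columns.
parsed = events.select(
    from_json(col("value").cast("string"), schema).alias("e")
).select("e.*")

# Append the parsed records to an S3 location; the checkpoint directory
# lets the job resume exactly-once after restarts.
query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3a://example-bucket/order_events/")
    .option("checkpointLocation", "s3a://example-bucket/_checkpoints/order_events/")
    .start()
)
query.awaitTermination()
```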

Posted 4 days ago

Apply

4.0 - 8.0 years

0 Lacs

Maharashtra

On-site

As an AWS DataOps Lead at Birlasoft, you will be responsible for configuring, deploying, monitoring, and managing AWS data platforms. Your role will involve managing data flows and dispositions in S3, Snowflake, and Postgres. You will also be in charge of user access and authentication on AWS, ensuring proper resource provisioning, security, and compliance. Experience with GitHub integration will be valuable in this role, and familiarity with AWS native tools like Glue, Glue Catalog, CloudWatch, and CloudFormation (or Terraform) is essential. You will also play a key part in assisting with backup and disaster recovery processes.

Join our team and be a part of Birlasoft's commitment to leveraging Cloud, AI, and Digital technologies to empower societies worldwide and enhance business efficiency and productivity. With over 12,000 professionals and a rich heritage spanning 170 years, we are dedicated to building sustainable communities and driving innovation through our consultative and design-thinking approach.
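
Much of the Glue Catalog administration this role describes is scriptable with boto3, the AWS SDK for Python. Below is a minimal sketch, assuming a hypothetical database name, region, and audit purpose, of provisioning a catalog database and enumerating its tables:

```python
import boto3

# Region is an assumption for the example.
glue = boto3.client("glue", region_name="ap-south-1")

# Ensure a catalog database exists for the landing zone (hypothetical name).
try:
    glue.create_database(DatabaseInput={"Name": "landing_zone"})
except glue.exceptions.AlreadyExistsException:
    pass  # Already provisioned; nothing to do.

# Enumerate registered tables, e.g. for a freshness or access audit.
paginator = glue.get_paginator("get_tables")
for page in paginator.paginate(DatabaseName="landing_zone"):
    for table in page["TableList"]:
        print(table["Name"], table.get("UpdateTime"))
```

The same resources would more commonly live in CloudFormation or Terraform for repeatability, as the listing suggests; the script form is handy for one-off audits.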

Posted 1 week ago

Apply

10.0 - 15.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Are you a skilled Data Architect with a passion for tackling intricate data challenges from various structured and unstructured sources? Do you excel in crafting micro data lakes and spearheading data strategies at an enterprise level? If this sounds like you, we are eager to learn more about your expertise.

In this role, you will be responsible for designing and constructing tailored micro data lakes specifically catered to the lending domain. Your tasks will include defining and executing enterprise data strategies encompassing modeling, lineage, and governance. You will play a crucial role in architecting robust data pipelines for both batch and real-time data ingestion, as well as devising strategies for extracting, transforming, and storing data from diverse sources like APIs, PDFs, logs, and databases. Furthermore, you will be instrumental in establishing best practices related to data quality, metadata management, and data lifecycle control. Your hands-on involvement in implementing processes, strategies, and tools will be pivotal in creating innovative products. Collaboration with engineering and product teams to align data architecture with overarching business objectives will be a key aspect of your role.

To excel in this position, you should bring over 10 years of experience in data architecture and engineering. A deep understanding of both structured and unstructured data ecosystems is essential, along with practical experience in ETL, ELT, stream processing, querying, and data modeling. Proficiency in tools and languages such as Spark, Kafka, Airflow, SQL, Amundsen, Glue Catalog, and Python is a must. Additionally, expertise in cloud-native data platforms like AWS, Azure, or GCP is highly desirable, along with a solid foundation in data governance, privacy, and compliance standards. While exposure to the lending domain, ML pipelines, or AI integrations is considered advantageous, a background in fintech, lending, or regulatory data environments is also beneficial.

This role offers you the chance to lead data-first transformation, develop products that drive AI adoption, and the autonomy to design, build, and scale modern data architecture. You will be part of a forward-thinking, collaborative, and tech-driven culture with access to cutting-edge tools and technologies in the data ecosystem. If you are ready to shape the future of data with us, we encourage you to apply for this exciting opportunity based in Chennai and join us in redefining data architecture for structured and unstructured data sources.
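
For context on the batch-ingestion responsibilities above, here is a small illustrative PySpark sketch. The bucket paths, column names, and dedup key are assumptions invented for the example, not details from this employer:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lending-logs-batch").getOrCreate()

# Ingest raw JSON application logs (hypothetical path and layout).
logs = spark.read.json("s3a://example-bucket/raw/app_logs/")

# Standardise: derive a partition date and drop duplicate events.
curated = (
    logs.withColumn("event_date", F.to_date("timestamp"))
        .dropDuplicates(["event_id"])
)

# Write partitioned Parquet for downstream consumers of the micro data lake.
(curated.write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3a://example-bucket/curated/app_logs/"))
```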

Posted 2 weeks ago

Apply

10.0 - 15.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Are you a hands-on Data Architect who excels at tackling intricate data challenges within structured and unstructured sources? Are you passionate about crafting micro data lakes and spearheading enterprise-wide data strategies? If this resonates with you, we are eager to learn more about your expertise.

In this role, you will be responsible for designing and constructing tailored micro data lakes specific to the lending domain. Additionally, you will play a key role in defining and executing enterprise data strategies encompassing modeling, lineage, and governance. Your tasks will involve architecting and implementing robust data pipelines for both batch and real-time data ingestion, as well as devising strategies for extracting, transforming, and storing data from various sources such as APIs, PDFs, logs, and databases. Establishing best practices for data quality, metadata management, and data lifecycle control will also be part of your core responsibilities. Collaboration with engineering and product teams to align data architecture with business objectives will be crucial, as will evaluating and integrating modern data platforms and tools like Databricks, Spark, Kafka, Snowflake, AWS, GCP, and Azure. Furthermore, you will mentor data engineers and promote engineering excellence in data practices.

The ideal candidate should possess a minimum of 10 years of experience in data architecture and engineering, along with a profound understanding of structured and unstructured data ecosystems. Hands-on proficiency in ETL, ELT, stream processing, querying, and data modeling is essential, as is expertise in tools and languages such as Spark, Kafka, Airflow, SQL, Amundsen, Glue Catalog, and Python. Familiarity with cloud-native data platforms like AWS, Azure, or GCP is required, alongside a solid foundation in data governance, privacy, and compliance standards. A strategic mindset coupled with the ability to execute hands-on tasks when necessary is highly valued. While exposure to the lending domain, ML pipelines, or AI integrations is considered advantageous, a background in fintech, lending, or regulatory data environments is also beneficial.

As part of our team, you will have the opportunity to lead data-first transformation and develop products that drive AI adoption. You will enjoy the autonomy to design, build, and scale modern data architecture within a forward-thinking, collaborative, and tech-driven culture. Additionally, you will have access to the latest tools and technologies in the data ecosystem.

Location: Chennai | Experience: 10-15 Years | Full-Time | Work From Office

If you are ready to shape the future of data alongside us, we invite you to apply now and embark on this exciting journey!
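
This listing also emphasizes Airflow orchestration. A minimal hypothetical DAG skeleton (the DAG id, schedule, and task bodies are placeholder assumptions) shows how the extract/transform/load stages it mentions are commonly wired:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    """Pull raw records from source APIs and logs (placeholder)."""

def transform():
    """Clean and conform the extracted records (placeholder)."""

def load():
    """Write curated output into the micro data lake (placeholder)."""

with DAG(
    dag_id="lending_micro_lake_daily",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # `schedule` keyword assumes Airflow 2.4+
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency: extract, then transform, then load.
    t_extract >> t_transform >> t_load
```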

Posted 2 weeks ago

Apply

7.0 - 12.0 years

15 - 25 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Hybrid

Job Summary:
We are seeking a highly motivated and experienced Senior Data Engineer to join our team. This role requires a deep curiosity about our business and a passion for technology and innovation. You will be responsible for designing and developing robust, scalable data engineering solutions that drive our business intelligence and data-driven decision-making processes. If you thrive in a dynamic environment and have a strong desire to deliver top-notch data solutions, we want to hear from you.

Key Responsibilities:
- Collaborate with agile teams to design and develop cutting-edge data engineering solutions.
- Build and maintain distributed, low-latency, and reliable data pipelines, ensuring high availability and timely delivery of data.
- Design and implement optimized data engineering solutions for Big Data workloads to handle increasing data volumes and complexities.
- Develop high-performance real-time data ingestion solutions for streaming workloads.
- Adhere to best practices and established design patterns across all data engineering initiatives.
- Ensure code quality through elegant design, efficient coding, and performance optimization.
- Focus on data quality and consistency by implementing monitoring processes and systems.
- Produce detailed design and test documentation, including Data Flow Diagrams, Technical Design Specs, and Source to Target Mapping documents.
- Perform data analysis to troubleshoot and resolve data-related issues.
- Automate data engineering pipelines and data validation processes to eliminate manual interventions (see the sketch after this listing).
- Implement data security and privacy measures, including access controls, key management, and encryption techniques.
- Stay updated on technology trends, experiment with new tools, and educate team members.
- Collaborate with analytics and business teams to improve data models and enhance data accessibility.
- Communicate effectively with both technical and non-technical stakeholders.

Qualifications:
- Education: Bachelor's degree in Computer Science, Computer Engineering, or a related field.
- Experience: 5+ years in architecting, designing, and building data engineering solutions and data platforms.
- Proven experience in building Lakehouses or Data Warehouses on platforms like Databricks or Snowflake.
- Expertise in designing and building highly optimized batch/streaming data pipelines using Databricks.
- Proficiency with data acquisition and transformation tools such as Fivetran and DBT.
- Strong experience in building efficient data engineering pipelines using Python and PySpark.
- Experience with distributed data processing frameworks such as Apache Hadoop, Apache Spark, or Flink.
- Familiarity with real-time data stream processing using tools like Apache Kafka, Kinesis, or Spark Structured Streaming.
- Experience with various AWS services, including S3, EC2, EMR, Lambda, RDS, DynamoDB, Redshift, and Glue Catalog.
- Expertise in advanced SQL programming and performance tuning.

Key Skills:
- Strong problem-solving abilities and perseverance in the face of ambiguity.
- Excellent emotional intelligence and interpersonal skills.
- Ability to build and maintain productive relationships with internal and external stakeholders.
- A self-starter mentality with a focus on growth and quick learning.
- Passion for operational products and creating outstanding employee experiences.
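
On the automated data-validation responsibility above, here is a hedged sketch (the table path, key column, and invariants are assumptions for illustration) of a PySpark check that asserts key integrity before a dataset is published:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Curated dataset awaiting publication (hypothetical path).
df = spark.read.parquet("s3a://example-bucket/curated/orders/")

# Invariant 1: the business key must never be null.
null_keys = df.filter(F.col("order_id").isNull()).count()

# Invariant 2: the business key must be unique.
dupes = df.count() - df.dropDuplicates(["order_id"]).count()

# Fail the pipeline run loudly rather than publishing bad data.
assert null_keys == 0, f"{null_keys} rows missing order_id"
assert dupes == 0, f"{dupes} duplicate order_id values"
```

In practice such checks would run as a pipeline stage (e.g., an Airflow task) so a failure blocks downstream consumers automatically.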

Posted 1 month ago

Apply

9.0 - 14.0 years

25 - 40 Lacs

Bengaluru

Hybrid

Greetings from tsworks Technologies India Pvt. We are hiring a Sr. Data Engineer - Snowflake with AWS. If you are interested, please share your CV with mohan.kumar@tsworks.io.

Position: Senior Data Engineer
Experience: 9+ Years
Location: Bengaluru, India (Hybrid)

Mandatory Required Qualifications:
- Strong proficiency in AWS data services such as S3 buckets, Glue and Glue Catalog, EMR, Athena, Redshift, DynamoDB, QuickSight, etc.
- Strong hands-on experience building Data Lakehouse solutions on Snowflake, using features such as streams, tasks, dynamic tables, data masking, and data exchange.
- Hands-on experience using scheduling tools such as Apache Airflow, DBT, and AWS Step Functions, and data governance products such as Collibra.
- Expertise in DevOps and CI/CD implementation.
- Excellent communication skills.

In This Role, You Will:
- Design, implement, and manage scalable and efficient data architecture on the AWS cloud platform.
- Develop and maintain data pipelines for efficient data extraction, transformation, and loading (ETL) processes.
- Perform complex data transformations and processing using PySpark (AWS Glue, EMR, or Databricks), Snowflake's data processing capabilities, or other relevant tools.
- Work hands-on with Data Lake solutions such as Apache Hudi, Delta Lake, or Iceberg.
- Develop and maintain data models within Snowflake and related tools to support reporting, analytics, and business intelligence needs.
- Collaborate with cross-functional teams to understand data requirements and design appropriate data integration solutions.
- Integrate data from various sources, both internal and external, ensuring data quality and consistency.

Skills & Knowledge:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 9+ years of experience in Information Technology, designing, developing, and executing solutions.
- 4+ years of hands-on experience designing and executing data solutions on AWS and Snowflake cloud platforms as a Data Engineer.
- Strong proficiency in AWS services such as Glue, EMR, and Athena, plus Databricks, with file formats such as Parquet and Avro.
- Hands-on experience in data modelling and in batch and real-time pipelines, using Python, Java, or JavaScript, and experience working with RESTful APIs.
- Hands-on experience handling real-time data streams from Kafka or Kinesis.
- Expertise in DevOps and CI/CD implementation.
- Hands-on experience with SQL and NoSQL databases.
- Hands-on experience in data modelling, implementation, and management of OLTP and OLAP systems.
- Knowledge of data quality, governance, and security best practices.
- Familiarity with machine learning concepts and integration of ML pipelines into data workflows.
- Hands-on experience working in an Agile setting.
- Self-driven, naturally curious, and able to adapt to a fast-paced work environment.
- Able to articulate, create, and maintain technical and non-technical documentation.
- AWS and Snowflake Certifications are preferred.
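
The Snowflake streams and tasks this posting highlights pair a change-capture stream with a scheduled task that drains it. A minimal sketch via the Snowflake Python connector follows; the account, user, warehouse, table names, and schedule are all placeholder assumptions, not tsworks systems:

```python
import os

import snowflake.connector

# Connection details are illustrative; source credentials from a secrets store.
conn = snowflake.connector.connect(
    account="xy12345",  # hypothetical account locator
    user="ETL_SVC",     # hypothetical service user
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)
cur = conn.cursor()

# A stream records row-level changes (inserts/updates/deletes) on the source table.
cur.execute("CREATE STREAM IF NOT EXISTS orders_stream ON TABLE orders")

# A task drains the stream into a curated table on a fixed schedule.
cur.execute("""
    CREATE TASK IF NOT EXISTS merge_orders
      WAREHOUSE = LOAD_WH
      SCHEDULE = '5 MINUTE'
    AS
      INSERT INTO curated_orders
      SELECT order_id, status, updated_at FROM orders_stream
""")

# Tasks are created suspended, so resume it to start the schedule.
cur.execute("ALTER TASK merge_orders RESUME")
conn.close()
```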

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
