8.0 - 13.0 years
25 - 40 Lacs
Chennai
Work from Office
Architect & Build Scalable Systems: Design and implement petabyte-scale lakehouse architectures to unify data lakes and warehouses.
Real-Time Data Engineering: Develop and optimize streaming pipelines using Kafka, Pulsar, and Flink.
Required Candidate Profile:
- Data engineering experience with large-scale systems
- Expert proficiency in Java for data-intensive applications
- Hands-on experience with lakehouse architectures, stream processing, and event streaming
Posted 5 days ago
6.0 - 11.0 years
25 - 30 Lacs
Bengaluru
Hybrid
Mandatory Skills: Data engineering, AWS Athena, AWS Glue, Redshift, data lake, lakehouse, Python, SQL Server
Must-Have Experience:
- 6+ years of hands-on data engineering experience
- Expertise with AWS services: S3, Redshift, EMR, Glue, Kinesis, DynamoDB
- Building batch and real-time data pipelines
- Python and SQL coding for data processing and analysis
- Data modeling experience using cloud-based data platforms such as Redshift, Snowflake, or Databricks
- Designing and developing ETL frameworks
Nice-to-Have Experience:
- ETL development using tools like Informatica, Talend, or Fivetran
- Creating reusable data sources and dashboards for self-service analytics
- Experience using Databricks for Spark workloads, or Snowflake
- Working knowledge of big data processing
- CI/CD setup
- Infrastructure-as-code implementation
- Any one of the AWS Professional Certifications
Posted 1 week ago
4.0 - 8.0 years
5 - 12 Lacs
Bengaluru
Work from Office
If interested, apply here: https://forms.gle/sBcZaUXpkttdrTtH9
Key Responsibilities:
- Work with Product Owners and stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions, and design the scale-out architecture for the data platform to meet the requirements of the proposed solution.
- Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques, and business strategies.
- Play an active role in leading team meetings and workshops with clients.
- Help the Data Engineering team produce high-quality code that allows us to put solutions into production.
- Create and own the technical product backlogs for data projects, and help the team close the backlogs on time.
- Help us shape the next generation of our products.
- Assess the effectiveness and accuracy of new data sources and data-gathering techniques.
- Lead data mining and collection procedures.
- Ensure data quality and integrity.
- Interpret and analyze data problems.
- Develop custom data models and algorithms to apply to data sets.
- Coordinate with different functional teams to implement models and monitor outcomes.
- Develop processes and tools to monitor and analyze model performance and data accuracy.
- Understand client requirements and architect robust data platforms on multiple cloud technologies.
- Create reusable and scalable data pipelines.
- Work with DE/DA/ETL/QA/Application and various other teams to remove roadblocks.
- Align data projects with organizational goals.
Skills & Qualifications:
- 4-7 years of experience working through large data engineering projects.
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
- Strong problem-solving skills with an emphasis on product development.
- Domain: big data, data platforms, distributed systems.
- Coding in any language (Java/Scala/Python) with strong knowledge of Spark (most important requirement).
- Ingestion skills: one of Apache Storm, Flink, or Spark.
- Streaming skills: one of Kafka, Kinesis, oplogs, binlogs, or Debezium.
- Database skills: HDFS, Delta Lake/Iceberg, lakehouse.
If interested, apply here: https://forms.gle/sBcZaUXpkttdrTtH9
Posted 3 weeks ago
3 - 6 years
10 - 18 Lacs
Hyderabad
Hybrid
Primary Responsibilities:
- Querying data structures within the Databricks Lakehouse (Bronze, Silver, and Gold layers)
- Interim reporting from the legacy SQL Server platform (in transition)
- Collaboration with the global engineering team supporting Genoa Data & Reporting
- Reporting with Power BI and Fabric
- Learning about the pharmacy business and how it maps to Genoa data
- Production support of the Genoa data warehouse platform on SQL Server
- Assisting with migration of data from on-premises SQL Server to the Azure Databricks Lakehouse
- Complying with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, changes in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Required Qualifications:
- Undergraduate degree or equivalent experience
- Databricks experience, preferably with Lakehouse
- Report-building experience with Power BI
- Proven curious nature and willingness to grow and learn
- Proven desire to be a supportive teammate
Preferred Qualifications:
- AWS Glue, Fabric, SQL Server (legacy platform), Azure Data Factory
Posted 2 months ago
3 - 6 years
15 - 20 Lacs
Hyderabad
Work from Office
Job Overview
The person will be responsible for expanding and optimizing our data and data pipeline architecture. The ideal candidate is an experienced data pipeline builder who enjoys optimizing data systems and building them from the ground up.
You'll be responsible for:
- Creating and maintaining optimal data pipeline architecture, and assembling large, complex data sets that meet functional and non-functional business requirements.
- Identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Building the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud technologies.
- Building analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Creating data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
- Working with data and analytics experts to strive for greater functionality in our data systems.
What you'd have:
We are looking for a candidate with 3+ years of experience in a Data Engineer role who holds a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field, and has experience with the following software/tools:
- Data pipeline and workflow management tools: Apache Airflow, NiFi, Talend, etc.
- Relational SQL and NoSQL databases, including ClickHouse, Postgres, and MySQL.
- Stream-processing systems: Storm, Spark Streaming, Kafka, etc.
- Object-oriented/functional scripting languages: Python, Scala, etc.
- Building and optimizing data pipelines, architectures, and data sets.
- Advanced working SQL knowledge: experience with relational databases and query authoring, as well as working familiarity with a variety of databases.
- Strong analytic skills related to working with unstructured datasets.
- Building processes supporting data transformation, data structures, metadata, dependency, and workload management.
- Working knowledge of message queuing, stream processing, and highly scalable data stores.
Why join us?
- Impactful Work: Play a pivotal role in safeguarding Tanla's assets, data, and reputation in the industry.
- Tremendous Growth Opportunities: Be part of a rapidly growing company in the telecom and CPaaS space, with opportunities for professional development.
- Innovative Environment: Work alongside a world-class team in a challenging and fun environment where innovation is celebrated.
Tanla is an equal opportunity employer. We champion diversity and are committed to creating an inclusive environment for all employees. www.tanla.com
Posted 2 months ago
6 - 10 years
8 - 13 Lacs
Bengaluru
Work from Office
Mandatory Skills: Data engineering, AWS Athena, AWS Glue, Redshift, data lake, lakehouse, Python, SQL Server
Must-Have Experience:
- 6+ years of hands-on data engineering experience
- Expertise with AWS services: S3, Redshift, EMR, Glue, Kinesis, DynamoDB
- Building batch and real-time data pipelines
- Python and SQL coding for data processing and analysis
- Data modeling experience using cloud-based data platforms such as Redshift, Snowflake, or Databricks
- Designing and developing ETL frameworks
Nice-to-Have Experience:
- ETL development using tools like Informatica, Talend, or Fivetran
- Creating reusable data sources and dashboards for self-service analytics
- Experience using Databricks for Spark workloads, or Snowflake
- Working knowledge of big data processing
- CI/CD setup
- Infrastructure-as-code implementation
- Any one of the AWS Professional Certifications
Posted 3 months ago
10 - 15 years
40 - 60 Lacs
Hyderabad
Hybrid
Strong experience in building data lakes using the AWS cloud platform stack. Proficiency with AWS technologies such as S3, EC2, Glue/Lake Formation (or EMR), QuickSight, Redshift, Athena, Airflow (or Lambda + Step Functions + EventBridge), Data, and IAM.
Posted 3 months ago