
271 Data Engineer Jobs - Page 7

Set up a Job Alert
JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

4.0 - 9.0 years

7 - 17 Lacs

Mumbai, Navi Mumbai, Mumbai (All Areas)

Work from Office

Role & responsibilities
Strong, hands-on proficiency with Snowflake:
- In-depth knowledge of Snowflake architecture and features (e.g., Snowpipe, Tasks, Streams, Time Travel, Zero-Copy Cloning).
- Experience in designing and implementing Snowflake data models (schemas, tables, views).
- Expertise in writing and optimizing complex SQL queries in Snowflake.
- Experience with data loading and unloading techniques in Snowflake.
Solid experience with AWS Cloud services:
- Proficiency in using AWS S3 for data storage, staging, and as a landing zone for Snowflake.
- Experience with other relevant AWS services (e.g., IAM for security, Lambda for serverless processing, Glue for ETL, if applicable).
Strong experience in designing and building ETL/ELT data pipelines.
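
For illustration only: a minimal sketch of the S3-to-Snowflake loading pattern this posting describes, using the snowflake-connector-python package. The account, stage, table, and warehouse names are hypothetical placeholders, not details from the posting.

import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",       # hypothetical account identifier
    user="etl_user",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)
try:
    cur = conn.cursor()
    # COPY INTO pulls staged S3 files into a landing table; Snowpipe automates
    # the same statement when new files arrive in the stage.
    cur.execute("""
        COPY INTO RAW.ORDERS
        FROM @ext_s3_stage/orders/
        FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
        ON_ERROR = 'CONTINUE'
    """)
finally:
    conn.close()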

Posted 1 month ago

Apply

3.0 - 8.0 years

3 - 5 Lacs

Hyderabad

Work from Office

Key Skills: Data Engineer, Python.

Roles and Responsibilities:
- Develop and maintain scalable data pipelines using Python and PySpark.
- Design and implement data lake and data warehouse solutions to support business intelligence and analytics needs.
- Work extensively on the Databricks platform for data processing and transformation.
- Write complex SQL queries and build efficient data models to support analytics and reporting.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions.
- Ensure data quality, consistency, and reliability across various sources and destinations.
- Troubleshoot and resolve issues in data ingestion, transformation, and delivery processes.
- Lead and mentor junior data engineers, ensuring adherence to best practices and coding standards.

Experience Requirement:
- 3-8 years of experience with data warehousing and data lake architectures.
- Extensive hands-on experience with the Databricks platform.
- Proven expertise in SQL and data modeling.
- Strong proficiency in Python and PySpark.
- Excellent problem-solving and analytical skills.
- Demonstrated experience in leading and mentoring teams.

Education: Any Graduation.
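
For illustration only: a minimal PySpark sketch of the kind of Databricks pipeline this posting describes, reading raw files, applying transformations, and writing a curated Delta table. Paths and column names are hypothetical.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

raw = spark.read.json("/mnt/datalake/raw/orders/")          # hypothetical path
curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)
# On Databricks the Delta format is available out of the box.
curated.write.format("delta").mode("overwrite").save("/mnt/datalake/curated/orders/")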

Posted 1 month ago

Apply

7.0 - 12.0 years

40 - 45 Lacs

Bengaluru

Hybrid

Role & responsibilities: Data Engineer with architect-level experience in ETL, AWS (Glue), PySpark, Python, etc.
Preferred candidate profile: Immediate joiners who can work on a contract basis. If you are interested, please share your updated CV at pavan.teja@careernet.in

Posted 1 month ago

Apply

5.0 - 9.0 years

25 - 35 Lacs

Kochi, Chennai, Bengaluru

Work from Office

Experienced Data Engineer (Python, PySpark, ADB, ADF, Azure, Snowflake). Data Science candidates can also apply.

Posted 1 month ago

Apply

3.0 - 6.0 years

10 - 15 Lacs

Hyderabad

Remote

Roles and Responsibilities:
We are looking for an experienced AWS Cloud Data Engineer to join our Data Science & Analytics team to build, optimize, and maintain cloud-based data solutions. The ideal candidate will possess strong technical knowledge in data engineering on AWS, expertise in data integration, pipeline creation, and performance optimization, and a strong understanding of DevOps methodologies.
- Design, develop, and deploy scalable, high-performance data pipelines and AWS infrastructure solutions.
- Implement data solutions utilizing AWS services such as S3, Glue, Redshift, EMR, Athena, and Kinesis.
- Optimize data storage, processing, and query performance to ensure efficiency and reliability.
- Maintain and enhance ETL processes, including data extraction, transformation, and loading using AWS Glue and Lambda.
- Ensure data quality, security, compliance, and governance are integrated throughout data workflows.
- Collaborate closely with data scientists, analysts, and application developers to meet data needs.
- Monitor and troubleshoot data pipelines and infrastructure proactively.
- Document data and cloud architectures, processes, and standard operating procedures clearly and comprehensively.

Required Qualifications:
- Bachelor's degree in Computer Science, IT, or a related technical field.
- 3-5+ years of experience working as a Data Engineer, particularly with AWS cloud infrastructure.
- AWS Certified Data Engineering - Specialty or similar certifications preferred.
- Proficiency in AWS data services including S3, Glue, Lambda, Redshift, Athena, Kinesis, and EMR.
- Strong expertise in building data pipelines using Python, PySpark, or SQL.
- Experience with big data technologies and frameworks (e.g., Hadoop, Spark).
- Demonstrable skills with infrastructure-as-code tools like Terraform or CloudFormation.
- Experience with containerization technologies like Docker and Kubernetes.

Preferred Qualifications:
- Familiarity with data lake architectures and lakehouse implementations.
- Experience with data visualization and reporting tools (QuickSight, Tableau, Power BI).
- Understanding of DevOps methodologies, CI/CD pipelines, and Agile development practices.

Competencies:
- Analytical mindset with keen attention to detail.
- Strong problem-solving and troubleshooting capabilities.
- Excellent collaboration and communication skills.
- Proactive learner with a commitment to continuous professional development.
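
For illustration only: a minimal sketch of an AWS Glue PySpark job of the sort this posting describes, reading a Glue Data Catalog table, cleaning it, and writing Parquet back to S3 for Athena. Database, table, and bucket names are hypothetical.

import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw table registered in the Glue Data Catalog (hypothetical names).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="events"
)
df = dyf.toDF().dropDuplicates(["event_id"]).filter("event_ts IS NOT NULL")

# Write the cleaned data back to S3 as Parquet for downstream Athena queries.
df.write.mode("overwrite").parquet("s3://my-curated-bucket/events/")
job.commit()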

Posted 1 month ago

Apply

3.0 - 6.0 years

10 - 15 Lacs

Chennai

Remote

Roles and Responsibilities:
We are looking for an experienced AWS Cloud Data Engineer to join our Data Science & Analytics team to build, optimize, and maintain cloud-based data solutions. The ideal candidate will possess strong technical knowledge in data engineering on AWS, expertise in data integration, pipeline creation, and performance optimization, and a strong understanding of DevOps methodologies.
- Design, develop, and deploy scalable, high-performance data pipelines and AWS infrastructure solutions.
- Implement data solutions utilizing AWS services such as S3, Glue, Redshift, EMR, Athena, and Kinesis.
- Optimize data storage, processing, and query performance to ensure efficiency and reliability.
- Maintain and enhance ETL processes, including data extraction, transformation, and loading using AWS Glue and Lambda.
- Ensure data quality, security, compliance, and governance are integrated throughout data workflows.
- Collaborate closely with data scientists, analysts, and application developers to meet data needs.
- Monitor and troubleshoot data pipelines and infrastructure proactively.
- Document data and cloud architectures, processes, and standard operating procedures clearly and comprehensively.

Required Qualifications:
- Bachelor's degree in Computer Science, IT, or a related technical field.
- 3-5+ years of experience working as a Data Engineer, particularly with AWS cloud infrastructure.
- AWS Certified Data Engineering - Specialty or similar certifications preferred.
- Proficiency in AWS data services including S3, Glue, Lambda, Redshift, Athena, Kinesis, and EMR.
- Strong expertise in building data pipelines using Python, PySpark, or SQL.
- Experience with big data technologies and frameworks (e.g., Hadoop, Spark).
- Demonstrable skills with infrastructure-as-code tools like Terraform or CloudFormation.
- Experience with containerization technologies like Docker and Kubernetes.

Preferred Qualifications:
- Familiarity with data lake architectures and lakehouse implementations.
- Experience with data visualization and reporting tools (QuickSight, Tableau, Power BI).
- Understanding of DevOps methodologies, CI/CD pipelines, and Agile development practices.

Competencies:
- Analytical mindset with keen attention to detail.
- Strong problem-solving and troubleshooting capabilities.
- Excellent collaboration and communication skills.
- Proactive learner with a commitment to continuous professional development.

Posted 1 month ago

Apply

3.0 - 6.0 years

10 - 15 Lacs

Bengaluru

Remote

Roles and Responsibilities:
We are looking for an experienced AWS Cloud Data Engineer to join our Data Science & Analytics team to build, optimize, and maintain cloud-based data solutions. The ideal candidate will possess strong technical knowledge in data engineering on AWS, expertise in data integration, pipeline creation, and performance optimization, and a strong understanding of DevOps methodologies.
- Design, develop, and deploy scalable, high-performance data pipelines and AWS infrastructure solutions.
- Implement data solutions utilizing AWS services such as S3, Glue, Redshift, EMR, Athena, and Kinesis.
- Optimize data storage, processing, and query performance to ensure efficiency and reliability.
- Maintain and enhance ETL processes, including data extraction, transformation, and loading using AWS Glue and Lambda.
- Ensure data quality, security, compliance, and governance are integrated throughout data workflows.
- Collaborate closely with data scientists, analysts, and application developers to meet data needs.
- Monitor and troubleshoot data pipelines and infrastructure proactively.
- Document data and cloud architectures, processes, and standard operating procedures clearly and comprehensively.

Required Qualifications:
- Bachelor's degree in Computer Science, IT, or a related technical field.
- 3-5+ years of experience working as a Data Engineer, particularly with AWS cloud infrastructure.
- AWS Certified Data Engineering - Specialty or similar certifications preferred.
- Proficiency in AWS data services including S3, Glue, Lambda, Redshift, Athena, Kinesis, and EMR.
- Strong expertise in building data pipelines using Python, PySpark, or SQL.
- Experience with big data technologies and frameworks (e.g., Hadoop, Spark).
- Demonstrable skills with infrastructure-as-code tools like Terraform or CloudFormation.
- Experience with containerization technologies like Docker and Kubernetes.

Preferred Qualifications:
- Familiarity with data lake architectures and lakehouse implementations.
- Experience with data visualization and reporting tools (QuickSight, Tableau, Power BI).
- Understanding of DevOps methodologies, CI/CD pipelines, and Agile development practices.

Competencies:
- Analytical mindset with keen attention to detail.
- Strong problem-solving and troubleshooting capabilities.
- Excellent collaboration and communication skills.
- Proactive learner with a commitment to continuous professional development.

Posted 1 month ago

Apply

5.0 - 10.0 years

18 - 25 Lacs

Bengaluru

Hybrid

Skill required: Data Engineer - Azure
Designation: Sr Analyst / Consultant
Job Location: Bengaluru
Qualifications: BE/BTech
Years of Experience: 4 - 11 years

Overall purpose of job: Understand client requirements and build ETL solutions using Azure Data Factory, Azure Databricks, and PySpark. Build solutions in such a way that they can absorb client change requests easily. Find innovative ways to accomplish tasks and handle multiple projects simultaneously and independently. Work with data and the appropriate teams to effectively source required data. Identify data gaps and work with client teams to effectively communicate the findings to stakeholders/clients.

Responsibilities:
- Develop ETL solutions to populate a centralized repository by integrating data from various data sources.
- Create data pipelines, data flows, and data models according to the business requirement.
- Implement all transformations according to business needs.
- Identify data gaps in the data lake and work with relevant data/client teams to get the data required for dashboarding/reporting.
- Strong experience working on the Azure data platform, Azure Data Factory, and Azure Databricks.
- Strong experience working on ETL components and scripting languages like PySpark and Python.
- Experience in creating pipelines, alerts, email notifications, and scheduled jobs.
- Exposure to development/staging/production environments.
- Provide support in creating, monitoring, and troubleshooting the scheduled jobs.
- Effectively work with the client and handle client interactions.

Skills Required:
- Bachelor's degree in Engineering or Science (or equivalent) with at least 4-11 years of overall experience in data management, including data integration, modeling, and optimization.
- Minimum 4 years of experience working on Azure cloud, Azure Data Factory, and Azure Databricks.
- Minimum 3-4 years of experience in PySpark, Python, etc. for data ETL.
- In-depth understanding of data warehouse and ETL concepts and modeling principles.
- Strong ability to design, build, and manage data.
- Strong understanding of data integration.
- Strong analytical and problem-solving skills.
- Strong communication and client-interaction skills.
- Ability to design databases that store the large volumes of data needed for reporting and dashboarding.
- Ability and willingness to acquire knowledge of new technologies; good analytical and interpersonal skills with the ability to interact with individuals at all levels.

Posted 1 month ago

Apply

8.0 - 13.0 years

25 - 40 Lacs

Chennai

Work from Office

Architect & Build Scalable Systems: Design and implement petabyte-scale lakehouse architectures to unify data lakes and warehouses.
Real-Time Data Engineering: Develop and optimize streaming pipelines using Kafka, Pulsar, and Flink.

Required candidate profile:
- Data engineering experience with large-scale systems.
- Expert proficiency in Java for data-intensive applications.
- Hands-on experience with lakehouse architectures, stream processing, and event streaming.

Posted 1 month ago

Apply

4.0 - 6.0 years

15 - 25 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Warm greetings from SP Staffing!!
Role: AWS Data Engineer
Experience Required: 4 to 6 years
Work Location: Bangalore/Pune/Hyderabad/Chennai
Required Skills: PySpark, AWS Glue
Interested candidates can send resumes to nandhini.spstaffing@gmail.com

Posted 1 month ago

Apply

6.0 - 10.0 years

15 - 20 Lacs

Pune

Work from Office

Education: Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.

Experience: 6-10 years
- 8+ years of experience in data engineering or a related field.
- Strong hands-on experience with Azure Databricks, Spark, Python/Scala, CI/CD, and scripting for data processing.
- Experience working with multiple file formats like Parquet, Delta, and Iceberg.
- Knowledge of Kafka or similar streaming technologies for real-time data ingestion.
- Experience with data governance and data security in Azure.
- Proven track record of building large-scale data ingestion and ETL pipelines in cloud environments, specifically Azure.
- Deep understanding of Azure Data Services (e.g., Azure Blob Storage, Azure Data Lake, Azure SQL Data Warehouse, Event Hubs, Functions, etc.).
- Familiarity with data lakes, data warehouses, and modern data architectures.
- Experience with CI/CD pipelines, version control (Git), Jenkins, and agile methodologies.
- Understanding of cloud infrastructure and architecture principles (especially within Azure).

Technical Skills:
- Expert-level proficiency in Spark and Spark Streaming, including optimization, debugging, and troubleshooting Spark jobs.
- Solid knowledge of Azure Databricks for scalable, distributed data processing.
- Strong coding skills in Python and Scala for data processing.
- Experience working with SQL, especially for large datasets.
- Knowledge of data formats like Iceberg, Parquet, ORC, and Delta Lake.

Leadership Skills:
- Proven ability to lead and mentor a team of data engineers, ensuring adherence to best practices.
- Excellent communication skills, capable of interacting with both technical and non-technical stakeholders.
- Strong problem-solving, analytical, and troubleshooting abilities.
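
For illustration only: a minimal Spark Structured Streaming sketch of the real-time ingestion pattern this posting mentions (Kafka into a Delta table on Databricks). The broker, topic, and path values are hypothetical.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka_to_delta").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker-1:9092")   # hypothetical broker
         .option("subscribe", "clickstream")                    # hypothetical topic
         .option("startingOffsets", "latest")
         .load()
         .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")
)

query = (
    events.writeStream.format("delta")
          .option("checkpointLocation", "/mnt/lake/_checkpoints/clickstream")
          .outputMode("append")
          .start("/mnt/lake/bronze/clickstream")
)
query.awaitTermination()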

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Pune

Hybrid

Role & responsibilities
- Design and implement end-to-end data pipelines using dbt and Snowflake.
- Create and structure dbt models (staging, transformation, marts), YAML configurations for models and tests, and dbt seeds.
- Hands-on experience with dbt Jinja templating, macro development, dbt jobs, and snapshot management for slowly changing dimensions.
- Develop Python scripts for data cleaning, transformation, and automation of repetitive tasks.
- Experience loading structured and semi-structured data from AWS S3 into Snowflake by designing file formats, configuring storage integration, and automating data loads using Snowpipe.
- Design scalable incremental models for handling large datasets while reducing resource usage.

Preferred candidate profile: Candidates must have 5+ years of experience and be early joiners who can join within a month.
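
For illustration only: a minimal pandas sketch of the "Python scripts for data cleaning and transformation" this posting mentions, producing a cleaned file ready for staging to S3 and Snowpipe ingestion. File and column names are hypothetical.

import pandas as pd

df = pd.read_csv("raw_orders.csv")                        # hypothetical extract
df = df.drop_duplicates(subset=["order_id"])
df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
df = df.dropna(subset=["order_id", "order_date"])
df["amount"] = df["amount"].fillna(0).astype(float)

# Write the cleaned file for staging to S3 / Snowpipe ingestion.
df.to_csv("clean_orders.csv", index=False)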

Posted 1 month ago

Apply

8.0 - 13.0 years

20 - 25 Lacs

Hyderabad

Work from Office

Bachelor's degree in Computer Science, Engineering, or a related field; Master's degree preferred.
- Data: 5+ years of experience with data analytics and data warehousing; sound knowledge of data warehousing concepts.
- SQL: 5+ years of hands-on experience with SQL and query optimization for data pipelines.
- ELT/ETL: 5+ years of experience in Informatica / 3+ years of experience in IICS/IDMC.
- Migration Experience: Experience with Informatica on-prem to IICS/IDMC migration.
- Cloud: 5+ years of experience working in an AWS cloud environment.
- Python: 5+ years of hands-on development experience with Python.
- Workflow: 4+ years of experience with orchestration and scheduling tools (e.g., Apache Airflow).
- Advanced Data Processing: Experience using data processing technologies such as Apache Spark or Kafka.
- Troubleshooting: Experience with troubleshooting and root cause analysis to determine and remediate potential issues.
- Communication: Excellent communication, problem-solving, organizational, and analytical skills; able to work independently and to provide leadership to small teams of developers.
- Reporting: Experience with data reporting tools (e.g., MicroStrategy, Tableau, Looker) and data cataloging tools (e.g., Alation).
- Experience in the design and implementation of ETL solutions with effective design and optimized performance, and ETL development following industry-standard recommendations for job recovery, failover, logging, and alerting mechanisms.
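
For illustration only: a minimal Apache Airflow sketch of the orchestration experience this posting asks for, chaining an extract step and a load step in a daily DAG. The DAG name, schedule, and task bodies are hypothetical placeholders.

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from the source system")      # placeholder for real extract logic

def load():
    print("load data into the warehouse")          # placeholder for real load logic

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task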

Posted 1 month ago

Apply

7.0 - 12.0 years

16 - 27 Lacs

Hyderabad

Work from Office

Job Description - Data Engineer
We are seeking a highly skilled Data Engineer with extensive experience in Snowflake, dbt (Data Build Tool), SnapLogic, SQL Server, PostgreSQL, Azure Data Factory, and other ETL tools. The ideal candidate will have a strong ability to optimize SQL queries and a good working knowledge of Python. A positive attitude and excellent teamwork skills are essential.

Role & responsibilities:
- Data Pipeline Development: Design, develop, and maintain scalable data pipelines using Snowflake, dbt, SnapLogic, and ETL tools.
- SQL Optimization: Write and optimize complex SQL queries to ensure high performance and efficiency.
- Data Integration: Integrate data from various sources, ensuring consistency, accuracy, and reliability.
- Database Management: Manage and maintain SQL Server and PostgreSQL databases.
- ETL Processes: Develop and manage ETL processes to support data warehousing and analytics.
- Collaboration: Work closely with data analysts, data scientists, and business stakeholders to understand data requirements and deliver solutions.
- Documentation: Maintain comprehensive documentation of data models, data flows, and ETL processes.
- Troubleshooting: Identify and resolve data-related issues and discrepancies.
- Python Scripting: Utilize Python for data manipulation, automation, and integration tasks.

Preferred candidate profile:
- Proficiency in Snowflake, dbt, SnapLogic, SQL Server, PostgreSQL, and Azure Data Factory.
- Strong SQL skills with the ability to write and optimize complex queries.
- Knowledge of Python for data manipulation and automation.
- Knowledge of data governance frameworks and best practices.
- Soft skills: excellent problem-solving and analytical skills; strong communication and collaboration skills; positive attitude and ability to work well in a team environment.
- Certifications: relevant certifications (e.g., Snowflake, Azure) are a plus.

Please forward your updated profile to divyateja.s@prudentconsulting.com

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Pune, Chennai, Bengaluru

Hybrid

Our client is a global IT services & consulting organization.
Experience: 5+ years
Skill: Apache Spark
Location: Bangalore, Hyderabad, Pune, Chennai, Coimbatore, Gr. Noida
- Excellent knowledge of Spark; a thorough understanding of the Spark framework, performance tuning, etc.
- Excellent knowledge of and at least 4+ years of hands-on experience in Scala or PySpark.
- Excellent knowledge of the Hadoop ecosystem; knowledge of Hive is mandatory.
- Strong Unix and shell scripting skills.
- Excellent interpersonal skills and, for experienced candidates, excellent leadership skills.
- Good knowledge of any of the CSPs like Azure, AWS, or GCP is mandatory; certifications on Azure will be an additional plus.

Posted 1 month ago

Apply

4.0 - 8.0 years

10 - 20 Lacs

Hyderabad, Pune, Chennai

Work from Office

Very strong in Python, PySpark, and SQL. Good experience in any cloud; the team uses AWS, but any cloud experience is acceptable. They will train on other tools, but experience with ETL orchestration (such as Airflow on AWS) and cloud data platforms like Snowflake is a plus.

Posted 1 month ago

Apply

5.0 - 7.0 years

10 - 15 Lacs

Hyderabad

Work from Office

Role description
We're looking for a driven, organized team member to support the Digital & Analytics team on Talent transformation projects from both a systems and process perspective. The role will primarily provide PMO support. The individual will need to demonstrate strong project management skills and be very collaborative and detail-oriented to coordinate meetings and track and update project plans, risks, and decision logs. This individual will also need to create project materials, support design sessions and user acceptance testing, and escalate project or system issues as needed.

Work you'll do - as the Talent PMO Support you will:
- Support the Digital & Analytics Manager with Talent transformation projects.
- Track and drive closure of action items and open decisions.
- Schedule follow-up calls, take notes, and distribute action items from discussions.
- Coordinate with Talent process owners and subject matter advisors to manage change requests and risks, actions, and decisions.
- Coordinate across Talent, Technology, and Consulting teams to track and escalate issues as appropriate.
- Update the Talent project plan items, resource tracker, and the risks, actions, and decisions log as needed.
- Leverage the shared project team site and OneNote notebook to ensure structure and access to communications, materials, and documents for all project team members.
- Support testing, cut-over, training, and service rehearsal testing processes as needed.
- Collaborate with the Consulting, Technology, and Talent team members to ensure project deliverables move forward.

Qualifications:
- Bachelor's degree and 5-7 years of relevant work experience.
- Background and experience with project management support to implement Talent processes, from ideation through deployment phases.
- Strong written/verbal executive communication and presentation skills; strong listening, facilitation, and influencing skills with audiences at all management and leadership levels.
- Works well in a dynamic, complex, client- and team-focused environment with minimal oversight and an agile mindset.
- Excited by the prospect of working in a developing, ambiguous, and challenging situation.
- Proficient Microsoft Office skills (e.g., PowerPoint, Excel, OneNote, Word, Teams).

Posted 1 month ago

Apply

5.0 - 9.0 years

13 - 22 Lacs

Hyderabad

Hybrid

Key Responsibilities:
1. Design, build, and deploy new data pipelines within our Big Data ecosystems using StreamSets/Talend/Informatica BDM, etc. Document new/existing pipelines and datasets.
2. Design ETL/ELT data pipelines using StreamSets, Informatica, or any other ETL processing engine. Familiarity with data pipelines, data lakes, and modern data warehousing practices (virtual data warehouse, push-down analytics, etc.).
3. Expert-level programming skills in Python.
4. Expert-level programming skills in Spark.
5. Cloud-based infrastructure: GCP.
6. Experience with one of the ETL tools (Informatica, StreamSets) in the creation of complex parallel loads, cluster batch execution, and dependency creation using Jobs/Topologies/Workflows, etc.
7. Experience in SQL and conversion of SQL stored procedures into Informatica/StreamSets; strong exposure working with web service origins/targets/processors/executors, XML/JSON sources, and RESTful APIs.
8. Strong exposure working with relational databases (DB2, Oracle & SQL Server), including complex SQL constructs and DDL generation.
9. Exposure to Apache Airflow for scheduling jobs.
10. Strong knowledge of Big Data architecture (HDFS), cluster installation, configuration, monitoring, cluster security, cluster resource management, maintenance, and performance tuning.
11. Create POCs to enable new workloads and technical capabilities on the platform.
12. Work with the platform and infrastructure engineers to implement these capabilities in production.
13. Manage workloads and enable workload optimization, including managing resource allocation and scheduling across multiple tenants to fulfill SLAs.
14. Participate in planning and Data Science activities and perform activities to increase platform skills.

Key Requirements:
1. Minimum 6 years of experience in ETL/ELT technologies, preferably StreamSets/Informatica/Talend, etc.
2. Minimum of 6 years of hands-on experience with Big Data technologies, e.g., Hadoop, Spark, Hive.
3. Minimum 3+ years of experience with Spark.
4. Minimum 3 years of experience in cloud environments, preferably GCP.
5. Minimum of 2 years working in Big Data service delivery (or equivalent) roles focusing on the following disciplines:
6. Any experience with NoSQL and graph databases.
7. Informatica or StreamSets data integration (ETL/ELT).
8. Exposure to role- and attribute-based access controls.
9. Hands-on experience with managing solutions deployed in the cloud, preferably on GCP.
10. Experience working in a global company; working in a DevOps model is a plus.

Posted 1 month ago

Apply

5.0 - 9.0 years

13 - 20 Lacs

Bangalore Rural, Bengaluru

Hybrid

Job Title: Data Engineer
Company: Aqilea India (Client: H&M India)
Employment Type: Full Time
Location: Bangalore (Hybrid)
Experience: 4.5 to 9 years
Website: https://career.hm.com/

At H&M, we welcome you to be yourself and feel like you truly belong. Help us reimagine the future of an entire industry by making everyone look, feel, and do good. We take pride in our history of making fashion accessible to everyone, and led by our values we strive to build a more welcoming, inclusive, and sustainable industry. We are privileged to have more than 120,000 colleagues in over 75 countries across the world. That's 120,000 individuals with unique experiences, skills, and passions. At H&M, we believe everyone can make an impact, and we believe in giving people responsibility and a strong sense of ownership. Our business is your business, and when you grow, we grow.

We are seeking a skilled and forward-thinking Data Engineer to join our Emerging Tech team. This role is designed for someone passionate about working with cutting-edge technologies such as AI, machine learning, IoT, and big data to turn complex data sets into actionable insights. As the Data Engineer in Emerging Tech, you will be responsible for designing, implementing, and optimizing data architectures and processes that support the integration of next-generation technologies. Your role will involve working with large-scale datasets, building predictive models, and utilizing emerging tools to enable data-driven decision-making across the business. You'll collaborate with technical and business teams to uncover insights, streamline data pipelines, and ensure the best use of advanced analytics technologies.

Key Responsibilities:
- Design and build scalable data architectures and pipelines that support machine learning, analytics, and IoT initiatives.
- Develop and optimize data models and algorithms to process and analyse large-scale, complex data sets.
- Implement data governance, security, and compliance measures to ensure high-quality data.
- Collaborate with cross-functional teams (engineering, product, and business) to translate business requirements into data-driven solutions.
- Evaluate, integrate, and optimize new data technologies to enhance analytics capabilities and drive business outcomes.
- Apply statistical methods, machine learning models, and data visualization techniques to deliver actionable insights.
- Establish best practices for data management, including data quality, consistency, and scalability.
- Conduct analysis to identify trends, patterns, and correlations within data to support strategic business initiatives.
- Stay updated on the latest trends and innovations in data technologies and emerging data management practices.

Skills Required:
- Bachelor's or Master's degree in Data Science, Computer Science, Engineering, Statistics, or a related field.
- 4.5-9 years of experience in data engineering, data science, or a similar analytical role, with a focus on emerging technologies.
- Proficiency with big data frameworks (e.g., Hadoop, Spark, Kafka) and experience with modern cloud platforms (AWS, Azure, or GCP).
- Solid skills in Python, SQL, and optionally R, along with experience using machine learning libraries such as Scikit-learn, TensorFlow, or PyTorch.
- Experience with data visualization tools (e.g., Tableau, Power BI, or D3.js) to communicate insights effectively.
- Familiarity with IoT and edge computing data architectures is a plus.
- Understanding of data governance, compliance, and privacy standards.
- Ability to work with both structured and unstructured data.
- Excellent problem-solving, communication, and collaboration skills, with the ability to work in a fast-paced, cross-functional team environment.
- A passion for emerging technologies and a continuous desire to learn and innovate.

Interested candidates can share their resumes with karthik.prakadish@aqilea.com

Posted 1 month ago

Apply

8.0 - 13.0 years

18 - 33 Lacs

Bengaluru

Hybrid

Warm greetings from SP Staffing!!
Role: AWS Data Engineer
Experience Required: 8 to 15 years
Work Location: Bangalore
Required Skills:
- Technical knowledge of data engineering solutions and practices.
- Implementation of data pipelines using tools like EMR, AWS Glue, AWS Lambda, AWS Step Functions, API Gateway, and Athena.
- Proficient in Python and Spark, with a focus on ETL data processing and data engineering practices.
Interested candidates can send resumes to nandhini.spstaffing@gmail.com

Posted 1 month ago

Apply

6.0 - 11.0 years

11 - 21 Lacs

Kolkata, Pune, Chennai

Work from Office

Role & responsibilities: Data Engineer with expertise in AWS, Databricks, and PySpark.

Posted 1 month ago

Apply

6.0 - 11.0 years

8 - 18 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Work from Office

Role & responsibilities: Data Engineer with expertise in AWS, Databricks, and PySpark.

Posted 1 month ago

Apply

9.0 - 14.0 years

15 - 20 Lacs

Hyderabad

Work from Office

Job Description:
- SQL & Database Management: Deep knowledge of relational databases (PostgreSQL), cloud-hosted data platforms (AWS, Azure, GCP), and data warehouses like Snowflake.
- ETL/ELT Tools: Experience with SnapLogic, StreamSets, or DBT for building and maintaining data pipelines; extensive experience with data pipelines.
- Data Modeling & Optimization: Strong understanding of data modeling, OLAP systems, query optimization, and performance tuning.
- Cloud & Security: Familiarity with cloud platforms and SQL security techniques (e.g., data encryption, TDE).
- Data Warehousing: Experience managing large datasets and data marts and optimizing databases for performance.
- Agile & CI/CD: Knowledge of Agile methodologies and CI/CD automation tools.

Role & responsibilities
– Build data pipelines for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and cloud database technologies.
– Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data needs.
– Work with data and analytics experts to strive for greater functionality in our data systems.
– Assemble large, complex data sets that meet functional/non-functional business requirements.
– Quickly analyze existing SQL code and make improvements to enhance performance, take advantage of new SQL features, close security gaps, and increase robustness and maintainability of the code.
– Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery for greater scalability, etc.
– Unit test databases and perform bug fixes.
– Develop best practices for database design and development activities.
– Take on technical leadership responsibilities for database projects across various scrum teams.
– Manage exploratory data analysis to support dashboard development (desirable).

Required Skills:
– Strong experience in SQL with expertise in a relational database (PostgreSQL preferred, cloud-hosted in AWS/Azure/GCP) or any cloud-based data warehouse (like Snowflake, Azure Synapse).
– Competence in data preparation and/or ETL/ELT tools like SnapLogic, StreamSets, DBT, etc. (preferably strong working experience in one or more) to build and maintain complex data pipelines and flows that handle large volumes of data.
– Understanding of data modelling techniques and working knowledge of OLAP systems.
– Deep knowledge of databases, data marts, data warehouse enterprise systems, and handling of large datasets.
– In-depth knowledge of ingestion techniques, data cleaning, de-duplication, etc.
– Ability to fine-tune report-generating queries.
– Solid understanding of normalization and denormalization of data, database exception handling, profiling queries, performance counters, debugging, and database and query optimization techniques.
– Understanding of index design and performance-tuning techniques.
– Familiarity with SQL security techniques such as data encryption at the column level, Transparent Data Encryption (TDE), signed stored procedures, and assignment of user permissions.
– Experience in understanding source data from various platforms and mapping it into Entity Relationship (ER) models for data integration and reporting (desirable).
– Adhere to standards for all databases, e.g., data models, data architecture, and naming conventions.
– Exposure to source control like Git, Azure DevOps.
– Understanding of Agile methodologies (Scrum, Kanban).
– Experience with NoSQL databases to migrate data into other types of databases with real-time replication (desirable).
– Experience with CI/CD automation tools (desirable).
– Programming language experience in Golang, Python, or any programming language; visualization tools (Power BI/Tableau) (desirable).

Posted 1 month ago

Apply

5.0 - 7.0 years

15 - 22 Lacs

Chennai

Work from Office

Role & responsibilities:
Job Description: Primarily looking for a Data Engineer (AWS) with expertise in processing data pipelines using Databricks and PySpark SQL on cloud distributions like AWS.
Must have: AWS Databricks. Good to have: PySpark, Snowflake, Talend.

Requirements:
• Candidate must be experienced working in projects involving the areas below; other ideal qualifications include experience in:
• Primarily looking for a data engineer with expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions like AWS EMR, Databricks, Cloudera, etc.
• Should be very proficient in large-scale data operations using Databricks and overall very comfortable using Python.
• Familiarity with AWS compute, storage, and IAM concepts.
• Experience working with S3 Data Lake as the storage tier.
• Any ETL background (Talend, AWS Glue, etc.) is a plus but not required.
• Cloud warehouse experience (Snowflake, etc.) is a huge plus.
• Carefully evaluates alternative risks and solutions before taking action.
• Optimizes the use of all available resources.
• Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.

Skills:
• Hands-on experience with Databricks, Spark SQL, and the AWS Cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
• Experience with shell scripting.
• Exceptionally strong analytical and problem-solving skills.
• Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
• Strong experience with relational databases and data access methods, especially SQL.
• Excellent collaboration and cross-functional leadership skills.
• Excellent communication skills, both written and verbal.
• Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment.
• Ability to leverage data assets to respond to complex questions that require timely answers.
• Working knowledge of migrating relational and dimensional databases to the AWS Cloud platform.

Mandatory Skills: Apache Spark, Databricks, Java, Python, Scala, Spark SQL.
Note: Need only immediate joiners / candidates serving notice period. Interested candidates can apply.
Regards, HR Manager

Posted 1 month ago

Apply

10.0 - 12.0 years

25 - 30 Lacs

Pune, Mumbai (All Areas)

Hybrid

Role & responsibilities:
- Design and implement state-of-the-art NLP models, including but not limited to text classification, semantic search, sentiment analysis, named entity recognition, and summary generation.
- Conduct data preprocessing and feature engineering to improve model accuracy and performance.
- Stay updated with the latest developments in NLP and ML, and integrate cutting-edge techniques into our solutions.
- Collaborate with cross-functional teams: work closely with data scientists, software engineers, and product managers to align NLP projects with business objectives.
- Deploy models into production environments and monitor their performance to ensure robustness and reliability.
- Maintain comprehensive documentation of processes, models, and experiments, and report findings to stakeholders.
- Implement and deliver high-quality software solutions/components for the Credit Risk monitoring platform.
- Leverage your expertise to mentor developers; review code and ensure adherence to standards.
- Apply a broad range of software engineering practices, from analyzing user needs and developing new features to automated testing and deployment.
- Ensure the quality, security, reliability, and compliance of our solutions by applying our digital principles and implementing both functional and non-functional requirements.
- Build observability into our solutions, monitor production health, help to resolve incidents, and remediate the root cause of risks and issues.
- Understand, represent, and advocate for client needs.
- Share knowledge and expertise with colleagues, help with hiring, and contribute regularly to our engineering culture and internal communities.

Expertise:
- Bachelor of Engineering or equivalent. Ideally 8-10 years of experience in NLP-based applications focused on the Banking/Finance sector; preference for experience in financial data extraction and classification.
- Interested in learning new technologies and practices, reusing strategic platforms and standards, evaluating options, and making decisions with long-term sustainability in mind.
- Proficiency in programming languages such as Python and Java.
- Experience with frameworks like TensorFlow, PyTorch, or Keras.
- In-depth knowledge of NLP techniques and tools, including spaCy, NLTK, and Hugging Face.
- Experience with data handling and processing tools like Pandas, NumPy, and SQL.
- Prior experience in agentic AI, LLMs, prompt engineering, and generative AI is a plus.
- Backend development and microservices using Java Spring Boot, J2EE, and REST for implementing projects with high SLAs for data availability and data quality.
- Experience building cloud-ready applications and migrating applications using Azure, and an understanding of Azure native cloud services, software design, and enterprise integration patterns.
- Knowledge of SQL and PL/SQL (Oracle) and UNIX: writing queries and packages, working with joins and partitions, reading execution plans, and tuning queries.
- A real passion for and experience of Agile working practices, with a strong desire to work with baked-in quality subject areas such as TDD, BDD, test automation, and DevOps principles.
- Experience in Azure development, including Databricks, Azure Services, ADLS, etc.
- Experience using DevOps toolsets like GitLab and Jenkins.
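
For illustration only: a minimal sketch of two of the NLP tasks this posting lists (sentiment analysis and named entity recognition) using the Hugging Face transformers pipeline API. The default pipeline models downloaded here and the sample sentence are assumptions, not the team's actual models or data.

from transformers import pipeline

sentiment = pipeline("sentiment-analysis")
ner = pipeline("ner", aggregation_strategy="simple")

text = "The bank reported strong quarterly credit growth in its retail segment."
print(sentiment(text))   # e.g. [{'label': 'POSITIVE', 'score': ...}]
print(ner(text))         # grouped named entities with confidence scores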

Posted 1 month ago

Apply