
453 Data Engineer Jobs - Page 9

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

7.0 - 12.0 years

30 - 45 Lacs

Bengaluru

Work from Office

Lead Data Engineer - What You Will Do: As a PR3 Lead Data Engineer, you will be instrumental in driving our data strategy, ensuring data quality, and leading the technical execution of a small, impactful team. Your responsibilities will include:

Team Leadership: Establish the strategic vision for the evolution of our data products and technology solutions, then provide technical leadership and guidance to a small team of Data Engineers in executing the roadmap. Champion and enforce best practices for data quality, governance, and architecture within your team's work. Embody a product mindset over the team's data. Oversee the team's use of Agile methodologies (e.g., Scrum, Kanban), ensuring smooth and predictable delivery with an overt focus on continuous improvement.

Data Expertise & Domain Knowledge: Actively seek out, propose, and implement cutting-edge approaches to data transfer, transformation, analytics, and data warehousing to drive innovation. Design and implement scalable, robust, and high-quality ETL processes to support growing business demand for information, delivering data as a reliable service that directly influences decision making. Develop a profound understanding and "feel" for the business meaning, lineage, and context of each data field within our domain.

Communication & Stakeholder Partnership: Collaborate with other engineering teams and business partners, proactively managing dependencies and holding them accountable for their contributions to ensure successful project delivery. Actively engage with data consumers to achieve a deep understanding of their specific data usage, pain points, and current gaps, then plan initiatives to implement improvements collaboratively. Clearly articulate project goals, technical strategies, progress, challenges, and business value to both technical and non-technical audiences. Produce clear, concise, and comprehensive documentation.

Your Qualifications: At Vista, we value the experience and potential that individual team members add to our culture. Please don't hesitate to apply even if you don't meet the exact qualifications; we look forward to learning more about you!
- Bachelor's or Master's degree in computer science, data engineering, or a related field.
- 10+ years of professional experience, with at least 6 years of hands-on Data Engineering (specifically in e-commerce or direct-to-consumer) and 4 years of team leadership.
- Demonstrated experience in leading a team of data engineers, providing technical guidance, and coordinating project execution.
- Stakeholder management experience and excellent communication skills.
- Strong knowledge of SQL and data warehousing concepts is a must.
- Strong knowledge of Data Modeling concepts and hands-on experience designing complex multi-dimensional data models.
- Strong hands-on experience designing and managing scalable ETL pipelines in cloud environments with large-volume datasets (both structured and unstructured data).
- Proficiency with cloud services in AWS (preferred), including S3, EMR, RDS, Step Functions, Fargate, Glue, etc.
- Critical hands-on experience with cloud-based data platforms (Snowflake strongly preferred).
- Data visualization experience with reporting and data tools (preferably Looker with LookML skills).
- Coding mastery in at least one modern programming language: Python (strongly preferred), Java, Golang, PySpark, etc.
- Strong knowledge of production standards such as versioning, CI/CD, data quality, documentation, automation, etc.
- Problem-solving and multitasking ability in a fast-paced, globally distributed environment.

Nice To Have:
- Experience with API development on enterprise platforms, with GraphQL APIs being a clear plus.
- Hands-on experience designing DBT data pipelines.
- Knowledge of finance, accounting, supply chain, logistics, operations, or procurement data is a plus.
- Experience managing work in Jira and writing documentation in Confluence.
- Proficiency in AWS account management, including IAM, infrastructure, and monitoring for health, security, and cost optimization.
- Experience with Gen AI/ML tools for enhancing data pipelines or automating analysis.

Why You'll Love Working Here: There is a lot to love about working at Vista. We are an award-winning Remote-First company. We're an inclusive community. We're growing (which means you can too). And to help orient us all in the same direction, we have our Vista Behaviors, which exemplify the behavioral attributes that make us a culturally strong and high-performing team.

Our Team: Enterprise Business Solutions. Vista's Enterprise Business Solutions (EBS) domain is working to make our company one of the most data-driven organizations in support of the Finance, Supply Chain, and HR functions. The cross-functional team includes product owners, analysts, technologists, data engineers, and more, all focused on providing Vista with cutting-edge tools and data we can use to deliver jaw-dropping customer value. EBS team members are empowered to learn new skills, communicate openly, and be active problem-solvers.

Join our EBS Domain as a Lead Data Engineer! This Lead level within the organization will be responsible for the work of a small team of data engineers, focusing not only on implementations but also on operations and support. The Lead Data Engineer will implement best practices, data standards, and reporting tools, and will oversee and manage the work of other data engineers while also contributing individually. This role has significant opportunity to shape general ETL development and the implementation of new solutions. We will look to the Lead Data Engineer to modernize data technology solutions in EBS, including the opportunity to work on modern warehousing, finance, and HR datasets and integration technologies. The role requires an in-depth understanding of cloud data integration tools and cloud data warehousing, with a strong, pronounced ability to lead and execute initiatives to tangible results.

Posted 2 months ago

Apply

2.0 - 6.0 years

0 Lacs

Pune, Maharashtra

On-site

About Mindstix Software Labs: Mindstix accelerates digital transformation for the world's leading brands. We are a team of passionate innovators specializing in Cloud Engineering, DevOps, Data Science, and Digital Experiences. Our UX studio and modern-stack engineers deliver world-class products for our global customers, including Fortune 500 enterprises and Silicon Valley startups. Our work impacts a diverse set of industries: eCommerce, Luxury Retail, ISV and SaaS, Consumer Tech, and Hospitality. A fast-moving open culture powered by curiosity and craftsmanship. A team committed to bold thinking and innovation at the very intersection of business, technology, and design. That's our DNA.

Roles and Responsibilities: Mindstix is looking for a proficient Data Engineer. You are a collaborative person who takes pleasure in finding solutions to issues that add to the bottom line. You appreciate hands-on technical work and feel a sense of ownership. The role requires a keen eye for detail, work experience as a data analyst, and in-depth knowledge of widely used databases and technologies for data analysis. Your responsibilities include:
- Building outstanding domain-focused data solutions with internal teams, business analysts, and stakeholders.
- Applying data engineering practices and standards to develop robust and maintainable solutions.
- Being motivated by a fast-paced, service-oriented environment and interacting directly with clients on new features for future product releases.
- Being a natural problem-solver and intellectually curious across a breadth of industries and topics.
- Being acquainted with different aspects of Data Management such as Data Strategy, Architecture, Governance, Data Quality, Integrity, and Data Integration.
- Being extremely well-versed in designing incremental and full data load techniques.

Qualifications and Skills:
- Bachelor's or Master's degree in Computer Science, Information Technology, or allied streams.
- 2+ years of hands-on experience in the data engineering domain with DWH development.
- Must have experience with end-to-end data warehouse implementation on Azure or GCP.
- Must have SQL and PL/SQL skills, implementing complex queries and stored procedures.
- Solid understanding of DWH concepts such as OLAP, ETL/ELT, RBAC, Data Modelling, Data-Driven Pipelines, Virtual Warehousing, and MPP.
- Expertise in Databricks: Structured Streaming, Lakehouse Architecture, DLT, Data Modeling, Vacuum, Time Travel, Security, Monitoring, Dashboards, DBSQL, and Unit Testing.
- Expertise in Snowflake: Monitoring, RBACs, Virtual Warehousing, Query Performance Tuning, and Time Travel.
- Understanding of Apache Spark, Airflow, Hudi, Iceberg, Nessie, NiFi, Luigi, and Arrow (good to have).
- Strong foundations in computer science, data structures, algorithms, and programming logic.
- Excellent logical reasoning and data interpretation capability.
- Ability to interpret business requirements accurately.
- Exposure to working with multicultural, international customers.
- Experience in the Retail, Supply Chain, CPG, eCommerce, or Health industry is a plus.

Who Fits Best?
- You are a data enthusiast and problem solver.
- You are a self-motivated, fast learner with a strong sense of ownership and drive.
- You enjoy working in a fast-paced creative environment.
- You appreciate great design, have a strong sense of aesthetics, and have a keen eye for detail.
- You thrive in a customer-centric environment with the ability to actively listen, empathize, and collaborate with globally distributed teams.
- You are a team player who desires to mentor and inspire others to do their best.
- You love expressing ideas and articulate them well, with strong written and verbal English communication and presentation skills.
- You are detail-oriented with an appreciation for craftsmanship.

Benefits:
- Flexible working environment.
- Competitive compensation and perks.
- Health insurance coverage.
- Accelerated career paths.
- Rewards and recognition.
- Sponsored certifications.
- Global customers.
- Mentorship by industry leaders.

Location: This position is primarily based at our Pune (India) headquarters, requiring all potential hires to work from this location. A modern workplace is deeply collaborative by nature, while also demanding a touch of flexibility. We embrace deep collaboration at our offices, with reasonable flexi-timing and hybrid options for our seasoned team members. Equal Opportunity Employer.

Posted 2 months ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Lead Software Engineer at Mastercard, you will play a crucial role in the data science and artificial intelligence initiatives that drive our digital transformation forward. Your expertise will guide complex projects from conception to execution, supporting our aggressive growth plans and contributing to the evolution of our data science and AI strategy.

Your responsibilities will include providing technical vision and leadership, engaging in prioritization discussions with product and business stakeholders, estimating and managing delivery tasks across the entire development lifecycle, automating software operations, and facilitating code and design decisions within your team. You will be responsible for reporting status, managing risks, driving service integration for enhanced customer experience, conducting demos and acceptance discussions, and ensuring a deep understanding of the technical architecture and dependency systems. In addition, you will be expected to explore new tools and technologies, drive the adoption of technology standards and frameworks, mentor and guide team members, identify process improvements, and promote knowledge sharing within the Guild/Program to enhance productivity and reuse of best practices.

To excel in this role, you should hold a Bachelor's degree in computer science, software engineering, or a related field, with at least 8 years of experience in software engineering combined with exposure to data science and machine learning. Proficiency in programming languages such as Python, Java, or Scala, along with frameworks like Pandas and Spring Boot, is essential. Prior experience with MLOps, familiarity with Neural Networks and LLMs, and an understanding of operating system internals are highly valued skills. Your ability to debug, troubleshoot, implement standard branching and CI/CD practices, collaborate effectively, and communicate with stakeholders will be crucial. Experience with cloud platforms like AWS, Azure, and Databricks is a plus.

If you are an experienced software engineer with a passion for data science and AI, and possess the skills and mindset to drive innovation and excellence in a fast-paced environment, we welcome you to join our dynamic team at Mastercard.

Posted 2 months ago

Apply

7.0 - 12.0 years

8 - 16 Lacs

Bengaluru

Remote

Key role responsibilities:
- Develop and implement data pipelines and systems to connect and process data for analytics and business intelligence (BI) platforms.
- Document systems and source-to-target mappings to ensure transparency and a clear understanding of data flow.
- Re-engineer manual data flows to enable scalability, automation, and efficiency for repeatable use.
- Adhere to and contribute to best practice guidelines, continuously striving for optimization and improvement.
- Write clean, secure, and well-tested code, ensuring reliability, maintainability, and compliance with development standards.
- Monitor and operate the services and pipelines you build, proactively identifying and resolving production issues.
- Assess and prioritize feature requests based on business needs, technical feasibility, and impact.
- Identify opportunities to optimize existing data flows, promoting efficiency and reducing redundancy.
- Collaborate closely with team members and stakeholders to align efforts and achieve shared objectives.
- Implement data quality checks and validation processes to ensure accuracy and resolve data inconsistencies.

Requirements and Skills:
- Strong background in Software Engineering, with proficiency in Python development (3+ years of experience).
- Excellent problem-solving, communication, and organizational skills.
- Ability to work independently and collaboratively within a team environment.
- Understanding of industry-recognized data modelling patterns and standards, and their practical application.
- Familiarity with data security and privacy principles, ensuring compliance with governance and regulatory requirements.
- Proficiency in SQL, with experience in PostgreSQL database management.
- Experience in API implementation and integration, with an understanding of REST principles and best practices.
- Knowledge of validation libraries like Marshmallow or Pydantic (see the sketch after this list).
- Expertise in Pandas, Polars, or similar libraries for data manipulation and analysis.
- Proficiency in workflow orchestration tools like Apache Airflow and Dagster, ensuring efficient data pipeline scheduling and execution.
- Experience working with Apache Iceberg, enabling optimized data management and storage within large-scale analytics environments.
- Understanding of data lake architectures, leveraging scalable storage solutions for structured and unstructured data.
- Familiarity with data warehouse solutions, ensuring efficient data processing, query performance, and analytics workflows.
- Knowledge of operating systems (Linux) and modern development practices, including infrastructure deployment (DevOps).
- Proficiency in code versioning tools such as Git/GitHub, and experience with CI/CD pipelines (e.g., CircleCI).
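For illustration, a minimal sketch of row validation with Pydantic, one of the validation libraries this posting names. The record schema, field names, and quarantine approach are assumptions for demonstration, not from the posting:

```python
from datetime import date
from pydantic import BaseModel, ValidationError

class OrderRecord(BaseModel):
    # Hypothetical schema for one ingested row; field names are illustrative.
    order_id: int
    customer_email: str
    amount: float
    order_date: date

def validate_rows(rows: list[dict]) -> tuple[list[OrderRecord], list[dict]]:
    """Split raw rows into validated records and rejects for quarantine."""
    valid, rejected = [], []
    for row in rows:
        try:
            valid.append(OrderRecord(**row))  # Pydantic coerces and validates
        except ValidationError as exc:
            rejected.append({"row": row, "errors": exc.errors()})
    return valid, rejected

if __name__ == "__main__":
    sample = [
        {"order_id": 1, "customer_email": "a@b.com", "amount": "19.99", "order_date": "2024-01-05"},
        {"order_id": "not-an-int", "customer_email": "c@d.com", "amount": 5, "order_date": "2024-01-06"},
    ]
    ok, bad = validate_rows(sample)
    print(f"{len(ok)} valid, {len(bad)} rejected")
```

Rejected rows would typically be written to a quarantine table rather than dropped, so the quality check described in the responsibilities stays auditable.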

Posted 2 months ago

Apply

6.0 - 11.0 years

18 - 32 Lacs

Hyderabad

Hybrid

Job Title: Senior Data Engineer (Python, PySpark, AWS)
Experience Required: 6 to 12 Years
Location: Hyderabad
Job Type: Full Time / Permanent

Job Description: We are looking for a passionate and experienced Senior Data Engineer to join our team in Hyderabad. The ideal candidate should have a strong background in data engineering on AWS, with hands-on expertise in Python, PySpark, and AWS services to build and maintain scalable data pipelines and ETL workflows.

Mandatory Skills:
- Data Engineering
- Python
- PySpark
- AWS Services (S3, Glue, Lambda, Redshift, RDS, EC2, Data Pipeline)

Key Responsibilities:
- Design and implement robust, scalable data pipelines using PySpark, AWS Glue, and AWS Data Pipeline (see the sketch after this list).
- Develop and maintain efficient ETL workflows to handle large-scale data processing.
- Automate data workflows and job orchestration using AWS Data Pipeline.
- Ensure smooth data integration across services like S3, Redshift, and RDS.
- Optimize data processing for performance and cost efficiency on the cloud.
- Work with various file formats like CSV, Parquet, and Avro.

Technical Requirements:
- 8+ years of experience in Data Engineering, particularly in cloud-based environments.
- Proficient in Python and PySpark for data transformation and manipulation.
- Strong experience with AWS Glue for ETL development, Data Catalog, and Crawlers.
- Solid knowledge of SQL for querying structured and semi-structured data.
- Familiar with Data Lake architectures, Amazon EMR, and Kinesis.
- Experience with Docker, Git, and CI/CD pipelines for deployment and versioning.

Interested candidates can also share their CV at akanksha.s@esolglobal.com
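For reference, a minimal PySpark sketch of the CSV-to-Parquet ETL pattern this posting describes; the S3 paths, column names, and quality rule are illustrative assumptions. The same script can run on EMR or as a Glue Spark job:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical S3 locations; replace with real bucket/prefixes.
SRC = "s3://example-bucket/raw/orders/"
DST = "s3://example-bucket/curated/orders/"

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv(SRC))

curated = (raw
           .dropDuplicates(["order_id"])                       # basic dedup
           .withColumn("order_date", F.to_date("order_date"))  # normalize types
           .filter(F.col("amount") > 0))                       # simple quality rule

# Partitioned Parquet keeps downstream scans (Athena/Redshift Spectrum) cheap.
curated.write.mode("overwrite").partitionBy("order_date").parquet(DST)
```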

Posted 2 months ago

Apply

6.0 - 11.0 years

25 - 30 Lacs

Mumbai, Mumbai Suburban, Mumbai (All Areas)

Work from Office

Experience in using SQL, PL/SQL, or T-SQL with RDBMSs like Teradata, MS SQL Server, or Oracle in production environments. Experience with Python, ADF, Azure, and Databricks. Experience working with Microsoft Azure, AWS, or other leading cloud platforms.

Required Candidate profile: Hands-on experience with Hadoop, Spark, Hive, or similar frameworks. Data Integration & ETL. Data Modelling. Database management. Data warehousing. Big-data frameworks. CI/CD.

Perks and benefits: To be disclosed post interview.

Posted 2 months ago

Apply

7.0 - 12.0 years

17 - 27 Lacs

Hyderabad

Work from Office

Job Title: Data Quality Engineer

Mandatory Skills: Data Engineer, Python, AWS, SQL, Glue, Lambda, S3, SNS, ML, SQS

Job Summary: We are seeking a highly skilled Data Engineer (SDET) to join our team, responsible for ensuring the quality and reliability of complex data workflows, data migrations, and analytics solutions across both cloud and on-premises environments. The ideal candidate will have extensive experience in SQL, Python, AWS, and ETL testing, along with a strong background in data quality assurance, data science platforms, DevOps pipelines, and automation frameworks. This role involves close collaboration with business analysts, developers, and data architects to support end-to-end testing, data validation, and continuous integration for data products. Expertise in tools like Redshift, EMR, Athena, Jenkins, and various ETL platforms is essential, as is experience with NoSQL databases, big data technologies, and cloud-native testing strategies.

Role and Responsibilities:
- Work with business stakeholders, Business Systems Analysts, and Developers to ensure quality delivery of software.
- Interact with key business functions to confirm data quality policies and governed attributes.
- Follow quality management best practices and processes to bring consistency and completeness to integration service testing.
- Design and manage the testing AWS environments of data workflows during development and deployment of data products.
- Assist the team in test estimation and test planning.
- Design and develop reports and dashboards.
- Analyze and evaluate data sources, data volume, and business rules.
- Proficiency with SQL; familiarity with Python, Scala, Athena, EMR, Redshift, and AWS.
- NoSQL and unstructured data experience.
- Extensive experience in programming tools from MapReduce to HiveQL.
- Experience in data science platforms like SageMaker, Machine Learning Studio, or H2O.
- Should be well versed with the data flow and test strategy for cloud/on-prem ETL testing.
- Interpret and analyze data from various source systems to support data integration and data reporting needs.
- Experience in testing database applications to validate source-to-destination data movement and transformation (see the validation sketch after this list).
- Work with team leads to prioritize business and information needs.
- Develop complex SQL scripts (primarily advanced SQL) for cloud and on-prem ETL.
- Develop and summarize data quality analysis and dashboards.
- Knowledge of data modeling and data warehousing concepts with emphasis on cloud/on-prem ETL.
- Execute testing of data analytics and data integration on time and within budget.
- Troubleshoot and determine the best resolution for data issues and anomalies.
- Experience in Functional Testing, Regression Testing, System Testing, Integration Testing, and End-to-End testing.
- Deep understanding of data architecture and data modeling best practices and guidelines for different data and analytics platforms.

Required Skills and Qualifications:
- Extensive experience in data migration is a must (Teradata to Redshift preferred).
- Extensive testing experience with SQL/Unix/Linux scripting is a must.
- Extensive experience testing cloud/on-prem ETL (e.g., Ab Initio, Informatica, SSIS, DataStage, Alteryx, Glue).
- Extensive experience with DBMSs like Oracle, Teradata, SQL Server, DB2, Redshift, Postgres, and Sybase.
- Extensive experience using Python scripting and AWS and cloud technologies, including Athena, EMR, and Redshift.
- Experienced in large-scale application development testing: cloud/on-prem data warehouse, data lake, and data science.
- Experience with multi-year, large-scale projects.
- Expert technical skills with hands-on testing experience using SQL queries.
- Extensive experience with both data migration and data transformation testing.
- API/REST Assured automation, building reusable frameworks, and good technical expertise/acumen.
- Java/JavaScript: core Java, integration, and APIs.
- Functional/UI/Selenium: BDD/Cucumber, SpecFlow, data validation, Kafka, Big Data; automation experience using Cypress.
- AWS/Cloud: Jenkins, GitLab, EC2, S3; building Jenkins and CI/CD pipelines; Sauce Labs.

Preferred Skills:
- API/REST API: REST APIs and microservices using JSON, SoapUI.
- Extensive experience in the DevOps/DataOps space.
- Strong experience working with DevOps and build pipelines.
- Strong experience with AWS data services including Redshift, Glue, Kinesis, Kafka (MSK), EMR/Spark, SageMaker, etc.
- Experience with technologies like Kubeflow, EKS, and Docker.
- Extensive experience with NoSQL and unstructured data stores like MongoDB, Cassandra, Redis, and ZooKeeper.
- Extensive experience in MapReduce using tools like Hadoop, Hive, Pig, Kafka, S4, and MapR.
- Experience using Jenkins and GitLab.
- Experience using both Waterfall and Agile methodologies.
- Experience in testing storage tools like S3 and HDFS.
- Experience with one or more industry-standard defect or test case management tools.
- Great communication skills (regularly interacts with cross-functional team members).
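To make the source-to-target validation concrete, a minimal hedged sketch using SQLAlchemy. Everything here is hypothetical: the connection URLs (which assume the teradatasqlalchemy and sqlalchemy-redshift dialect packages are installed), the table name, and the reconciliation checks themselves:

```python
from sqlalchemy import create_engine, text

# Hypothetical connection strings for a Teradata source and Redshift target.
SOURCE_URL = "teradatasql://user:pwd@source-host/db"
TARGET_URL = "redshift+psycopg2://user:pwd@target-host:5439/db"

CHECKS = [
    # (description, source query, target query); results must match.
    ("row count", "SELECT COUNT(*) FROM orders", "SELECT COUNT(*) FROM orders"),
    ("amount sum", "SELECT SUM(amount) FROM orders", "SELECT SUM(amount) FROM orders"),
]

def run_checks() -> bool:
    src = create_engine(SOURCE_URL)
    tgt = create_engine(TARGET_URL)
    all_ok = True
    with src.connect() as s, tgt.connect() as t:
        for name, src_sql, tgt_sql in CHECKS:
            a = s.execute(text(src_sql)).scalar()
            b = t.execute(text(tgt_sql)).scalar()
            ok = a == b
            all_ok &= ok
            print(f"{name}: source={a} target={b} -> {'PASS' if ok else 'FAIL'}")
    return all_ok

if __name__ == "__main__":
    # Non-zero exit lets a CI/CD stage (e.g., Jenkins) fail the pipeline.
    raise SystemExit(0 if run_checks() else 1)
```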

Posted 2 months ago

Apply

0.0 - 4.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Junior Azure Data Engineer at dotSolved, you will be responsible for designing, implementing, and managing scalable data solutions on Azure. Your primary focus will be on building and maintaining data pipelines, integrating data from various sources, and ensuring data quality and security. Proficiency in Azure services such as Data Factory, Databricks, and Synapse Analytics is essential as you optimize data workflows for analytics and reporting purposes. Collaboration with stakeholders is a key aspect of this role to ensure alignment with business goals and performance standards.

Your responsibilities will include designing, developing, and maintaining data pipelines and workflows using Azure services; implementing data integration, transformation, and storage solutions to support analytics and reporting; ensuring data quality, security, and compliance with organizational and regulatory standards; optimizing data solutions for performance, scalability, and cost efficiency; and collaborating with cross-functional teams to gather requirements and deliver data-driven insights.

This position is based in Chennai and Bangalore, offering you the opportunity to work in a dynamic and innovative environment where you can contribute to the digital transformation journey of enterprises across various industries.

Posted 2 months ago

Apply

4.0 - 9.0 years

20 - 35 Lacs

Bengaluru

Work from Office

Looking for a Data Engineer with 4+ years of experience. Skills: Azure functionalities, AWS Lambda, serverless, Python, APIs, Snowflake. Work from office: Bengaluru (Yeshvanthpur), India.

Posted 2 months ago

Apply

10.0 - 17.0 years

20 - 27 Lacs

Hyderabad

Work from Office

Required Skills and Qualifications:
- Extensive experience in data migration is a must (Teradata to Redshift preferred).
- Extensive testing experience with SQL/Unix/Linux scripting is a must.
- Extensive experience testing cloud/on-prem ETL (e.g., Ab Initio, Informatica, SSIS, DataStage, Alteryx, Glue).
- Extensive experience with DBMSs like Oracle, Teradata, SQL Server, DB2, Redshift, Postgres, and Sybase.
- Extensive experience using Python scripting and AWS and cloud technologies, including Athena, EMR, and Redshift.
- Experienced in large-scale application development testing: cloud/on-prem data warehouse, data lake, and data science.
- Experience with multi-year, large-scale projects.
- Expert technical skills with hands-on testing experience using SQL queries.
- Extensive experience with both data migration and data transformation testing.
- API/REST Assured automation, building reusable frameworks, and good technical expertise/acumen.
- Java/JavaScript: core Java, integration, and APIs.
- Functional/UI/Selenium: BDD/Cucumber, SpecFlow, data validation, Kafka, Big Data; automation experience using Cypress.
- AWS/Cloud: Jenkins, GitLab, EC2, S3; building Jenkins and CI/CD pipelines; Sauce Labs.

Posted 2 months ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad

Hybrid

We are seeking a Lead Snowflake Engineer. The ideal candidate will bring deep technical expertise in Snowflake, hands-on experience with DBT (Data Build Tool), and a collaborative mindset for working across data, analytics, and business teams.

Posted 2 months ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Kochi, Bengaluru

Work from Office

Job Summary: We are seeking a highly skilled and motivated Machine Learning Engineer with a strong foundation in programming and machine learning, hands-on experience with AWS Machine Learning services (especially SageMaker), and a solid understanding of Data Engineering and MLOps practices. You will be responsible for designing, developing, deploying, and maintaining scalable ML solutions in a cloud-native environment.

Key Responsibilities:
- Design and implement machine learning models and pipelines using AWS SageMaker and related services.
- Develop and maintain robust data pipelines for training and inference workflows.
- Collaborate with data scientists, engineers, and product teams to translate business requirements into ML solutions.
- Implement MLOps best practices including CI/CD for ML, model versioning, monitoring, and retraining strategies.
- Optimize model performance and ensure scalability and reliability in production environments.
- Monitor deployed models for drift, performance degradation, and anomalies.
- Document processes, architectures, and workflows for reproducibility and compliance.

Required Skills & Qualifications:
- Strong programming skills in Python and familiarity with ML libraries (e.g., scikit-learn, TensorFlow, PyTorch); see the sketch after this list.
- Solid understanding of machine learning algorithms, model evaluation, and tuning.
- Hands-on experience with AWS ML services, especially SageMaker, S3, Lambda, Step Functions, and CloudWatch.
- Experience with data engineering tools (e.g., Apache Airflow, Spark, Glue) and workflow orchestration.
- Proficiency in MLOps tools and practices (e.g., MLflow, Kubeflow, CI/CD pipelines, Docker, Kubernetes).
- Familiarity with monitoring tools and logging frameworks for ML systems.
- Excellent problem-solving and communication skills.

Preferred Qualifications:
- AWS Certification (e.g., AWS Certified Machine Learning Specialty).
- Experience with real-time inference and streaming data.
- Knowledge of data governance, security, and compliance in ML systems.
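As a small illustration of the model-evaluation side of this role, a self-contained scikit-learn sketch. The synthetic dataset and the idea of an AUC gate before promotion are assumptions for demonstration; in the role described, data would come from a feature pipeline and training would typically run on SageMaker:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; a real pipeline would load curated features instead.
X, y = make_classification(n_samples=5_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Held-out AUC is a common quality gate before a model is versioned and deployed.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"holdout AUC: {auc:.3f}")
```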

Posted 2 months ago

Apply

6.0 - 10.0 years

15 - 30 Lacs

Gurugram

Work from Office

We are specifically looking for candidates with strong SQL skills, along with experience in Snowflake or Looker.

Posted 2 months ago

Apply

5.0 - 10.0 years

10 - 16 Lacs

Navi Mumbai, Mumbai (All Areas)

Work from Office

Designation: Senior Data Engineer
Experience: 5+ Years
Location: Navi Mumbai (Juinagar), Work from Office
Immediate joiners preferred.
Interview: Face-to-face (only a one-day process)

Job Description: We are looking for an experienced and results-driven Senior Data Engineer to join our Data Engineering team. In this role, you will design, develop, and maintain robust data pipelines and infrastructure that enable efficient data flow across our systems. As a senior contributor, you will also help define best practices, mentor junior team members, and contribute to the long-term vision of our data platform. You will work closely with cross-functional teams to deliver reliable, scalable, and high-performance data systems that support critical business intelligence and analytics initiatives.

Required Qualifications:
- Bachelor's degree in Computer Science, Information Systems, or a related field; Master's degree is a plus.
- 5+ years of experience in data warehousing, ETL development, and data modeling.
- Strong hands-on experience with one or more databases: Snowflake, Redshift, SQL Server, Oracle, Postgres, Teradata, BigQuery.
- Proficiency in SQL and scripting languages (e.g., Python, Shell).
- Deep knowledge of data modeling techniques and ETL frameworks.
- Excellent communication, analytical thinking, and troubleshooting skills.

Preferred Qualifications:
- Experience with modern data stack tools like dbt, Fivetran, Stitch, Looker, Tableau, or Power BI.
- Knowledge of data lakes, lakehouses, and real-time data streaming (e.g., Kafka).
- Agile/Scrum project experience and version control using Git.

Sincerely,
Sonia TS

Posted 2 months ago

Apply

7.0 - 12.0 years

10 - 20 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Skill: Data Engineer
Experience: 7+ Years
Location: Warangal, Bangalore, Chennai, Hyderabad, Mumbai, Pune, Delhi, Noida, Gurgaon, Kolkata, Jaipur, Jodhpur
Notice Period: Immediate - 15 Days

Job Description:
- Design & Build Data Pipelines: Develop scalable ETL/ELT workflows to ingest, transform, and load data into Snowflake using SQL, Python, or data integration tools (see the sketch after this list).
- Data Modeling: Create and optimize Snowflake schemas, tables, views, and materialized views to support business analytics and reporting needs.
- Performance Optimization: Tune Snowflake compute resources (warehouses), optimize query performance, and manage clustering and partitioning strategies.
- Additional areas: Data Quality & Validation, Security & Access Control, Automation & CI/CD, Monitoring & Troubleshooting, Documentation.
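A minimal sketch of the incremental Snowflake loading pattern described above, using the snowflake-connector-python package. The account, credentials, and table names are placeholder assumptions; real credentials belong in a secrets manager, not in code:

```python
import snowflake.connector

# Hypothetical connection details; supply real ones via a secrets manager.
conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="TRANSFORM_WH", database="ANALYTICS", schema="STAGING",
)

# Incremental ELT step: upsert staged delta rows into the target table.
MERGE_SQL = """
MERGE INTO analytics.core.dim_customer AS tgt
USING analytics.staging.customers_delta AS src
  ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN UPDATE SET
  tgt.email = src.email, tgt.updated_at = src.updated_at
WHEN NOT MATCHED THEN INSERT (customer_id, email, updated_at)
  VALUES (src.customer_id, src.email, src.updated_at)
"""

cur = conn.cursor()
try:
    cur.execute(MERGE_SQL)
    print(f"rows affected: {cur.rowcount}")
finally:
    cur.close()
    conn.close()
```

A MERGE like this is what keeps repeated pipeline runs idempotent: reprocessing the same delta updates existing rows instead of duplicating them.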

Posted 2 months ago

Apply

6.0 - 10.0 years

20 - 25 Lacs

Pune, Bengaluru, Delhi / NCR

Work from Office

Urgently hiring for Data Engineer - AEP for our esteemed client.
Location: PAN India

We are looking for experienced data modelers with SQL, ETL, and some development background to define new data schemas and data ingestion for Adobe Experience Platform customers. You will interface directly with enterprise customers and collaborate with internal teams.

Must Have:
- 6-9 years of strong experience with data transformation and ETL on large data sets.
- Experience with designing customer-centric datasets (i.e., CRM, Call Center, Marketing, Offline, Point of Sale, etc.).
- 4+ years of Data Modeling experience (i.e., Relational, Dimensional, Columnar, Big Data).
- 5+ years of complex SQL experience.
- Experience in advanced Data Warehouse concepts.
- Experience in industry ETL tools (i.e., Informatica, Unifi).
- Experience with business requirements definition and management, structured analysis, process design, and use case documentation.
- Exceptional organizational skills and the ability to multi-task simultaneous customer projects.
- Strong verbal and written communication skills to interface with the Sales team and lead customers to successful outcomes.
- Must be self-managed, proactive, and customer-focused.
- Degree in Computer Science, Information Systems, Data Science, or a related field.

Special consideration given for:
- Experience and knowledge with Adobe Experience Cloud solutions.
- Experience and knowledge with Digital Analytics or Digital Marketing.
- Experience in programming languages (Python, Java, or Bash scripting).
- Experience with Big Data technologies (i.e., Hadoop, Spark, Redshift, Snowflake, Hive, Pig, etc.).
- Experience as an enterprise technical or engineering consultant.

100% matching profiles can send a resume to neha.sahu@sonyocareers.com

Posted 2 months ago

Apply

7.0 - 12.0 years

15 - 27 Lacs

Pune

Hybrid

Notice Period: Immediate joiner

Responsibilities:
- Lead, develop, and support analytical pipelines to acquire, ingest, and process data from multiple sources.
- Debug, profile, and optimize integrations and ETL/ELT processes.
- Design and build data models to conform to our data architecture.
- Collaborate with various teams to deliver effective, high-value reporting solutions by leveraging an established DataOps delivery methodology.
- Continually recommend and implement process improvements and tools for data collection, analysis, and visualization.
- Address production support issues promptly, keeping stakeholders informed of status and resolutions.
- Partner closely with on- and offshore technical resources.
- Provide on-call support outside normal business hours as needed.
- Provide status updates to stakeholders; identify obstacles and seek assistance with enough lead time to ensure on-time delivery.
- Demonstrate technical ability, thoroughness, and accuracy in all assignments.
- Document and communicate proper operations, standards, policies, and procedures.
- Keep abreast of new tools and technologies related to our enterprise data architecture.
- Foster a positive work environment by promoting teamwork and open communication.

Skills/Qualifications:
- Bachelor's degree in computer science, preferably with a focus on data engineering.
- 6+ years of experience in data warehouse development, building and managing data pipelines in cloud computing environments.
- Strong proficiency in SQL and Python.
- Experience with Azure cloud services, including Azure Data Lake Storage, Data Factory, and Databricks.
- Expertise in Snowflake or similar cloud warehousing technologies.
- Experience with GitHub, including GitHub Actions.
- Familiarity with data visualization tools, such as Power BI or Spotfire.
- Excellent written and verbal communication skills.
- Strong team player with interpersonal skills to interact at all levels.
- Ability to translate technical information for both technical and non-technical audiences.
- Proactive mindset with a sense of urgency and initiative.
- Adaptability to changing priorities and needs.

If you are interested, share your updated resume at recruit5@focusonit.com. Also, please spread this message across your networks and contacts.

Posted 2 months ago

Apply

4.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a highly motivated and experienced Data Engineer, you will be responsible for designing, developing, and implementing solutions that enable seamless data integration across multiple cloud platforms. Your expertise in data lake architecture, Iceberg tables, and cloud compute engines like Snowflake, BigQuery, and Athena will ensure efficient and reliable data access for various downstream applications.

Your key responsibilities will include collaborating with stakeholders to understand data needs and define schemas, and designing and implementing data pipelines for ingesting, transforming, and storing data. You will also develop data transformation logic to make Iceberg tables compatible with the data access requirements of Snowflake, BigQuery, and Athena, as well as design and implement solutions for seamless data transfer and synchronization across different cloud platforms. Ensuring data consistency and quality across the data lake and target cloud environments will be crucial in your role. Additionally, you will analyze data patterns and identify performance bottlenecks in data pipelines, implement data optimization techniques to improve query performance and reduce data storage costs, and monitor data lake health to proactively address potential issues. Collaboration and communication with architects, leads, and other stakeholders to ensure data quality meets specific requirements will also be an essential part of your role.

To be successful in this position, you should have a minimum of 4 years of experience as a Data Engineer, strong hands-on experience with data lake architectures and technologies, proficiency in SQL and scripting languages, and experience with data governance and security best practices. Excellent problem-solving and analytical skills, strong communication and collaboration skills, and familiarity with cloud-native data tools and services are also required. Additionally, certifications in relevant cloud technologies will be beneficial.

In return, GlobalLogic offers exciting projects in industries like High-Tech, communication, media, healthcare, retail, and telecom. You will have the opportunity to collaborate with a diverse team of highly talented individuals in an open, laid-back environment. Work-life balance is prioritized with flexible work schedules, opportunities to work from home, and paid time off and holidays. Professional development opportunities include communication skills training, stress management programs, professional certifications, and technical and soft-skill training. GlobalLogic provides competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), extended maternity leave, annual performance bonuses, and referral bonuses. Fun perks such as sports events, cultural activities, food at subsidized rates, corporate parties, dedicated GL Zones, rooftop decks, and discounts at popular stores and restaurants are also part of the vibrant office culture at GlobalLogic.

About GlobalLogic: GlobalLogic is a leader in digital engineering, helping brands design and build innovative products, platforms, and digital experiences for the modern world. By integrating experience design, complex engineering, and data expertise, GlobalLogic helps clients accelerate their transition into tomorrow's digital businesses. Operating under Hitachi, Ltd., GlobalLogic contributes to driving innovation through data and technology for a sustainable society with a higher quality of life.

Posted 2 months ago

Apply

6.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

Skill: Data Engineer (Power BI)
Band in Infosys: 5
Role: Technology Lead
Qualification: B.E/B.Tech

Job Description: 6 to 10 years of relevant experience; able to fulfill a role of managing delivery, coaching team members, and leading best practices and procedures. Power BI, SSRS, Power BI Report Builder, AAS (SSAS Tabular Model), and MSBI, especially SQL Server development, SSIS development, and ETL development. Well experienced in creating data models, building Power BI reports on top of the models, and publishing them over Power BI services. Experience in creating workspaces.

Work Location: Pune, Hyderabad, Bhubaneswar

Posted 2 months ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Hyderabad

Work from Office

Looking for 4-15 years of experience as a Data Engineer. Strong experience in SQL, T-SQL, Azure Data Factory (ADF), and Databricks. Good to have: experience in SSIS and Python. Notice Period: Immediate. Email: sachin@assertivebs.com

Posted 2 months ago

Apply

4.0 - 8.0 years

10 - 20 Lacs

Hyderabad

Work from Office

- ETL Expertise: Hands-on expertise with ETL tools and logic, with a strong preference for IDMC.
- Application Development/Support: Demonstrated success in either application development or support roles.
- Python Proficiency: Strong understanding of Python, with practical coding experience.
- AWS: Comprehensive knowledge of AWS services and their applications.
- Airflow: Creating and managing Airflow DAG scheduling.
- Unix & SQL: Solid command of Unix commands, shell scripting, and writing efficient SQL scripts.
- Analytical & Troubleshooting Skills: Exceptional ability to analyze data and resolve complex issues.
- Development Tasks: Proven capability to execute a variety of development activities with efficiency.
- Insurance Domain Knowledge: Familiarity with the insurance sector is highly advantageous.
- Production Data Management: Significant experience in managing and processing production data.
- Work Schedule Flexibility: Open to working in any shift, including 24/7 and production support, as required.

Posted 2 months ago

Apply

4.0 - 6.0 years

5 - 12 Lacs

Pune

Work from Office

- Expert in Python coding.
- Review the existing code and optimize it.
- Text parsers and scrapers.
- Review the pipelines coming in.
- Experience in ADF pipelines.
- Ability to guide the team with tasks on a regular basis and support them technically.
- Support other leads and coordinate for faster deliverables.
- Explore how AI can be brought in; use/create LLMs to improve GTM.

Posted 2 months ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Hyderabad, Chennai, Coimbatore

Hybrid

Our client is a global IT service & consulting organization.

Role: Data Software Engineer
Experience: 5-12 years
Skills: Python, Spark, Azure Databricks/GCP/AWS
Location: Hyderabad, Chennai, Coimbatore
Notice period: Immediate to 60 days
F2F interview on 12th July, Saturday

Posted 2 months ago

Apply

6.0 - 10.0 years

10 - 17 Lacs

Bengaluru

Remote

Job Summary: We are looking for a highly skilled Data Engineer with 6+ years of experience to join our team on a contract basis. The ideal candidate will have a strong background in data engineering with deep expertise in DBT (Data Build Tool) and Apache Airflow for building robust data pipelines. You will be responsible for designing, developing, and optimizing data workflows to support analytics and business intelligence initiatives.

Key Responsibilities:
- Design, build, and maintain scalable and reliable data pipelines using DBT and Airflow (see the orchestration sketch after this list).
- Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements.
- Optimize data transformation workflows to improve efficiency, quality, and maintainability.
- Implement and maintain data quality checks and validation logic.
- Monitor pipeline performance, troubleshoot failures, and ensure timely data delivery.
- Develop and maintain documentation for data processes, models, and flows.
- Work with cloud data warehouses (e.g., Snowflake, BigQuery, Redshift) for storage and transformation.
- Support ETL/ELT jobs and integrate data from multiple sources (APIs, databases, flat files).
- Ensure best practices in version control, CI/CD, and automation of data workflows.

Required Skills and Experience:
- 6+ years of hands-on experience in data engineering.
- Strong proficiency with DBT for data modeling and transformation.
- Experience with Apache Airflow for orchestration and workflow scheduling.
- Solid understanding of SQL and relational data modeling principles.
- Experience working with modern cloud data platforms (e.g., Snowflake, BigQuery, Databricks, or Redshift).
- Proficiency in Python or a similar scripting language for data manipulation and automation.
- Familiarity with version control systems like Git and collaborative development workflows.
- Experience with CI/CD tools and automated testing frameworks for data pipelines.
- Excellent problem-solving skills and ability to work independently.
- Strong communication and documentation skills.

Nice to Have:
- Experience with streaming data platforms (e.g., Kafka, Spark Streaming).
- Knowledge of data governance, security, and compliance best practices.
- Experience with dashboarding tools (e.g., Looker, Tableau, Power BI) for data validation.
- Exposure to agile development methodologies.

Contract Terms:
- Commitment: Full-time, 8 hours/day
- Duration: 6 months, with possible extension
- Location: Remote
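For context, a minimal sketch of the DBT-plus-Airflow orchestration pattern this role centers on; the DAG id, project path, and schedule are illustrative assumptions, and the BashOperator approach is just one common way to invoke dbt from Airflow:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical dbt project location; point this at the real project directory.
DBT_DIR = "/opt/dbt/analytics_project"

with DAG(
    dag_id="dbt_daily_build",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Build the models, then run dbt's tests against the built tables.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"dbt run --project-dir {DBT_DIR} --profiles-dir {DBT_DIR}",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"dbt test --project-dir {DBT_DIR} --profiles-dir {DBT_DIR}",
    )
    dbt_run >> dbt_test  # failing tests halt the DAG before consumers see bad data
```

Running tests as a downstream task is the data quality check the responsibilities mention: a failed dbt test marks the run failed, which keeps stale-but-valid data in place instead of publishing broken models.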

Posted 2 months ago

Apply

2.0 - 5.0 years

0 - 3 Lacs

Jaipur

Work from Office

Job Role: Data Engineer
Job Location: Jaipur
Job Type: Permanent
Experience Required: 2-5 Years

As a Data Engineer, you will play a critical role in designing, developing, and maintaining our data pipelines and infrastructure. You will work closely with our data scientists, analysts, and other stakeholders to ensure data is accurate, timely, and accessible. Your contributions will directly impact our data-driven decision-making and support our growth.

Key Responsibilities:
- Data Pipeline Development: Design, develop, and implement data pipelines using Azure Data Factory and Databricks to support the ingestion, transformation, and movement of data.
- ETL Processes: Develop and optimize ETL (Extract, Transform, Load) processes to ensure efficient data flow and transformation.
- Data Lake Management: Develop and maintain Azure Data Lake solutions, ensuring efficient storage and retrieval of large datasets.
- Data Warehousing: Work with Azure Synapse Analytics to build and manage scalable data warehousing solutions that enable advanced analytics and reporting.
- Data Integration: Integrate various data sources into Microsoft Fabric, ensuring data consistency, quality, and accessibility across different platforms.
- Performance Optimization: Optimize data processing workflows and storage solutions to improve performance and reduce costs.
- Database Management: Manage and optimize databases (SQL and NoSQL) to support high-performance queries and data storage requirements.
- Data Quality: Implement data quality checks and monitoring to ensure accuracy and consistency of data.
- Collaboration: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver actionable insights.
- Documentation: Create and maintain comprehensive documentation for data processes, pipelines, infrastructure, architecture, and best practices.
- Troubleshooting and Support: Identify and resolve issues in data pipelines, data lakes, and warehousing solutions, providing timely support and maintenance.

Qualifications:
- Experience: 2-4 years of experience in data engineering or a related field.
- Technical Skills:
  - Proficiency with Azure Data Factory, Azure Synapse Analytics, Databricks, and Azure Data Lake.
  - Experience with Microsoft Fabric is a plus.
  - Strong SQL skills and experience with data warehousing (DWH) concepts.
  - Knowledge of data modeling, ETL processes, and data integration.
  - Experience with relational databases (e.g., MS SQL, PostgreSQL, MySQL).
  - Hands-on experience with ETL tools and frameworks (e.g., Apache Airflow, Talend).
  - Knowledge of big data technologies (e.g., Hadoop, Spark) is a plus.
  - Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud) and associated data services (e.g., S3, Redshift, BigQuery).
  - Familiarity with data visualization tools (e.g., Power BI) and experience with programming languages such as Python, Java, or Scala.
  - Experience with schema design and dimensional data modeling.
- Analytical Skills: Strong problem-solving abilities and attention to detail.
- Communication: Excellent verbal and written communication skills, with the ability to explain technical concepts to non-technical stakeholders.
- Education: Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field. Advanced degrees or certifications are a plus.

Thanks & Regards,
Sulabh Tailang
HR - Talent Acquisition Manager | Celebal Technologies | +91-9448844746
Sulabh.tailang@celebaltech.com | LinkedIn: sulabhtailang | Twitter: Ersulabh
Website: www.celebaltech.com

Posted 2 months ago

Apply