271 Data Engineer Jobs - Page 5

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

10.0 - 17.0 years

25 - 30 Lacs

Mumbai, Thane

Work from Office

Manage end-to-end deliveries for the Data Engineering, EDW, and Data Lake platforms. Data modelling experience required. 3+ years of experience writing complex SQL queries, procedures, views, functions, and other database objects. Minimum 3 years of experience in cloud computing required.

Posted 1 month ago

Apply

6.0 - 10.0 years

12 - 20 Lacs

Pune, Delhi / NCR, Mumbai (All Areas)

Hybrid

Role & responsibilities (6+ years of experience required). Job Description: Enterprise Business Technology is on a mission to support and create enterprise software for our organization. We're a highly collaborative team that interlocks with corporate functions such as Finance and Product to deliver value with innovative technology solutions. Each day, thousands of people rely on Enlyte's technology and services to help their customers during challenging life events. We're looking for a remote Senior Data Analytics Engineer for our Corporate Analytics team.

Opportunity: Technical lead for our corporate analytics practice using dbt, Dagster, Snowflake, Power BI, SQL, and Python.

Responsibilities: Build data pipelines for our data warehouse in Python, working with APIs to source data. Build Power BI reports and dashboards associated with this process. Contribute to our strategy for new data pipelines and data engineering approaches. Maintain a medallion-based architecture for data analysis with Kimball modelling. Participate in daily scrum calls and follow the agile SDLC. Create meaningful documentation of your work. Follow organizational best practices for dbt and write maintainable code.

Qualifications: 5+ years of professional experience as a Data Engineer. Strong dbt experience (3+ years) and knowledge of the modern data stack. Strong experience with Snowflake (3+ years). Experience using Dagster and running complex pipelines (1+ year). Some Python experience; experience with Git and Azure DevOps. Experience with data modeling in Kimball and medallion-based structures.
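For readers sizing up the stack this role names, here is a minimal sketch of two dependent Dagster assets of the kind such a pipeline might define. The asset names and the pandas stand-in data are illustrative assumptions, not from the posting; a real pipeline would source from APIs and land in Snowflake via an IO manager.

    import dagster as dg
    import pandas as pd

    @dg.asset
    def raw_invoices() -> pd.DataFrame:
        # Bronze layer: production code would call a source API; stub data here.
        return pd.DataFrame({"id": [1, 2, 3], "amount": [120.0, 80.5, 42.0]})

    @dg.asset
    def invoice_summary(raw_invoices: pd.DataFrame) -> pd.DataFrame:
        # Gold layer: aggregate before exposing to Power BI.
        return pd.DataFrame({"total_amount": [raw_invoices["amount"].sum()]})

    if __name__ == "__main__":
        # Materialize both assets in-process for a quick local check.
        dg.materialize([raw_invoices, invoice_summary])

Dagster infers the dependency from the parameter name, so invoice_summary runs only after raw_invoices has materialized.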

Posted 1 month ago

Apply

4.0 - 8.0 years

7 - 17 Lacs

Chennai

Work from Office

Dear Candidate, we have a walk-in drive for the Big Data Developer position this Saturday. Skill: Big Data Developer. Primary skills: Python + PySpark or Python + Scala. Experience: 4-8 years. Location: Chennai. Notice period: immediate to 15 days only. Mode of discussion: face-to-face. Date of interview: 5-Jul-25 (Saturday). Timing: 9:30 AM. Venue: Aspire Systems office, Siruseri. If interested, kindly share your resume to saranya.raghu@aspiresys.com. Regards, Saranya Raghu

Posted 1 month ago

Apply

6.0 - 11.0 years

8 - 16 Lacs

Hyderabad, Pune, Chennai

Hybrid

Data Engineer with good experience in Azure Databricks and Python. Must have: Databricks, Python, Azure. Good to have: ADF. The candidate must be proficient in Databricks.

Posted 1 month ago

Apply

10.0 - 18.0 years

20 - 35 Lacs

Pune, Chennai/Gurgaon, Hyderabad/Bengaluru

Work from Office

Looking for a Data Engineer. Skills: Data Engineering, AWS, GCP. Notice period: 0-30 days. Location: Hyderabad, Bangalore, Chennai, Pune, Gurgaon.

Posted 1 month ago

Apply

6.0 - 9.0 years

20 - 25 Lacs

Bengaluru

Hybrid

Cloud, Artificial Intelligence, Data Engineering. Skills to evaluate: Cloud, Artificial Intelligence, Data Engineering. Experience: 6 to 10 years. Location: Bengaluru.

Job Summary: We are looking for an experienced Cloud AI and Data Engineer with a strong background in cloud-native data solutions, AI/ML engineering, and emerging Generative AI (GenAI) technologies. The ideal candidate will have 6-8 years of hands-on experience in building robust data platforms, deploying scalable ML models, and integrating GenAI solutions across cloud environments.

Key Responsibilities: Build and maintain scalable data pipelines and infrastructure for AI and analytics using cloud-native tools (e.g., AWS Glue, Azure Data Factory, GCP Dataflow). Design and implement production-ready GenAI applications using services like Amazon Bedrock, Azure OpenAI, or Google Vertex AI. Develop and deploy AI/ML models, including transformer-based and LLM (Large Language Model) solutions. Integrate GenAI with enterprise workflows using APIs, orchestration layers, and retrieval-augmented generation (RAG) patterns. Collaborate with data scientists, product managers, and platform teams to operationalize AI-driven insights and GenAI capabilities. Build prompt engineering frameworks, evaluate output quality, and optimize token usage and latency for GenAI deployments. Set up monitoring, drift detection, and governance mechanisms for both traditional and GenAI models. Implement CI/CD pipelines for data and AI solutions with automated testing and rollback strategies. Ensure cloud solutions adhere to data privacy, security, and regulatory compliance standards.

Required Skills & Qualifications: 6-8 years of experience in data engineering or machine learning engineering in cloud environments (AWS, Azure, or GCP). Proficiency in Python and SQL; familiarity with PySpark, Java, or Scala is a plus. Experience working with GenAI models such as GPT, Claude, or custom LLMs via cloud services (e.g., Bedrock, Azure OpenAI, Hugging Face). Hands-on with prompt design, fine-tuning, vector stores (e.g., FAISS, Pinecone), and knowledge-base integrations. Experience with MLOps and LLMOps tools (e.g., MLflow, LangChain, SageMaker Pipelines, Weights & Biases). Solid understanding of containerization (Docker), orchestration (Kubernetes), and microservices. Knowledge of data lake/warehouse platforms such as S3, Snowflake, BigQuery, or Redshift. Familiarity with governance frameworks, access control, and responsible AI practices.

Preferred Qualifications: Certifications in cloud AI/ML platforms (e.g., AWS Certified Machine Learning, Azure AI Engineer). Experience building RAG systems, vector database search, and multi-turn conversational agents. Exposure to real-world GenAI use cases like code generation, chatbots, document summarization, or knowledge extraction. Knowledge of OpenAPI, JSON schema validation, and API lifecycle tools.
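As a concrete illustration of the retrieval half of the RAG pattern this posting mentions, here is a minimal FAISS sketch. The embedding dimension and the random vectors are placeholders for real embeddings produced by a model such as those the posting names.

    import faiss
    import numpy as np

    dim = 8                                    # toy embedding size
    corpus = np.random.rand(100, dim).astype("float32")

    index = faiss.IndexFlatL2(dim)             # exact L2 nearest-neighbour index
    index.add(corpus)                          # index the passage embeddings

    query = np.random.rand(1, dim).astype("float32")
    distances, ids = index.search(query, 3)    # top-3 passages to put in the prompt
    print(ids[0])                              # row ids of the retrieved passages

In a full RAG loop, the retrieved passages would be stuffed into the LLM prompt alongside the user question before generation.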

Posted 1 month ago

Apply

3.0 - 6.0 years

15 - 20 Lacs

Bengaluru

Hybrid

Description: Role: Data Engineer/ETL Developer - Talend/Power BI. Job Description: 1. Study, analyze, and understand business requirements in the context of business intelligence, and provide end-to-end solutions. 2. Design and implement ETL pipelines with data quality and integrity across platforms like Talend Enterprise and Informatica. 3. Load data from heterogeneous sources such as Oracle, MS SQL, file systems, FTP services, REST APIs, etc. 4. Design and map data models to turn raw data into meaningful insights and build a data catalog. 5. Develop strong data documentation covering algorithms, parameters, and models. 6. Analyze historical and current data for better decision making. 7. Make essential technical changes to improve present business intelligence systems. 8. Optimize ETL processes for improved performance; monitor ETL jobs and troubleshoot issues. 9. Lead and oversee team deliverables, ensuring development best practices are followed. 10. Participate in or lead requirements gathering and analysis. Required skillset and experience: 1. Up to 3 years of overall working experience, preferably in SQL and ETL (Talend). 2. Must have 1+ years of experience in Talend Enterprise/Open Studio and related tools like Talend API, Talend Data Catalog, TMC, TAC, etc. 3. Must have an understanding of database design and data modeling. 4. Hands-on experience in a coding language (Java, Python, etc.). Secondary skillset/good to have: 1. Experience in a BI tool like MS Power BI. 2. Ability to use Power BI to build interactive and visually appealing dashboards and reports. Required personal and interpersonal skills: strong analytical skills; good communication skills, both written and verbal; highly motivated and result-oriented; a self-driven, independent work ethic that drives internal and external accountability; ability to interpret instructions for executives and technical resources; advanced problem-solving skills for complex distributed applications; experience working in a multicultural environment.

Posted 1 month ago

Apply

5.0 - 10.0 years

8 - 17 Lacs

Coimbatore

Work from Office

Position: Data Engineer. Experience: 5-10 years. Location: Coimbatore (WFO). Notice period: immediate. Job type: full time. Skills: Data Engineering, Spark, Scala, Python, Big Data. Job Description: Experience in Big Data technologies (Hadoop, Spark, NiFi, Impala). 5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, distributed data pipelines. High proficiency in Scala/Java and Spark for applied large-scale data processing. Expertise with big data technologies, including Spark, Data Lake, and Hive. Solid understanding of batch and streaming data processing techniques. Proficient knowledge of the data lifecycle management process, including data collection, access, use, storage, transfer, and deletion. Expert-level ability to write complex, optimized SQL queries across extensive data volumes. Experience with HDFS, NiFi, and Kafka. Experience with Apache Ozone, Delta tables, Databricks, Axon (Kafka), Spring Batch, and Oracle DB. Familiarity with Agile methodologies. Obsession with service observability, instrumentation, monitoring, and alerting. Knowledge of or experience in architectural best practices for building data lakes.
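A minimal PySpark batch sketch in the spirit of the pipelines this role describes; the input path, column names, and output location are assumptions for illustration, not taken from the posting.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily-revenue").getOrCreate()

    # Hypothetical source files on HDFS; a real job might read Hive or Kafka.
    orders = spark.read.option("header", True).csv("hdfs:///data/orders")

    daily = (orders
             .withColumn("amount", F.col("amount").cast("double"))
             .groupBy("order_date")
             .agg(F.sum("amount").alias("revenue")))

    # Write a columnar output for downstream consumers.
    daily.write.mode("overwrite").parquet("hdfs:///curated/daily_revenue")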

Posted 1 month ago

Apply

8.0 - 12.0 years

12 - 22 Lacs

Hyderabad, Bengaluru

Work from Office

Role & responsibilities: 8-12 years of professional work experience in a relevant field. Proficient in Azure Databricks, ADF, Delta Lake, SQL Data Warehouse, Unity Catalog, MongoDB, and Python. Experience with or prior knowledge of semi-structured data, Structured Streaming, Azure Synapse Analytics, data lakes, and data warehouses. Proficient in creating Azure Data Factory pipelines for ETL/ELT processing: copy activity, custom Azure development, etc. Lead a technical team of 4-6 resources. Prior knowledge of Azure DevOps and CI/CD processes, including GitHub. Good knowledge of SQL and Python for data manipulation, transformation, and analysis; knowledge of Power BI would be beneficial. Understand business requirements to set functional specifications for reporting applications.
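Since the posting calls out Structured Streaming and Delta Lake together, here is a minimal hedged sketch of the combination. The built-in rate source stands in for the real feed, and the Delta sink assumes a Databricks runtime or the delta-spark package; paths are placeholders.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("stream-to-delta").getOrCreate()

    # Built-in rate source as a stand-in for the real event stream.
    events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

    query = (events.writeStream
             .format("delta")                                 # Delta Lake sink
             .option("checkpointLocation", "/tmp/chk/events") # exactly-once bookkeeping
             .outputMode("append")
             .start("/tmp/delta/events"))

    query.awaitTermination(30)  # run briefly for the demo, then exit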

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Pune

Work from Office

Role & responsibilities: Ideally, we are looking for a 60:40 mix, with stronger capabilities on the Data Engineering side, along with working knowledge of Machine Learning and Data Science concepts, especially candidates who can pick up tasks in Agentic AI, OpenAI, and related areas as required in the future.

Posted 1 month ago

Apply

6.0 - 11.0 years

10 - 20 Lacs

Hyderabad, Bangalore Rural, Bengaluru

Work from Office

We are seeking a highly skilled Snowflake Developer to join our team in Bangalore. The ideal candidate will have extensive experience in designing, implementing, and managing Snowflake-based data solutions. This role involves developing data architectures and ensuring the effective use of Snowflake to drive business insights and innovation. Key Responsibilities: Design and implement scalable, efficient, and secure Snowflake solutions to meet business requirements. Develop data architecture frameworks, standards, and principles, including modeling, metadata, security, and reference data. Implement Snowflake-based data warehouses, data lakes, and data integration solutions. Manage data ingestion, transformation, and loading processes to ensure data quality and performance. Collaborate with business stakeholders and IT teams to develop data strategies and ensure alignment with business goals. Drive continuous improvement by leveraging the latest Snowflake features and industry trends. Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field. 8+ years of experience in data architecture, data engineering, or a related field. Extensive experience with Snowflake, including designing and implementing Snowflake-based solutions. Must have exposure to working with Airflow. Proven track record of contributing to data projects and working in complex environments. Familiarity with cloud platforms (e.g., AWS, GCP) and their data services. Snowflake certification (e.g., SnowPro Core, SnowPro Advanced) is a plus.
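Because the role pairs Snowflake with Airflow, here is a minimal DAG sketch assuming recent Airflow (2.4+) and the snowflake-connector-python package. The account, credentials, stage, and table names are placeholders; production code would pull them from an Airflow connection or secrets backend rather than inline strings.

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator
    import snowflake.connector

    def copy_orders():
        # Placeholder credentials; use an Airflow connection in practice.
        conn = snowflake.connector.connect(account="xy12345", user="etl_user",
                                           password="***", warehouse="LOAD_WH")
        try:
            # Load staged files into a warehouse table.
            conn.cursor().execute("COPY INTO analytics.orders FROM @orders_stage")
        finally:
            conn.close()

    with DAG(dag_id="snowflake_ingest", start_date=datetime(2025, 1, 1),
             schedule="@daily", catchup=False) as dag:
        PythonOperator(task_id="copy_orders", python_callable=copy_orders)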

Posted 1 month ago

Apply

4.0 - 6.0 years

5 - 15 Lacs

Bengaluru

Work from Office

Dear Candidate, we are hiring Data Engineers for one of our prestigious clients, a product-based company. Job details: Position: Data Engineer. Work location: Bangalore. Experience: 4 to 7 years. Shift: day shift. Qualification: B.E / B.Tech / M.Tech. Notice period: immediate to 20 days. Interview mode: only face-to-face (F2F) interviews in Bangalore for technical discussions. Job Description: 4 to 6 years of experience in data engineering and integration projects. Hands-on experience with cloud-based data integration platforms (IICS CDI on the IDMC platform) - must have. Exposure to various source systems such as SFDC, Marketo, Azure, AWS, and relational databases like Oracle and SQL Server - good to have. Intermediate skills in Unix shell scripting and Python. Working knowledge of Jupyter Notebooks, Databricks, ADLS Gen2, and SQL Data Warehouse. Strong understanding of data modeling, including conceptual, logical, and physical models. Experience in Change Data Capture (CDC) and Slowly Changing Dimensions (SCD Type 1 & 2). Advanced SQL/PLSQL skills - must be able to work with complex queries. Proven track record of delivering high-quality technical solutions. Exposure to Agile/Scrum methodologies - nice to have. Knowledge of star and snowflake schemas, and experience with modeling tools like Erwin and Visio. Strong communication skills and ability to work in a global onshore-offshore model. Self-driven, organized, and capable of handling multiple priorities in a dynamic environment. To apply: please share your updated resume to mary@jyopa.com with the following details: current company, current location, CTC, expected CTC, notice period. Note: only candidates who are available for face-to-face interviews in Bangalore will be considered. Regards, Reshma Mary L. 6361518594 (WhatsApp)
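The SCD Type 2 requirement above is easiest to see in code: history is kept by closing the current row and inserting a new version. A minimal sketch using SQLite as a stand-in warehouse; the dim_customer table and its columns are invented for illustration.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE dim_customer (
            customer_id INTEGER, city TEXT,
            valid_from TEXT, valid_to TEXT, is_current INTEGER);
        INSERT INTO dim_customer VALUES (1, 'Pune', '2024-01-01', NULL, 1);
    """)

    def scd2_apply(customer_id, new_city, load_date):
        # Close the current version only when the tracked attribute changed.
        cur = conn.execute(
            "UPDATE dim_customer SET valid_to = ?, is_current = 0 "
            "WHERE customer_id = ? AND is_current = 1 AND city <> ?",
            (load_date, customer_id, new_city))
        if cur.rowcount:  # a row was closed, so insert the new version
            conn.execute("INSERT INTO dim_customer VALUES (?, ?, ?, NULL, 1)",
                         (customer_id, new_city, load_date))

    scd2_apply(1, "Bengaluru", "2025-07-01")
    print(conn.execute("SELECT * FROM dim_customer").fetchall())

SCD Type 1, by contrast, would simply overwrite the city in place and keep no history.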

Posted 1 month ago

Apply

5.0 - 10.0 years

25 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Develop and maintain data pipelines, ETL/ELT processes, and workflows to ensure the seamless integration and transformation of data. Architect, implement, and optimize scalable data solutions. Required candidate profile: Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver actionable insights. Partner with cloud architects and DevOps teams.

Posted 1 month ago

Apply

5.0 - 7.0 years

15 - 25 Lacs

Pune, Bengaluru

Hybrid

Job role & responsibilities: Responsible for architecting, building, and deploying data systems, pipelines, etc. Responsible for designing and implementing agile, scalable, and cost-efficient solutions on cloud data services. Responsible for design, implementation, development, and migration; migrate data from traditional database systems to the cloud environment. Architect and implement ETL and data movement solutions. Technical skill, qualification & experience required: 5-7 years of experience in Data Engineering, Azure cloud data engineering, Azure Databricks, Data Factory, PySpark, SQL, and Python. Hands-on experience in Azure Databricks, Data Factory, PySpark, and SQL. Proficient in cloud services (Azure). Strong hands-on experience working with streaming datasets. Hands-on expertise in data refinement using PySpark and Spark SQL. Familiarity with building datasets using Scala. Familiarity with tools such as Jira and GitHub. Experience leading agile scrum, sprint planning, and review sessions. Good communication and interpersonal skills. Comfortable working in a multidisciplinary team within a fast-paced environment. Immediate joiners will be preferred.

Posted 1 month ago

Apply

5.0 - 8.0 years

13 - 20 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Work from Office

Job Description: Develop and implement ETL processes using Python and SQL to extract, transform, and load data from various sources into our data warehouse. Optimize and maintain existing ETL workflows and data pipelines to improve performance and scalability. Design, develop, and maintain efficient, reusable, and reliable Python code, and support Python version upgrade activities. Collaborate with cross-functional teams to understand data requirements and ensure data integrity and quality. Monitor and troubleshoot data processing systems to ensure timely and accurate data delivery. Develop and maintain documentation related to ETL processes, data models, and workflows. Participate in code reviews and provide constructive feedback to team members. Stay up to date with industry trends and emerging technologies to continuously improve our data engineering practices. Skills & Qualifications: Bachelor's degree in IT, computer science, computer engineering, or similar. Proven experience as a Data Engineer or ETL Developer, with a focus on Python and SQL. Minimum 5 years of experience in ETL. Proficiency in programming languages such as Python for data engineering tasks; should be able to support Python version upgrade activities. Strong understanding of ETL concepts and data warehousing principles. Proficiency in writing complex SQL queries and optimizing database performance. Familiarity with cloud platforms such as Azure or OCI is a plus. Demonstrated experience in designing and delivering data platforms for Business Intelligence and Data Warehousing. Experience with version control systems such as Git. Familiarity with Agile methodology and Agile working environments. Ability to work directly with POs, BAs, and Architects.
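A minimal extract-transform-load round trip of the kind this description asks for; SQLite stands in for both the source system and the warehouse, and the table names are illustrative assumptions.

    import sqlite3
    import pandas as pd

    source = sqlite3.connect(":memory:")     # stand-in source database
    warehouse = sqlite3.connect(":memory:")  # stand-in warehouse

    source.executescript(
        "CREATE TABLE sales (region TEXT, amount REAL);"
        "INSERT INTO sales VALUES ('south', 10.0), ('south', 5.5), ('north', 7.0);")

    # Extract with SQL, transform in pandas, load with to_sql.
    df = pd.read_sql("SELECT region, amount FROM sales", source)
    summary = df.groupby("region", as_index=False)["amount"].sum()
    summary.to_sql("sales_summary", warehouse, index=False, if_exists="replace")

    print(pd.read_sql("SELECT * FROM sales_summary", warehouse))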

Posted 1 month ago

Apply

7.0 - 11.0 years

30 - 35 Lacs

Bengaluru

Work from Office

1. The resource should have knowledge of Data Warehouses and Data Lakes. 2. Should be aware of building data pipelines using PySpark. 3. Should be strong in SQL skills. 4. Should have exposure to the AWS environment and services like S3, EC2, EMR, Athena, Redshift, etc. (see the sketch below). 5. Good to have: programming skills in Python.
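To make the AWS exposure in point 4 concrete, a small boto3 sketch that submits an Athena query; the region, database, and S3 results bucket are placeholders, and valid AWS credentials are assumed.

    import boto3

    athena = boto3.client("athena", region_name="ap-south-1")

    # Placeholder database and results bucket.
    resp = athena.start_query_execution(
        QueryString="SELECT order_date, SUM(amount) FROM orders GROUP BY order_date",
        QueryExecutionContext={"Database": "analytics"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    print(resp["QueryExecutionId"])  # poll get_query_execution with this id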

Posted 1 month ago

Apply

6.0 - 11.0 years

35 - 50 Lacs

Pune, Gurugram, Delhi / NCR

Hybrid

Role: Snowflake Data Engineer. Mandatory skills: #Snowflake, #AZURE, #Datafactory, SQL, Python, #DBT / #Databricks. Location (hybrid): Bangalore, Hyderabad, Chennai, Pune, Gurugram & Noida. Budget: up to 50 LPA. Notice: immediate to 30 days (serving notice). Experience: 6-11 years. Key Responsibilities: Design and develop ETL/ELT pipelines using Azure Data Factory, Snowflake, and DBT. Build and maintain data integration workflows from various data sources to Snowflake. Write efficient and optimized SQL queries for data extraction and transformation. Work with stakeholders to understand business requirements and translate them into technical solutions. Monitor, troubleshoot, and optimize data pipelines for performance and reliability. Maintain and enforce data quality, governance, and documentation standards. Collaborate with data analysts, architects, and DevOps teams in a cloud-native environment. Must-Have Skills: Strong experience with Azure cloud platform services. Proven expertise in Azure Data Factory (ADF) for orchestrating and automating data pipelines. Proficiency in SQL for data analysis and transformation. Hands-on experience with Snowflake and SnowSQL for data warehousing. Practical knowledge of DBT (Data Build Tool) for transforming data in the warehouse. Experience working in cloud-based data environments with large-scale datasets. Good-to-Have Skills: Experience with Azure Data Lake, Azure Synapse, or Azure Functions. Familiarity with Python or PySpark for custom data transformations. Understanding of CI/CD pipelines and DevOps for data workflows. Exposure to data governance, metadata management, or data catalog tools. Knowledge of business intelligence tools (e.g., Power BI, Tableau) is a plus. Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field. 5+ years of experience in data engineering roles using Azure and Snowflake. Strong problem-solving, communication, and collaboration skills.

Posted 1 month ago

Apply

7.0 - 12.0 years

30 - 45 Lacs

Noida, Pune, Gurugram

Hybrid

Role: Lead Data Engineer. Experience: 7-12 years. Must have: 7+ years of relevant experience in Data Engineering and delivery. 7+ years of relevant work experience in Big Data concepts. Worked on cloud implementations. Experience in Snowflake, SQL, and AWS (Glue, EMR, S3, Aurora, RDS, AWS architecture). Good experience with AWS cloud and microservices: AWS Glue, S3, Python, and PySpark. Good aptitude, strong problem-solving abilities, analytical skills, and the ability to take ownership as appropriate. Should be able to do coding, debugging, performance tuning, and deployment of apps to the production environment. Experience working in Agile methodology. Ability to learn, and help the team learn, new technologies quickly. Excellent communication and coordination skills. Good to have: Experience in DevOps tools (Jenkins, Git, etc.) and practices, continuous integration, and delivery (CI/CD) pipelines. Spark, Python, SQL (exposure to Snowflake), Big Data concepts, AWS Glue. Worked on cloud implementations (migration, development, etc.). Role & responsibilities: Be accountable for the delivery of the project within the defined timelines with good quality. Work with the clients and offshore leads to understand requirements, come up with high-level designs, and complete development and unit testing activities. Keep all stakeholders updated about the task status/risks/issues, if any. Keep all stakeholders updated about the project status/risks/issues, if any. Work closely with management wherever and whenever required, to ensure smooth execution and delivery of the project. Guide the team technically and give the team direction on how to plan, design, implement, and deliver the projects. Education: BE/B.Tech from a reputed institute.

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Bengaluru

Remote

Greetings from tsworks Technologies India Pvt Ltd. We are hiring for a Sr. Data Engineer / Lead Data Engineer; if you are interested, please share your CV to mohan.kumar@tsworks.io. About this role: tsworks Technologies India Private Limited is seeking driven and motivated Senior Data Engineers to join its Digital Services team. You will get hands-on experience with projects employing industry-leading technologies. This would initially be focused on the operational readiness and maintenance of existing applications and would transition into a build-and-maintain role in the long run. Position: Senior Data Engineer / Lead Data Engineer. Experience: 5 to 11 years. Location: Bangalore, India / Remote. Mandatory required qualifications: Strong proficiency in Azure services such as Azure Data Factory, Azure Databricks, Azure Synapse Analytics, Azure Storage, etc. Expertise in DevOps and CI/CD implementation. Excellent communication skills. Skills & knowledge: Bachelor's or master's degree in computer science, engineering, or a related field. 5 to 10 years of experience in information technology, designing, developing, and executing solutions. 3+ years of hands-on experience designing and executing data solutions on Azure cloud platforms as a Data Engineer. Strong proficiency in Azure services such as Azure Data Factory, Azure Databricks, Azure Synapse Analytics, Azure Storage, etc. Familiarity with the Snowflake data platform is good to have. Hands-on experience in data modelling and in batch and real-time pipelines using Python, Java, or JavaScript, and experience working with RESTful APIs, are required. Expertise in DevOps and CI/CD implementation. Hands-on experience with SQL and NoSQL databases. Hands-on experience in data modelling, implementation, and management of OLTP and OLAP systems. Experience with data modelling concepts and practices. Familiarity with data quality, governance, and security best practices. Knowledge of big data technologies such as Hadoop, Spark, or Kafka. Familiarity with machine learning concepts and the integration of ML pipelines into data workflows. Hands-on experience working in an Agile setting. Self-driven, naturally curious, and able to adapt to a fast-paced work environment. Can articulate, create, and maintain technical and non-technical documentation. Public cloud certifications are desired.

Posted 1 month ago

Apply

8.0 - 12.0 years

25 - 35 Lacs

Hyderabad

Hybrid

Job Summary: We are seeking a skilled and experienced Data Engineer with expertise in Oracle Data Integrator (ODI) and Oracle Business Intelligence (OBI) to join our dynamic team. The ideal candidate will play a crucial role in designing, developing, and maintaining data integration and business intelligence solutions. Responsibilities: Collaborate with cross-functional teams to understand data requirements and implement effective data integration solutions using Oracle Data Integrator (ODI). Develop and optimize ETL processes to extract, transform, and load data from various sources into the data warehouse. Design and implement data models to support business intelligence and reporting needs using Oracle Business Intelligence (OBI). Ensure the reliability, scalability, and performance of data engineering solutions in a production environment. Troubleshoot and resolve data-related issues, ensuring data quality and integrity. Stay updated on industry trends and best practices in Oracle ODI/OBI and contribute to continuous improvement initiatives. Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field, with 5-8 years of experience in Data Engineering. Proven experience as a Data Engineer with a focus on Oracle ODI/OBI. Strong proficiency in Oracle Data Integrator (ODI) for ETL processes. Hands-on experience with Oracle Business Intelligence (OBI) for designing and developing BI solutions. Solid understanding of data modeling concepts and techniques. Excellent SQL skills for data manipulation and analysis. Familiarity with data warehousing principles and best practices. Strong problem-solving and analytical skills. Effective communication and collaboration skills. Preferred Qualifications: Oracle ODI/OBI certifications. Experience with performance tuning and optimization of data integration processes. Knowledge of other data integration and BI tools.

Posted 1 month ago

Apply

8.0 - 10.0 years

25 - 27 Lacs

Bengaluru

Work from Office

Key Skills: Data Engineering, Data Integration, Informatica, PySpark, Informatica MDM. Roles and Responsibilities: Utilize Informatica IDMC tools, including the Data Profiling, Data Quality, and Data Integration modules, to support data initiatives. Implement and maintain robust Data Quality frameworks, ensuring data accuracy, consistency, and reliability. Work on ETL (Extract, Transform, Load) processes to support business intelligence and analytics needs. Participate in agile product teams to design, develop, and deliver data solutions aligned with business requirements. Perform data validation, cleansing, and profiling to meet data governance standards. Collaborate with cross-functional teams to understand business needs and translate them into technical solutions. Assist in the design and execution of QA processes to maintain high standards in data pipelines. Support integration with cloud platforms such as MS Azure and utilize DevOps tools for deployment. Contribute to innovation and improvement initiatives, including the development of new features with modern data tools like Databricks. Maintain clear documentation and ensure alignment with data governance and data management principles. Optionally, develop visualizations using Power BI for data storytelling and reporting. Experience Requirement: 8-10 years of experience working on Data Quality projects. At least 3 years of hands-on experience with Informatica Data Quality modules. Strong understanding of data profiling, validation, cleansing, and overall data quality concepts. Experience with ETL processes and QA/testing frameworks. Basic knowledge of the Microsoft Azure platform and services. Exposure to Data Governance and Management practices. Experience in agile environments using tools like DevOps. Strong analytical, problem-solving, and troubleshooting skills. Proficient in English with excellent communication and collaboration abilities. Nice to have: experience developing with Power BI. Education: Any graduation.

Posted 1 month ago

Apply

6.0 - 8.0 years

25 - 30 Lacs

Bengaluru

Work from Office

6+ years of experience in information technology, with a minimum of 3-5 years of experience managing and administering Hadoop/Cloudera environments. Cloudera CDP (Cloudera Data Platform), Cloudera Manager, and related tools. Hadoop ecosystem components (HDFS, YARN, Hive, HBase, Spark, Impala, etc.). Linux system administration, with experience in scripting languages (Python, Bash, etc.) and configuration management tools (Ansible, Puppet, etc.). Security and platform tools (Kerberos, Ranger, Sentry), plus Docker, Kubernetes, and Jenkins. Cloudera Certified Administrator for Apache Hadoop (CCAH) or similar certification. Cluster management, optimization, best-practice implementation, collaboration, and support.

Posted 1 month ago

Apply

5.0 - 10.0 years

30 - 35 Lacs

Chennai, Bengaluru

Work from Office

Data Engineer: Experienced Kafka Streams + KSQL developer with in-depth knowledge of specific client systems: the TAHI Contract and Application and ISP Contract and Application modules. Performs data analysis and writes code to implement functional requirements per the LLD and client processes. Minimum skill levels in this specific area: current roles require 5+ years of experience plus insurance domain experience. These are technical roles, and the prime requirement is for Kafka Streams / Java / ksqlDB / Kafka.
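The posting's stack is Kafka Streams in Java with ksqlDB; as a language-neutral taste of consuming the same kind of topics, here is a short sketch with the confluent-kafka Python client. The broker address, group id, and topic name are placeholders, not details from the listing.

    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",  # placeholder broker
        "group.id": "contract-processor",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["contract-events"])      # placeholder topic

    try:
        while True:
            msg = consumer.poll(1.0)             # wait up to 1s for a record
            if msg is None:
                continue
            if msg.error():
                print(msg.error())
                continue
            # A Kafka Streams/ksqlDB job would transform and re-emit here.
            print(msg.key(), msg.value())
    finally:
        consumer.close()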

Posted 1 month ago

Apply
