
1766 Data Engineering Jobs - Page 43

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

15.0 - 20.0 years

13 - 18 Lacs

Bengaluru

Work from Office


Project Role: Data Architect
Project Role Description: Define the data requirements and structure for the application. Model and design the application data structure, storage, and integration.
Must-have skills: Data Engineering
Good-to-have skills: NA
Minimum experience required: 7.5 years
Educational qualification: 15 years of full-time education

Summary: As a Data Architect, you will define the data requirements and structure for the application. Your typical day will involve modeling and designing the application data structure, storage, and integration, ensuring that the data architecture aligns with the overall business objectives and technical specifications. You will collaborate with various teams to ensure that the data architecture is robust, scalable, and efficient, while also addressing any challenges that arise during development. Your role will be pivotal in guiding the team towards best practices in data management and architecture, ultimately contributing to the success of the application and the organization as a whole.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Develop and maintain documentation related to data architecture and design.

Professional & Technical Skills:
- Must-have: Proficiency in Data Engineering.
- Strong understanding of data modeling techniques and best practices.
- Experience with data integration tools and ETL processes.
- Familiarity with cloud data platforms and services.
- Ability to design and implement data storage solutions.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Data Engineering.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 3 weeks ago

Apply

15.0 - 20.0 years

4 - 8 Lacs

Pune

Work from Office


Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Snowflake Data Warehouse, Core Banking, PySpark
Good-to-have skills: AWS BigData
Minimum experience required: 5 years
Educational qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data flow and processing stages.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-have: Proficiency in Snowflake Data Warehouse, Core Banking, PySpark.
- Good to have: Experience with AWS BigData.
- Strong understanding of data modeling and database design principles.
- Experience with data integration tools and ETL processes.
- Familiarity with data governance and data quality frameworks.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Snowflake Data Warehouse.
- This position is based in Pune.
- 15 years of full-time education is required.
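
As an aside for candidates, a minimal sketch of the Snowflake-plus-PySpark pattern this listing asks for might look like the following: read raw data, apply a quality rule, and write to a Snowflake table via the Snowflake Spark connector. All paths, table names, and connection values are illustrative assumptions, not details from the posting, and the connector JAR is assumed to be on the cluster classpath.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("txn_etl").getOrCreate()

    # Hypothetical source path; real pipelines would take this from config.
    raw = spark.read.parquet("s3://example-bucket/raw/transactions/")

    clean = (
        raw.dropDuplicates(["txn_id"])
           .filter(F.col("amount") > 0)                  # basic quality rule
           .withColumn("load_ts", F.current_timestamp())
    )

    sf_opts = {  # placeholder Snowflake connection options
        "sfURL": "example_account.snowflakecomputing.com",
        "sfUser": "ETL_USER",
        "sfPassword": "***",
        "sfDatabase": "ANALYTICS",
        "sfSchema": "STAGING",
        "sfWarehouse": "LOAD_WH",
    }

    (clean.write.format("net.snowflake.spark.snowflake")  # "snowflake" alias on Databricks
          .options(**sf_opts)
          .option("dbtable", "STG_TRANSACTIONS")
          .mode("append")
          .save())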

Posted 3 weeks ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Bengaluru

Work from Office


Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Informatica Data Quality
Good-to-have skills: NA
Minimum experience required: 3 years
Educational qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with team members to enhance data workflows and contribute to the overall efficiency of data management practices within the organization.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the design and implementation of data architecture to support data initiatives.
- Monitor and optimize data pipelines for performance and reliability.

Professional & Technical Skills:
- Must-have: Proficiency in Informatica Data Quality.
- Strong understanding of data integration techniques and ETL processes.
- Experience with data quality assessment and improvement methodologies.
- Familiarity with data governance principles and best practices.
- Ability to work with large datasets and perform data cleansing.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Informatica Data Quality.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
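
Informatica Data Quality rules are configured in Informatica's own tooling rather than in code, but the kinds of checks this role describes (completeness, uniqueness, format) can be illustrated with a small language-neutral sketch in pandas; the dataset and rules below are entirely hypothetical.

    import pandas as pd

    # Hypothetical customer extract; in practice this comes from a source system.
    df = pd.DataFrame({
        "customer_id": [1, 2, 2, 4],
        "email": ["a@x.com", None, "b@x.com", "not-an-email"],
    })

    checks = {
        "customer_id is unique": df["customer_id"].is_unique,
        "email is populated": df["email"].notna().all(),
        "email matches pattern": df["email"].str.contains(r"^[^@\s]+@[^@\s]+$", na=False).all(),
    }

    for rule, passed in checks.items():
        print(f"{rule}: {'PASS' if passed else 'FAIL'}")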

Posted 3 weeks ago

Apply

15.0 - 20.0 years

4 - 8 Lacs

Gurugram

Work from Office


Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Snowflake Data Warehouse
Good-to-have skills: NA
Minimum experience required: 5 years
Educational qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs, while also troubleshooting and optimizing existing data workflows to enhance performance and reliability.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Develop and implement best practices for data management and governance.
- Monitor and optimize data pipelines to ensure efficiency and reliability.

Professional & Technical Skills:
- Must-have: Proficiency in Snowflake Data Warehouse.
- Strong understanding of data modeling and database design principles.
- Experience with ETL tools and data integration techniques.
- Familiarity with cloud data warehousing solutions and architecture.
- Knowledge of SQL and data querying languages.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Snowflake Data Warehouse.
- This position is based at our Gurugram office.
- 15 years of full-time education is required.

Posted 3 weeks ago

Apply

15.0 - 20.0 years

4 - 8 Lacs

Pune

Work from Office


Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Snowflake Data Warehouse, Core Banking, PySpark
Good-to-have skills: AWS BigData
Minimum experience required: 7.5 years
Educational qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs. Your role will also include troubleshooting data issues and optimizing data workflows to enhance performance and reliability.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve data processes to ensure efficiency and effectiveness.

Professional & Technical Skills:
- Must-have: Proficiency in Snowflake Data Warehouse, Core Banking, PySpark.
- Good to have: Experience with AWS BigData.
- Strong understanding of data modeling and database design principles.
- Experience with data integration tools and ETL processes.
- Familiarity with cloud-based data solutions and architectures.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Snowflake Data Warehouse.
- This position is based in Pune.
- 15 years of full-time education is required.

Posted 3 weeks ago

Apply

15.0 - 20.0 years

4 - 8 Lacs

Pune

Work from Office


Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Snowflake Data Warehouse, Core Banking, PySpark
Good-to-have skills: AWS BigData
Minimum experience required: 7.5 years
Educational qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and deliver effective solutions that meet business needs, while also troubleshooting any issues that arise in the data processing workflow. Your role will be pivotal in enhancing the efficiency and reliability of data operations within the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processing workflows to optimize performance.

Professional & Technical Skills:
- Must-have: Proficiency in Snowflake Data Warehouse, Core Banking, PySpark.
- Good to have: Experience with AWS BigData.
- Strong understanding of data modeling and database design principles.
- Experience with data integration tools and ETL processes.
- Familiarity with cloud-based data solutions and architectures.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Snowflake Data Warehouse.
- This position is based in Pune.
- 15 years of full-time education is required.

Posted 3 weeks ago

Apply

3.0 - 8.0 years

0 - 3 Lacs

Hyderabad

Work from Office


Job Overview: We are seeking a skilled and proactive Machine Learning Engineer to join our smart manufacturing initiative. You will play a pivotal role in building data pipelines, developing ML models for defect prediction, and implementing closed-loop control systems to improve production quality.

Responsibilities:

Data Engineering & Pipeline Support:
- Validate and ensure correct data flow from InfluxDB/CDL to the Smartbox/Databricks platforms.
- Collaborate with data scientists to support model development through accurate data provisioning.
- Provide ongoing support in resolving data pipeline issues and performing ad-hoc data extractions.

ML Model Development:
- Develop three distinct ML models to predict different types of defects using historical production data.
- Predict short-term outcomes (the next 5 minutes) using techniques like artificial sampling and dimensionality reduction.
- Ensure high model performance: Accuracy ≥ 95%, Precision & Recall ≥ 80%.
- Extract and present feature importance to support model interpretability.

Closed-loop Control Architecture:
- Implement end-to-end ML-driven automation to proactively correct machine settings based on model predictions.
- Key architecture components include: real-time data ingestion from PLCs via InfluxDB/CDL; model deployment and inference on Smartbox; an output pipeline to share actionable recommendations via PLC tags; and an automated retraining pipeline in the cloud triggered by model drift or recommendation deviations.

Qualifications:
- Proven experience with real-time data streaming from industrial systems (PLCs, InfluxDB/CDL).
- Hands-on experience in building and deploying ML models in production.
- Strong understanding of data preprocessing, dimensionality reduction, and synthetic data techniques.
- Familiarity with cloud-based retraining workflows and model performance monitoring.
- Experience in smart manufacturing or predictive maintenance is a plus.
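
As a rough illustration of the modeling task described above (short-horizon defect classification with dimensionality reduction and imbalance handling), here is a minimal scikit-learn sketch on synthetic data; the feature count, class-weighting choice, and data-generating rule are assumptions for illustration only.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score, precision_score, recall_score

    # Synthetic stand-in for windowed sensor features labelled with a defect flag.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 40))   # e.g. 40 rolling-window statistics per 5-min horizon
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 2.2).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    model = make_pipeline(
        StandardScaler(),
        PCA(n_components=10),                             # dimensionality reduction
        RandomForestClassifier(class_weight="balanced"),  # counters defect-class imbalance
    )
    model.fit(X_tr, y_tr)

    pred = model.predict(X_te)
    print("accuracy:", accuracy_score(y_te, pred))
    print("precision:", precision_score(y_te, pred))
    print("recall:", recall_score(y_te, pred))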

Posted 3 weeks ago

Apply

5.0 - 10.0 years

4 - 8 Lacs

Navi Mumbai

Work from Office


Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Alteryx
Good-to-have skills: NA
Minimum experience required: 5 years
Educational qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. You will create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems, and be involved in the end-to-end data management process.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead data solution architecture design.
- Implement data governance policies and procedures.
- Optimize data storage and retrieval processes.

Professional & Technical Skills:
- Must-have: Proficiency in Alteryx.
- Strong understanding of data modeling and database design.
- Experience with data integration tools and techniques.
- Knowledge of data warehousing concepts.
- Hands-on experience with SQL and scripting languages.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Alteryx.
- This position is based at our Mumbai office.
- 15 years of full-time education is required.

Posted 3 weeks ago

Apply

2.0 - 6.0 years

7 - 11 Lacs

Bengaluru

Work from Office


As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact!

IBM's Cloud Services are focused on supporting clients on their cloud journey across any platform to achieve their business goals. This encompasses Cloud Advisory, Architecture, Cloud Native Development, Application Portfolio Migration, Modernization, and Rationalization, as well as Cloud Operations. Cloud Services supports all public/private/hybrid cloud deployments: IBM Bluemix/IBM Cloud, Red Hat, AWS, Azure, Google, and client private environments. Cloud Services has the best cloud developer, architect, complex SI, SysOps, and delivery talent, delivered through our GEO CIC Factory model.

As a member of our Cloud Practice, you will be responsible for defining and implementing application cloud migration, modernisation, and rationalisation solutions for clients across all sectors. You will support mobilisation and help to lead the quality of our programmes and services, liaise with clients, and provide consulting services, including:
- Creating cloud migration strategies: defining the delivery architecture, creating migration plans, designing orchestration plans, and more.
- Assisting in the creation and execution of migration runbooks.
- Evaluating source (physical, virtual, and cloud) and target workloads.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Cloud data engineer with GCP PDE certification and working experience with GCP.
- Building end-to-end data pipelines in GCP using Pub/Sub, BigQuery, Dataflow, Cloud Workflows/Cloud Scheduler, Cloud Run, Dataproc, and Cloud Functions.
- Experience in logging and monitoring of GCP services, and experience in Terraform and infrastructure automation.
- Expertise in the Python coding language.
- Develops, supports, and maintains data engineering solutions on the Google Cloud ecosystem.

Preferred technical and professional experience:
- Stay updated with the latest trends and advancements in cloud technologies, frameworks, and tools.
- Conduct code reviews and provide constructive feedback to maintain code quality and ensure adherence to best practices.
- Troubleshoot and debug issues, and deploy applications to the cloud platform.
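
For the GCP pipeline work listed above, a minimal Python sketch of one common pattern (consume events from Pub/Sub and stream them into BigQuery) might look like this; the project, subscription, and table names are placeholders, and a production pipeline would more likely run on Dataflow.

    import json
    from concurrent.futures import TimeoutError
    from google.cloud import bigquery, pubsub_v1

    PROJECT = "example-project"                   # placeholder GCP project
    SUBSCRIPTION = "events-sub"                   # placeholder Pub/Sub subscription
    TABLE = "example-project.analytics.events"    # placeholder BigQuery table

    bq = bigquery.Client(project=PROJECT)
    subscriber = pubsub_v1.SubscriberClient()
    sub_path = subscriber.subscription_path(PROJECT, SUBSCRIPTION)

    def handle(message):
        row = json.loads(message.data)                 # one JSON event per message
        errors = bq.insert_rows_json(TABLE, [row])     # streaming insert into BigQuery
        if errors:
            message.nack()                             # let Pub/Sub redeliver on failure
        else:
            message.ack()

    streaming_pull = subscriber.subscribe(sub_path, callback=handle)
    try:
        streaming_pull.result(timeout=60)   # serve callbacks for 60 seconds in this sketch
    except TimeoutError:
        streaming_pull.cancel()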

Posted 3 weeks ago

Apply

2.0 - 5.0 years

14 - 17 Lacs

Gurugram

Work from Office


As a BigData Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Developing, maintaining, evaluating, and testing big data solutions.
- Data engineering activities such as creating pipelines/workflows from source to target and implementing solutions that tackle the client's needs.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Big Data development: Hadoop, Hive, Spark, PySpark, strong SQL.
- Ability to incorporate a variety of statistical and machine learning techniques.
- Basic understanding of cloud platforms (AWS, Azure, etc.).
- Ability to use programming languages like Java, Python, Scala, etc., to build pipelines that extract and transform data from a repository to a data consumer.
- Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed.
- Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop, and Java.

Preferred technical and professional experience:
- Basic understanding of, or experience with, predictive/prescriptive modeling.
- You thrive on teamwork and have excellent verbal and written communication skills.
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
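
One hedged sketch of the batch-and-real-time processing this role mentions: a PySpark Structured Streaming job that reads a Kafka topic and maintains running counts. The broker, topic, and aggregation are illustrative, and the job assumes the Spark-Kafka integration package is available on the cluster.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("clickstream").getOrCreate()

    # Placeholder broker and topic names.
    events = (spark.readStream.format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "clicks")
              .load())

    counts = (events
              .select(F.col("value").cast("string").alias("page"))
              .groupBy("page")
              .count())                      # running count per page

    query = (counts.writeStream
             .outputMode("complete")
             .format("console")              # console sink for the sketch only
             .start())
    query.awaitTermination()                 # streams indefinitely until stopped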

Posted 3 weeks ago

Apply

2.0 - 5.0 years

14 - 17 Lacs

Kochi

Work from Office


As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution.

Your primary responsibilities include:
- Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
- Strive for continuous improvement by testing the built solution and working under an agile framework.
- Discover and implement the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
- Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
- Data engineering skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
- Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
- Data processing frameworks: knowledge of data processing libraries such as Pandas and NumPy.
- SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
- Cloud platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred technical and professional experience:
- Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
- Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
- Good to have: experience with detection and prevention tools for company products, platforms, and customer-facing systems.

Posted 3 weeks ago

Apply

3.0 - 7.0 years

11 - 15 Lacs

Bengaluru

Work from Office


Your Role:
- Experience in data engineering and end-to-end implementation of CDP projects.
- Proficient in SQL, CDP (Treasure Data), Python/Digdag, Presto/SQL, and data engineering.
- Hands-on experience with Treasure Data CDP implementation and management.
- Excellent SQL skills, including advanced query writing and optimization.
- Oversee the end-to-end maintenance and operation of the Treasure Data CDP.
- Familiarity with data integration, API operations, and audience segmentation.

Your Profile:
- Experience in unifying data across multiple brands and regions, ensuring consistency and accuracy.
- Ability to create and manage data workflows in Treasure Data.
- Collaborate with cross-functional teams to ensure successful data integration and usage.
- Troubleshoot and optimize data pipelines and processes for scalability and performance.
- Stay updated on the latest features and best practices in Treasure Data and related technologies.

Posted 3 weeks ago

Apply

3.0 - 5.0 years

5 - 9 Lacs

Bengaluru

Work from Office


Role Purpose: The purpose of this role is to design, test, and maintain software programs for operating systems or applications that are to be deployed at a client end, and to ensure they meet 100% quality assurance parameters.

Do:

1. Be instrumental in understanding the requirements and design of the product/software:
- Develop software solutions by studying information needs, systems flow, data usage, and work processes
- Investigate problem areas throughout the software development life cycle
- Facilitate root cause analysis of system issues and problem statements
- Identify ideas to improve system performance and impact availability
- Analyze client requirements and convert requirements into feasible designs
- Collaborate with functional teams or systems analysts who carry out the detailed investigation into software requirements
- Confer with project managers to obtain information on software capabilities

2. Perform coding and ensure optimal software/module development:
- Determine operational feasibility by evaluating analysis, problem definition, requirements, software development, and proposed software
- Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases and executing these cases
- Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces
- Analyze information to recommend and plan the installation of new systems or modifications to an existing system
- Ensure that code is error-free, with no bugs or test failures
- Prepare reports on programming project specifications, activities, and status
- Ensure all the codes are raised as per the norms defined for the project/program/account, with clear descriptions and replication patterns
- Compile timely, comprehensive, and accurate documentation and reports as requested
- Coordinate with the team on daily project status and progress, and document it
- Provide feedback on usability and serviceability, trace results to quality risks, and report them to concerned stakeholders

3. Provide status reporting and maintain customer focus on an ongoing basis with respect to the project and its execution:
- Capture all requirements and clarifications from the client for better-quality work
- Take feedback on a regular basis to ensure smooth and on-time delivery
- Participate in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members
- Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements
- Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code
- Document all necessary details and reports in a formal way for proper understanding of the software, from client proposal to implementation
- Ensure good quality of interaction with the customer with respect to e-mail content, fault report tracking, voice calls, business etiquette, etc.
- Respond to customer requests in a timely manner, with no instances of complaints either internally or externally

Deliver:

No. | Performance Parameter | Measure
1 | Continuous integration, deployment & monitoring of software | 100% error-free onboarding & implementation, throughput %, adherence to the schedule/release plan
2 | Quality & CSAT | On-time delivery, manage software, troubleshoot queries, customer experience, completion of assigned certifications for skill upgradation
3 | MIS & reporting | 100% on-time MIS & report generation

Mandatory Skills: Databricks - Data Engineering.
Experience: 3-5 years.

Posted 3 weeks ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Hyderabad

Work from Office


Role Description: We are looking for a highly motivated, expert Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role calls for deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets
- Understand the biotech/pharma or related domains and build highly efficient data pipelines to migrate and deploy complex data across systems
- Design and implement solutions to enable unified data access, governance, and interoperability across hybrid cloud environments
- Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms
- Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring
- Expertise in data quality, data validation, and verification frameworks
- Innovate, explore, and implement new tools and technologies to enhance efficient data processing
- Proactively identify and implement opportunities to automate tasks and develop reusable frameworks
- Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value
- Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories
- Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle
- Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions

Must-Have Skills:
- Hands-on experience in data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies
- Proficiency in workflow orchestration and performance tuning for big data processing
- Strong understanding of AWS services
- Ability to quickly learn, adapt, and apply new technologies
- Strong problem-solving and analytical skills
- Excellent communication and teamwork skills
- Experience with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices

Good-to-Have Skills:
- Data engineering experience in the biotechnology or pharma industry
- Experience in writing APIs to make data available to consumers
- Experience with SQL/NoSQL databases and vector databases for large language models
- Experience with data modeling and performance tuning for both OLAP and OLTP databases
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps

Education and Professional Certifications:
- Master's degree and 3 to 4+ years of Computer Science, IT, or related field experience, OR Bachelor's degree and 5 to 8+ years of Computer Science, IT, or related field experience
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile SAFe certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Ability to learn quickly, and to be organized and detail-oriented
- Strong presentation and public speaking skills
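
As a small illustration of the ingestion responsibility above (structured data from relational databases into Databricks), here is a hedged PySpark sketch that reads a PostgreSQL table over JDBC and lands it as a Delta table; the connection details, table names, and target schema are placeholders, and the PostgreSQL JDBC driver is assumed to be installed on the cluster.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("ingest_orders").getOrCreate()

    # Placeholder JDBC connection details.
    orders = (spark.read.format("jdbc")
              .option("url", "jdbc:postgresql://db-host:5432/sales")
              .option("dbtable", "public.orders")
              .option("user", "ingest_user")
              .option("password", "***")
              .load())

    # Land the extract as a Delta table for downstream staging models
    # (assumes a "bronze" schema already exists).
    (orders.write.format("delta")
           .mode("overwrite")
           .saveAsTable("bronze.orders"))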

Posted 3 weeks ago

Apply

6.0 - 10.0 years

3 - 8 Lacs

Noida

Work from Office


BI Solutioning & Data Engineering:
• Design, build, and manage end-to-end Business Intelligence solutions, integrating structured and unstructured data from internal and external sources.
• Architect and maintain scalable data pipelines using cloud-native services (e.g., AWS, Azure, GCP).
• Implement ETL/ELT processes to ensure data quality, transformation, and availability for analytics and reporting.

Market Intelligence & Analytics Enablement:
• Support the Market Intelligence team by building dashboards, visualizations, and data models that reflect competitive, market, and customer insights.
• Work with research analysts to convert qualitative insights into measurable datasets.
• Drive the automation of insight delivery, enabling real-time or near-real-time updates.

Visualization & Reporting:
• Design interactive dashboards and executive-level visual reports using tools such as Power BI or Tableau.
• Maintain data-storytelling standards to deliver clear, compelling narratives aligned with strategic objectives.

Stakeholder Collaboration:
• Act as a key liaison between business users, strategy teams, research analysts, and IT/cloud engineering.
• Translate analytical and research needs into scalable, sustainable BI solutions.
• Educate internal stakeholders on the capabilities of BI platforms and insight-delivery pipelines.

Preferred: Cloud Infrastructure & Data Integration:
• Collaborate with cloud engineering teams to deploy BI tools and data lakes in a cloud environment.
• Ensure data warehousing architecture is aligned with market research and analytics needs.
• Optimize data models and storage for scalability, performance, and security.

Total Experience Expected: 06-09 years

Posted 3 weeks ago

Apply

5.0 - 7.0 years

15 - 25 Lacs

Chennai

Work from Office


Job Summary: We are seeking a skilled Big Data Tester & Developer to design, develop, and validate data pipelines and applications on large-scale data platforms. You will work on data ingestion, transformation, and testing workflows using tools from the Hadoop ecosystem and modern data engineering stacks.

Experience: 6-12 years

Key Responsibilities:
• Develop and test Big Data pipelines using Spark, Hive, Hadoop, and Kafka
• Write and optimize PySpark/Scala code for data processing
• Design test cases for data validation, quality, and integrity
• Automate testing using Python/Java and tools like Apache NiFi, Airflow, or dbt
• Collaborate with data engineers, analysts, and QA teams

Key Skills:
• Strong hands-on experience with Big Data tools: Spark, Hive, HDFS, Kafka
• Proficient in PySpark, Scala, or Java
• Experience in data testing, ETL validation, and data quality checks
• Familiarity with SQL, NoSQL, and data lakes
• Knowledge of CI/CD, Git, and automation frameworks

We are also looking for a skilled PostgreSQL Developer/DBA to design, implement, optimize, and maintain our PostgreSQL database systems. You will work closely with developers and data teams to ensure high performance, scalability, and data integrity.

Experience: 6-12 years

Key Responsibilities:
• Develop complex SQL queries, stored procedures, and functions
• Optimize query performance and database indexing
• Manage backups, replication, and security
• Monitor and tune database performance
• Support schema design and data migrations

Key Skills:
• Strong hands-on experience with PostgreSQL
• Proficient in SQL and PL/pgSQL scripting
• Experience in performance tuning, query optimization, and indexing
• Familiarity with logical replication, partitioning, and extensions
• Exposure to tools like pgAdmin, psql, or PgBouncer
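
For the PostgreSQL half of this posting, the index-then-verify loop behind "optimize query performance and database indexing" can be sketched in a few lines of Python with psycopg2; the DSN, table, and index below are hypothetical.

    import psycopg2

    # Placeholder connection string for a hypothetical sales database.
    conn = psycopg2.connect("dbname=sales user=dba host=localhost")
    conn.autocommit = True

    with conn.cursor() as cur:
        # Add a covering index for a frequent lookup pattern.
        cur.execute(
            "CREATE INDEX IF NOT EXISTS idx_orders_customer "
            "ON orders (customer_id, created_at)"
        )

        # Confirm the planner now uses an index scan instead of a sequential scan.
        cur.execute("EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = %s", (42,))
        for (line,) in cur.fetchall():
            print(line)

    conn.close()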

Posted 3 weeks ago

Apply

8.0 - 10.0 years

27 - 42 Lacs

Chennai

Work from Office


Job Summary: We are seeking a skilled and motivated Backend/Data Engineer with hands-on experience in MongoDB and Neo4j to design and implement data-driven applications. The ideal candidate will be responsible for building robust database systems, integrating complex graph- and document-based data models, and collaborating with cross-functional teams.

Experience: 6-12 years

Key Responsibilities:
• Design, implement, and optimize document-based databases using MongoDB.
• Model and manage connected data using Neo4j (Cypher query language).
• Develop RESTful APIs and data services to serve and manipulate data stored in MongoDB and Neo4j.
• Implement data pipelines for data ingestion, transformation, and storage.
• Optimize database performance and ensure data integrity and security.
• Collaborate with frontend developers, data scientists, and product managers.
• Maintain documentation and support for database solutions.

Required Skills:
• Strong proficiency in MongoDB: schema design, indexing, aggregation framework.
• Solid experience with Neo4j: graph modeling, Cypher queries, performance tuning.
• Programming proficiency in Python, Node.js, or Java.
• Familiarity with REST APIs, GraphQL, or gRPC.
• Experience with data modeling (both document and graph models).
• Knowledge of data security, backup, and recovery techniques.

Preferred Skills:
• Experience with Mongoose, Spring Data MongoDB, or Neo4j-OGM.
• Familiarity with data visualization tools (e.g., Neo4j Bloom).
• Experience with Docker, Kubernetes, or other DevOps tools.
• Exposure to other databases (e.g., PostgreSQL, Redis).
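
A brief sketch of the dual MongoDB/Neo4j data model this role centres on: a document store for entity records and a graph for relationships, both accessed from Python. The connection strings, the "FOLLOWS" relationship, and all identifiers are illustrative assumptions.

    from pymongo import MongoClient
    from neo4j import GraphDatabase

    # Placeholder connection strings for local instances.
    mongo = MongoClient("mongodb://localhost:27017")
    users = mongo["app"]["users"]
    users.insert_one({"_id": "u1", "name": "Asha", "plan": "pro"})

    # Document side: aggregate plan counts with MongoDB's aggregation framework.
    for row in users.aggregate([{"$group": {"_id": "$plan", "n": {"$sum": 1}}}]):
        print(row)

    # Graph side: model a follow relationship and query it in Cypher.
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "***"))
    with driver.session() as session:
        session.run("MERGE (a:User {id: $a}) MERGE (b:User {id: $b}) "
                    "MERGE (a)-[:FOLLOWS]->(b)", a="u1", b="u2")
        result = session.run("MATCH (a:User {id: $a})-[:FOLLOWS]->(f) RETURN f.id", a="u1")
        print([record["f.id"] for record in result])
    driver.close()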

Posted 3 weeks ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Hyderabad

Work from Office


Role Description: We are looking for a highly motivated, expert Data Engineer who can own the design and development of complex data pipelines, solutions, and frameworks. The ideal candidate will be responsible for designing, developing, and maintaining data pipelines, data integration frameworks, and metadata-driven architectures that enable seamless data access and analytics. This role calls for deep expertise in big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Design, develop, and maintain complex ETL/ELT data pipelines in Databricks using PySpark, Scala, and SQL to process large-scale datasets
- Understand the biotech/pharma or related domains and build highly efficient data pipelines to migrate and deploy complex data across systems
- Design and implement solutions to enable unified data access, governance, and interoperability across hybrid cloud environments
- Ingest and transform structured and unstructured data from databases (PostgreSQL, MySQL, SQL Server, MongoDB, etc.), APIs, logs, event streams, images, PDFs, and third-party platforms
- Ensure data integrity, accuracy, and consistency through rigorous quality checks and monitoring
- Expertise in data quality, data validation, and verification frameworks
- Innovate, explore, and implement new tools and technologies to enhance efficient data processing
- Proactively identify and implement opportunities to automate tasks and develop reusable frameworks
- Work in an Agile and Scaled Agile (SAFe) environment, collaborating with cross-functional teams, product owners, and Scrum Masters to deliver incremental value
- Use JIRA, Confluence, and Agile DevOps tools to manage sprints, backlogs, and user stories
- Support continuous improvement, test automation, and DevOps practices in the data engineering lifecycle
- Collaborate and communicate effectively with product teams and cross-functional teams to understand business requirements and translate them into technical solutions

Must-Have Skills:
- Hands-on experience in data engineering technologies such as Databricks, PySpark, Spark SQL, Apache Spark, AWS, Python, SQL, and Scaled Agile methodologies
- Proficiency in workflow orchestration and performance tuning for big data processing
- Strong understanding of AWS services
- Ability to quickly learn, adapt, and apply new technologies
- Strong problem-solving and analytical skills
- Excellent communication and teamwork skills
- Experience with the Scaled Agile Framework (SAFe), Agile delivery practices, and DevOps practices

Good-to-Have Skills:
- Data engineering experience in the biotechnology or pharma industry
- Experience in writing APIs to make data available to consumers
- Experience with SQL/NoSQL databases and vector databases for large language models
- Experience with data modeling and performance tuning for both OLAP and OLTP databases
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps

Education and Professional Certifications:
- Minimum of 5 to 8 years of Computer Science, IT, or related field experience
- AWS Certified Data Engineer preferred
- Databricks certification preferred
- Scaled Agile SAFe certification preferred

Soft Skills:
- Excellent analytical and troubleshooting skills
- Strong verbal and written communication skills
- Ability to work effectively with global, virtual teams
- High degree of initiative and self-motivation
- Ability to manage multiple priorities successfully
- Team-oriented, with a focus on achieving team goals
- Ability to learn quickly, and to be organized and detail-oriented
- Strong presentation and public speaking skills

Posted 3 weeks ago

Apply

1.0 - 3.0 years

3 - 6 Lacs

Hyderabad, Lucknow

Hybrid


We are seeking a highly capable Data Scientist cum Machine Learning Engineer to support the design, development, and deployment of advanced AI-driven solutions for real-time defect prediction and closed-loop control in a manufacturing environment. The role involves building robust data pipelines, developing high-accuracy ML models, and integrating those models into real-time control systems for automated corrective actions.

Role & Responsibilities:
- Validate and maintain the correct flow of data from InfluxDB/CDL to Smartbox/Databricks, ensuring data consistency and readiness for modeling.
- Develop and support data pipelines for both batch and real-time processing, including ad-hoc data extraction and corrections.
- Build and deploy three separate machine learning models to predict three distinct defect types using historical time-series data, aiming to forecast outcomes for the next 5 minutes.
- Ensure model performance meets predefined benchmarks: Accuracy ≥ 95%, Precision & Recall ≥ 80%.
- Generate interpretable insights through feature importance analysis.
- Apply advanced techniques such as artificial sampling, feature engineering, and dimensionality reduction to enhance model performance and reliability.
- Deploy models on Smartbox edge devices and integrate outputs with manufacturing PLCs to suggest setpoint adjustments aimed at preventing defects.
- Design and implement an output pipeline that shares model recommendations with the correct PLC tags in real time.
- Develop a robust automated retraining pipeline in the cloud to handle model drift by comparing predicted recommendations against actual production centerlines.
- Collaborate closely with cross-functional teams, including data scientists, automation engineers, and operations personnel.

Preferred Candidate Profile:
- Bachelor's degree in Computer Science, Data Science, or a related discipline.
- Strong expertise in machine learning, particularly time-series forecasting and classification models.
- Proficiency in Python and relevant ML libraries (e.g., scikit-learn, pandas, NumPy, PyTorch, or TensorFlow).
- Solid experience in data engineering, including ETL pipeline development and data validation.
- Familiarity with platforms and tools like Databricks, InfluxDB, and Smartbox.
- Understanding of PLC communication protocols and real-time industrial control systems.
- Experience with model deployment on edge devices and cloud platforms (AWS, Azure, or GCP).
- Working knowledge of MLOps, model monitoring, and automated retraining mechanisms.
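
To make the "artificial sampling" and "dimensionality reduction" techniques above concrete, here is a small hedged sketch: oversampling a rare defect class with scikit-learn's resample helper and then reducing features with PCA. The data is synthetic and the 3% defect rate is an assumption for illustration.

    import numpy as np
    from sklearn.utils import resample
    from sklearn.decomposition import PCA

    # Synthetic stand-in: 1,000 windows of sensor features, ~3% labelled as defects.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 30))
    y = (rng.random(1000) < 0.03).astype(int)

    # Artificial sampling: oversample the rare defect class to balance the training set.
    X_min, X_maj = X[y == 1], X[y == 0]
    X_min_up = resample(X_min, replace=True, n_samples=len(X_maj), random_state=1)
    X_bal = np.vstack([X_maj, X_min_up])
    y_bal = np.array([0] * len(X_maj) + [1] * len(X_min_up))

    # Dimensionality reduction: keep the components explaining 95% of the variance.
    X_red = PCA(n_components=0.95).fit_transform(X_bal)
    print(X_bal.shape, "->", X_red.shape)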

Posted 3 weeks ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Pune

Remote


Responsibilities:
- Data pipeline design and implementation: Develop and implement data pipelines that ingest, transform, and load data into a format suitable for machine learning models.
- Data quality and management: Ensure data quality through data cleaning, validation, and transformation processes.
- Model development and deployment: Collaborate with data scientists to develop and deploy machine learning models, including building and testing them.
- Feature engineering: Create and select relevant features from the data to improve model performance.
- Model monitoring and maintenance: Monitor model performance and identify areas for improvement.
- Scalability and performance: Design and build data infrastructure that can handle large volumes of data and support real-time analytics.
- Collaboration: Work closely with data scientists, software engineers, and other stakeholders to achieve common goals.
- Data security and compliance: Ensure data security and compliance with relevant regulations.
- Documentation: Maintain comprehensive documentation for data pipelines and models.

Key Skills:
- Programming languages: Proficiency in Python.
- Generative AI: LLMs.
- Machine learning frameworks: Knowledge of common frameworks.
- Data warehousing: Experience with data warehousing solutions like Redshift or Panoply.
- Cloud computing: Familiarity with cloud platforms like AWS, Azure, or GCP.
- Data structures and algorithms: Strong understanding of data structures and algorithms.
- Statistical analysis: Ability to perform statistical analysis and interpret results.
- Communication and collaboration: Excellent communication and collaboration skills.

Posted 3 weeks ago

Apply

4.0 - 6.0 years

7 - 14 Lacs

Udaipur, Kolkata, Jaipur

Hybrid


Senior Data Engineer

Kadel Labs is a leading IT services company delivering top-quality technology solutions since 2017, focused on enhancing business operations and productivity through tailored, scalable, and future-ready solutions. With deep domain expertise and a commitment to innovation, we help businesses stay ahead of technological trends. As a CMMI Level 3 and ISO 27001:2022 certified company, we ensure best-in-class process maturity and information security, enabling organizations to achieve their digital transformation goals with confidence and efficiency.

Role: Senior Data Engineer
Experience: 4-6 years
Location: Udaipur, Jaipur, Kolkata

Job Description: We are looking for a highly skilled and experienced Data Engineer with 4-6 years of hands-on experience in designing and implementing robust, scalable data pipelines and infrastructure. The ideal candidate will be proficient in SQL and Python and have a strong understanding of modern data engineering practices. You will play a key role in building and optimizing data systems, enabling data accessibility and analytics across the organization, and collaborating closely with cross-functional teams including Data Science, Product, and Engineering.

Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT data pipelines using SQL and Python
- Collaborate with data analysts, data scientists, and product teams to understand data needs
- Optimize queries and data models for performance and reliability
- Integrate data from various sources, including APIs, internal databases, and third-party systems
- Monitor and troubleshoot data pipelines to ensure data quality and integrity
- Document processes, data flows, and system architecture
- Participate in code reviews and contribute to a culture of continuous improvement

Required Skills:
- 4-6 years of experience in data engineering, data architecture, or backend development with a focus on data
- Strong command of SQL for data transformation and performance tuning
- Experience with Python (e.g., pandas, Spark, ADF)
- Solid understanding of ETL/ELT processes and data pipeline orchestration
- Proficiency with RDBMS (e.g., PostgreSQL, MySQL, SQL Server)
- Experience with data warehousing solutions (e.g., Snowflake, Redshift, BigQuery)
- Familiarity with version control (Git), CI/CD workflows, and containerized environments (Docker, Kubernetes)
- Basic programming skills
- Excellent problem-solving skills and a passion for clean, efficient data systems

Preferred Skills:
- Experience with cloud platforms (AWS, Azure, GCP) and services like S3, Glue, Dataflow, etc.
- Exposure to enterprise solutions (e.g., Databricks, Synapse)
- Knowledge of big data technologies (e.g., Spark, Kafka, Hadoop)
- Background in real-time data streaming and event-driven architectures
- Understanding of data governance, security, and compliance best practices
- Prior experience working in an agile development environment

Educational Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.

Visit us:
https://kadellabs.com/
https://in.linkedin.com/company/kadel-labs
https://www.glassdoor.co.in/Overview/Working-at-Kadel-Labs-EI_IE4991279.11,21.htm

Posted 3 weeks ago

Apply

3.0 - 5.0 years

8 - 15 Lacs

Hyderabad

Work from Office


We are looking for an experienced and results-driven Data Engineer to join our growing Data Engineering team. The ideal candidate will be proficient in building scalable, high-performance data transformation pipelines using Snowflake and dbt, and able to work effectively in a consulting setup. In this role, you will be instrumental in ingesting, transforming, and delivering high-quality data to enable data-driven decision-making across the client's organization.

Key Responsibilities:
1. Design and implement scalable ELT pipelines using dbt on Snowflake, following industry-accepted best practices.
2. Build ingestion pipelines from various sources, including relational databases, APIs, cloud storage, and flat files, into Snowflake.
3. Implement data modeling and transformation logic to support a layered architecture (e.g., staging, intermediate, and mart layers, or a medallion architecture) to enable reliable and reusable data assets.
4. Leverage orchestration tools (e.g., Airflow, dbt Cloud, or Azure Data Factory) to schedule and monitor data workflows.
5. Apply dbt best practices: modular SQL development, testing, documentation, and version control.
6. Perform performance optimizations in dbt/Snowflake through clustering, query profiling, materialization, partitioning, and efficient SQL design.
7. Apply CI/CD and Git-based workflows for version-controlled deployments.
8. Contribute to a growing internal knowledge base of dbt macros, conventions, and testing frameworks.
9. Collaborate with multiple stakeholders, such as data analysts, data scientists, and data architects, to understand requirements and deliver clean, validated datasets.
10. Write well-documented, maintainable code, using Git for version control and CI/CD processes.
11. Participate in Agile ceremonies, including sprint planning, stand-ups, and retrospectives.
12. Support consulting engagements through clear documentation, demos, and delivery of client-ready solutions.

Required Qualifications:
- 3 to 5 years of experience in data engineering roles, with 2+ years of hands-on experience in Snowflake and dbt.
- Experience building and deploying dbt models in a production environment.
- Expert-level SQL and a strong understanding of ELT principles.
- Strong understanding of ELT patterns and data modeling (Kimball/dimensional preferred).
- Familiarity with data quality and validation techniques: dbt tests, dbt docs, etc.
- Experience with Git, CI/CD, and deployment workflows in a team setting.
- Familiarity with orchestrating workflows using tools like dbt Cloud, Airflow, or Azure Data Factory.

Core Competencies:
- Data engineering and ELT development: building robust and modular data pipelines using dbt; writing efficient SQL for data transformation and performance tuning in Snowflake; managing environments, sources, and deployment pipelines in dbt.
- Cloud data platform expertise: strong proficiency with Snowflake (warehouse sizing, query profiling, data loading, and performance optimization); experience working with cloud storage (Azure Data Lake, AWS S3, or GCS) for ingestion and external stages.

Technical Toolset:
- Languages & frameworks: Python for data transformation, notebook development, and automation; a strong grasp of SQL for querying and performance tuning.

Best Practices and Standards:
- Knowledge of modern data architecture concepts, including layered architecture (e.g., staging, intermediate, marts, medallion architecture).
- Familiarity with data quality, unit testing (dbt tests), and documentation (dbt docs).

Security & Governance:
- Understanding of access control within Snowflake (RBAC), role hierarchies, and secure data handling.
- Familiarity with data privacy policies (GDPR basics) and encryption at rest and in transit.

Deployment & Monitoring:
- Version control using Git and experience with CI/CD practices in a data context.
- Monitoring and logging of pipeline executions, with alerting on failures.

Soft Skills:
- Ability to present solutions and handle client demos and discussions.
- Work closely with onshore and offshore teams of analysts, data scientists, and architects.
- Ability to document pipelines and transformations clearly.
- Basic Agile/Scrum familiarity: working in sprints and logging tasks.
- Comfort with ambiguity, competing priorities, and a fast-changing client environment.

Education:
- Bachelor's or master's degree in Computer Science, Data Engineering, or a related field.
- Certifications such as Snowflake SnowPro or dbt Certified Developer are a plus.
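
dbt models themselves are authored as SQL files in a dbt project, so as a language-neutral illustration of responsibility 6 above (materialization and clustering on Snowflake), here is a Python sketch using the Snowflake connector to materialize a clustered mart table and inspect its clustering; the account details and table names are placeholders, not the client's.

    import snowflake.connector

    # Placeholder credentials; real values would come from a secrets manager.
    conn = snowflake.connector.connect(
        account="example_account", user="TRANSFORM_USER", password="***",
        warehouse="TRANSFORM_WH", database="ANALYTICS", schema="MARTS",
    )
    cur = conn.cursor()

    # Materialize a mart as a table clustered on a common filter column
    # (roughly what a dbt model with materialized='table' and cluster_by produces).
    cur.execute("""
        CREATE OR REPLACE TABLE fct_orders CLUSTER BY (order_date) AS
        SELECT order_id, customer_id, order_date, amount
        FROM ANALYTICS.STAGING.stg_orders
    """)

    # Inspect how well the clustering key fits the data.
    cur.execute("SELECT SYSTEM$CLUSTERING_INFORMATION('fct_orders', '(order_date)')")
    print(cur.fetchone()[0])

    conn.close()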

Posted 3 weeks ago

Apply

6.0 - 11.0 years

30 - 40 Lacs

Bengaluru

Work from Office


Role & Responsibilities:
- Bachelor's degree preferred, or an equivalent combination of education, training, and experience
- 5+ years of professional experience with SQL, ETL, data modeling, and at least one programming language (e.g., Python, C++, C#, Scala, etc.)
- 3+ years of professional experience with enterprise domains like HR, Finance, or Supply Chain
- 6+ years of professional experience with more than one SQL and relational database, including expertise in Presto, Spark, and MySQL
- Professional experience designing and implementing real-time pipelines (Apache Kafka or similar technologies)
- 5+ years of professional experience in custom ETL design, implementation, and maintenance
- 3+ years of professional experience with data modeling, including expertise in data warehouse design and dimensional modeling
- 5+ years of professional experience working with cloud or on-premises Big Data/MPP analytics platforms (Teradata, AWS Redshift, Google BigQuery, Azure Synapse Analytics, or similar)
- Experience with data quality and validation (using Apache Airflow)
- Experience with anomaly/outlier detection
- Experience with Data Science workflows (Jupyter Notebooks, Bento, or similar tools)
- Experience with Airflow or similar workflow management systems
- Experience querying massive datasets using Spark, Presto, Hive, or similar
- Experience building systems integrations and tooling interfaces, and implementing integrations for ERP systems (Oracle, SAP, Salesforce, etc.)
- Experience in data visualization using Power BI and Tableau
- Proficiency in the Python programming language and Python libraries, with a focus on data engineering and data science applications
- Professional fluency in English required

Posted 3 weeks ago

Apply

4.0 - 8.0 years

10 - 20 Lacs

Kolkata, Gurugram, Bengaluru

Work from Office


Job Opportunity for GCP Data Engineer

Role: Data Engineer
Location: Gurugram/Bangalore/Kolkata (5 days work from office)
Experience: 4+ years

Key Skills:
- Data Analysis / Data Preparation - Expert
- Dataset Creation / Data Visualization - Expert
- Data Quality Management - Advanced
- Data Engineering - Advanced
- Programming / Scripting - Intermediate
- Data Storytelling - Intermediate
- Business Analysis / Requirements Analysis - Intermediate
- Data Dashboards - Foundation
- Business Intelligence Reporting - Foundation
- Database Systems - Foundation
- Agile Methodologies / Decision Support - Foundation

Technical Skills:
• Cloud - GCP - Expert
• Database systems (SQL and NoSQL / BigQuery / DBMS) - Expert
• Data warehousing solutions - Advanced
• ETL tools - Advanced
• Data APIs - Advanced
• Python, Java, Scala, etc. - Intermediate
• Basic understanding of distributed systems - Foundation
• Basic knowledge of algorithms and optimal data structures for analytics - Foundation
• Soft skills and time management - Foundation

Posted 3 weeks ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office


We are looking for India's top 1% Machine Learning Engineers for a unique opportunity to work with industry leaders.

Who can be a part of the community?
We are looking for top-tier Machine Learning Engineers with expertise in building, deploying, and optimizing AI models. If you have experience in this field, this is your chance to collaborate with industry leaders.

What's in it for you?
- Pay above market standards
- Contract-based roles with project timelines of 2-6 months, or freelancing
- Membership in an elite community of professionals who can solve complex AI challenges

Responsibilities:
- Design, optimize, and deploy machine learning models; implement feature engineering and scaling pipelines
- Use deep learning frameworks (TensorFlow, PyTorch) and manage models in production (Docker, Kubernetes)
- Automate workflows; ensure model versioning, logging, and real-time monitoring; comply with security requirements and regulations
- Work with large-scale data, develop feature stores, and implement CI/CD pipelines for model retraining and performance tracking

Required Skills:
- Proficiency in machine learning, deep learning, and data engineering (Spark, Kafka)
- Expertise in MLOps, automation tools (Docker, Kubernetes, Kubeflow, MLflow, TFX), and cloud platforms (AWS, GCP, Azure)
- Strong knowledge of model deployment, monitoring, security, compliance, and responsible AI practices

Nice to Have:
- Experience with A/B testing, Bayesian optimization, and hyperparameter tuning
- Familiarity with multi-cloud ML deployments and generative AI technologies (LLM fine-tuning, FAISS)

Locations: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, India
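
As a small illustration of the model versioning, logging, and monitoring expectations above, here is a hedged MLflow tracking sketch on toy data; the experiment name and metric are assumptions, and a real setup would log to a shared tracking server rather than the local default.

    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier

    # Toy data in place of a real feature-store extract.
    X, y = make_classification(n_samples=500, random_state=0)

    mlflow.set_experiment("defect-model")   # hypothetical experiment name
    with mlflow.start_run():
        model = GradientBoostingClassifier(n_estimators=50).fit(X, y)
        mlflow.log_param("n_estimators", 50)
        mlflow.log_metric("train_accuracy", model.score(X, y))
        mlflow.sklearn.log_model(model, "model")   # versioned artifact for deployment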

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
