1885 Data Engineering Jobs - Page 31

JobPe aggregates listings for easy application access, but you apply directly on the original job portal.

5.0 - 10.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients; perform maintenance, enhancements, and/or development work.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum experience: 5 years
Educational Qualification: BE

Key Responsibilities: Overall 8 years of experience working on data analytics projects. Work on client projects to deliver AWS, PySpark, and Databricks based data engineering and analytics solutions. Build and operate very large data warehouses or data lakes. Optimize ETL by designing, coding, and tuning big data processes using Apache Spark. Build data pipelines and applications to stream and process datasets at low latency. Show efficiency in handling data: tracking data lineage, ensuring data quality, and improving discoverability of data.

Technical Experience: Minimum of 2 years of experience in Databricks engineering solutions on any of the cloud platforms using PySpark. Minimum of 5 years of experience in ETL, Big Data/Hadoop, and data warehouse architecture delivery. Minimum of 3 years of experience in one or more programming languages: Python, Java, Scala. Experience using Airflow for data pipelines in at least one project. 2 years of experience developing CI/CD pipelines using Git, Jenkins, Docker, Kubernetes, shell scripting, and Terraform. Must be able to understand ETL technologies and translate them into cloud-native (AWS, Azure, Google Cloud) tools or PySpark.

Professional Attributes: (1) Should have been involved in data engineering projects from the requirements phase through delivery. (2) Good communication skills to interact with clients and understand requirements. (3) Should be able to work independently and guide the team.

Qualification: BE
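To make the stack above concrete, here is a minimal PySpark batch ETL sketch of the kind this role describes: read raw files from S3, apply basic cleansing, and write partitioned Parquet. The bucket names, paths, and column names are hypothetical, not taken from the posting.

```python
# Minimal PySpark batch ETL sketch: read raw CSV from S3, clean it,
# and write partitioned Parquet. Buckets and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

raw = (spark.read
       .option("header", True)
       .csv("s3://example-raw-bucket/orders/"))

cleaned = (raw
           .dropDuplicates(["order_id"])                       # basic data-quality step
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("amount", F.col("amount").cast("double"))
           .withColumn("order_date", F.to_date("order_ts"))
           .filter(F.col("amount") > 0))

(cleaned.write
 .mode("overwrite")
 .partitionBy("order_date")                                    # partition for downstream reads
 .parquet("s3://example-curated-bucket/orders/"))

spark.stop()
```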

Posted 2 weeks ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Data Engineering
Good-to-have skills: NA
Minimum experience: 3 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. Your typical day will involve collaborating with team members to develop innovative solutions and ensure seamless application functionality.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Develop and implement efficient data pipelines for data processing.
- Optimize data storage and retrieval processes to enhance performance.

Professional & Technical Skills:
- Must-have: proficiency in Data Engineering.
- Strong understanding of ETL processes and data modeling.
- Experience with cloud platforms such as AWS or Azure.
- Knowledge of programming languages like Python or Java.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Data Engineering and Flink; Flink knowledge is a must.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Qualification: 15 years of full-time education

Posted 2 weeks ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Chennai

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Data Engineering
Good-to-have skills: NA
Minimum experience: 3 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications function seamlessly to support organizational goals. You will also participate in testing and refining applications to enhance user experience and efficiency, while staying updated on industry trends and best practices to continuously improve your contributions.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application processes and workflows to ensure clarity and consistency.
- Engage in code reviews and provide constructive feedback to peers to foster a culture of continuous improvement.

Professional & Technical Skills:
- Must-have: proficiency in Data Engineering.
- Strong understanding of data modeling and database design principles.
- Experience with ETL processes and data integration techniques.
- Familiarity with cloud platforms and services related to data engineering.
- Ability to work with big data technologies and frameworks.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Data Engineering.
- This position is based at our Chennai office.
- 15 years of full-time education is required.

Qualification: 15 years of full-time education

Posted 2 weeks ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

Data Engineer (2 positions) - Azure Synapse/ADF, Workiva. Manage and maintain the associated connectors, chains, tables, and queries, making updates as needed when new metrics or requirements are identified. Develop functional and technical requirements for any changes impacting wData (Workiva Data). Configure and unit test any changes impacting wData (connectors, chains, tables, queries). Promote wData changes.

Posted 2 weeks ago

Apply

2.0 - 6.0 years

4 - 8 Lacs

Ahmedabad

Work from Office

ANS Group is looking for a Senior Data Engineer. The job responsibilities of a Senior Data Engineer include:
1. Designing and implementing scalable and reliable data pipelines, data models, and data infrastructure for processing large and complex datasets.
2. Developing and maintaining databases, data warehouses, and data lakes that store and manage the organization's data.
3. Developing and implementing data integration and ETL (Extract, Transform, Load) processes to ensure that data flows smoothly and accurately between different systems and data sources.
4. Ensuring data quality, consistency, and accuracy through data profiling, cleansing, and validation (see the sketch below).
5. Building and maintaining data processing and analytics systems that support business intelligence, machine learning, and other data-driven applications.
6. Optimizing the performance and scalability of data systems and infrastructure to ensure they can handle the organization's growing data needs.

To be a successful Senior Data Engineer, one must have in-depth knowledge of database architecture, data modeling, data integration, and ETL processes. Candidates should also be proficient in programming languages such as Python, Java, or SQL, have experience working with big data technologies like Hadoop, Spark, and NoSQL databases, and bring strong communication and leadership skills.
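As a hedged illustration of item 4 above (profiling, cleansing, and validation), here is a minimal Python sketch using pandas; the file path, column names, and rules are hypothetical.

```python
# Minimal data-profiling/validation sketch with pandas.
# File path and expected columns are hypothetical.
import pandas as pd

df = pd.read_csv("customers.csv")

# Profile: row count, nulls, and duplicate keys.
report = {
    "rows": len(df),
    "null_counts": df.isna().sum().to_dict(),
    "duplicate_ids": int(df["customer_id"].duplicated().sum()),
}

# Cleanse: drop duplicates, normalize text, coerce types.
clean = (df.drop_duplicates(subset=["customer_id"])
           .assign(email=lambda d: d["email"].str.strip().str.lower()))
clean["signup_date"] = pd.to_datetime(clean["signup_date"], errors="coerce")

# Validate: fail fast if a rule is broken.
assert clean["customer_id"].notna().all(), "customer_id must not be null"
print(report)
```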

Posted 2 weeks ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Chennai

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Data Engineering
Good-to-have skills: NA
Minimum experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing solutions, and ensuring that applications function seamlessly to support business operations. You will engage in problem-solving and decision-making processes, contributing to the overall success of projects and initiatives.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and ensure alignment with business objectives.

Professional & Technical Skills:
- Must-have: proficiency in Data Engineering.
- Strong understanding of data pipeline architecture and ETL processes.
- Experience with cloud platforms such as AWS or Azure.
- Familiarity with data warehousing solutions and big data technologies.
- Ability to work with various data storage solutions, including SQL and NoSQL databases.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Data Engineering.
- This position is based at our Chennai office.
- 15 years of full-time education is required.

Qualification: 15 years of full-time education

Posted 2 weeks ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Data Engineering
Good-to-have skills: NA
Minimum experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will be involved in designing, building, and configuring applications to meet business process and application requirements. Your typical day will revolve around creating innovative solutions to address various business needs and ensuring seamless application functionality.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead the development and implementation of data engineering solutions.
- Optimize and maintain data pipelines for optimal performance.
- Collaborate with data scientists and analysts to understand data requirements.

Professional & Technical Skills:
- Must-have: proficiency in Data Engineering.
- Strong understanding of ETL processes.
- Experience with cloud-based data platforms such as AWS or Azure.
- Knowledge of data modeling and database design.
- Experience with big data technologies like Hadoop or Spark.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Data Engineering.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Qualification: 15 years of full-time education

Posted 2 weeks ago

Apply

5.0 - 7.0 years

8 - 14 Lacs

Pune

Work from Office

We are looking for a skilled Data Engineer with strong hands-on experience in ClickHouse, Kubernetes, SQL, Python, and FastAPI, along with a good understanding of PostgreSQL. The ideal candidate will be responsible for building and maintaining efficient data pipelines, optimizing query performance, and developing APIs to support scalable data services.

Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines and ETL processes.
- Develop and optimize ClickHouse databases for high-performance analytics.
- Create RESTful APIs using FastAPI to expose data services.
- Work with Kubernetes for container orchestration and deployment of data services.
- Write complex SQL queries to extract, transform, and analyze data from PostgreSQL and ClickHouse.
- Collaborate with data scientists, analysts, and backend teams to support data needs and ensure data quality.
- Monitor, troubleshoot, and improve performance of data infrastructure.

Required skills:
- Strong experience in ClickHouse: data modeling, query optimization, performance tuning.
- Expertise in SQL, including complex joins, window functions, and optimization.
- Proficiency in Python, especially for data processing (Pandas, NumPy) and scripting.
- Experience with FastAPI for creating lightweight APIs and microservices.
- Hands-on experience with PostgreSQL: schema design, indexing, and performance.
- Solid knowledge of Kubernetes: managing containers, deployments, and scaling.
- Understanding of software engineering best practices (CI/CD, version control, testing).
- Experience with cloud platforms like AWS, GCP, or Azure.
- Knowledge of data warehousing and distributed data systems.
- Familiarity with Docker, Helm, and monitoring tools like Prometheus/Grafana.
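The posting pairs FastAPI with ClickHouse; below is a minimal sketch assuming the `clickhouse-driver` Python package and a reachable ClickHouse server. The host, database, table, and columns are hypothetical.

```python
# Minimal FastAPI + ClickHouse sketch, assuming the `clickhouse-driver`
# package and a reachable ClickHouse server; table/columns are hypothetical.
from fastapi import FastAPI, HTTPException
from clickhouse_driver import Client

app = FastAPI()
ch = Client(host="localhost")  # hypothetical connection settings

@app.get("/daily-events/{event_type}")
def daily_events(event_type: str):
    # Parameterized query avoids string interpolation and injection.
    rows = ch.execute(
        "SELECT toDate(ts) AS day, count() AS events "
        "FROM analytics.events WHERE event_type = %(t)s "
        "GROUP BY day ORDER BY day",
        {"t": event_type},
    )
    if not rows:
        raise HTTPException(status_code=404, detail="no data")
    return [{"day": str(day), "events": events} for day, events in rows]
```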

Posted 2 weeks ago

Apply

4.0 - 9.0 years

15 - 30 Lacs

Bengaluru

Hybrid

Qualifications we are looking for:
- Master's or Bachelor's degree in Computer Science, Electrical Engineering, Information Systems, or another technical discipline; advanced degree preferred.
- Minimum of 7+ years of software development experience (with a concentration in data-centric initiatives), with demonstrated expertise in leveraging standard development best-practice methodologies.
- Minimum of 4+ years of experience in Hadoop using core Java programming, Spark, Scala, Hive, and Go.
- Expertise in the object-oriented programming language Java.
- Experience using CI/CD processes, version control, and bug-tracking tools.
- Experience handling very large data volumes in real-time and batch mode.
- Experience with automation of job execution and validation.
- Strong knowledge of database concepts.
- Strong team player.
- Strong communication skills, with a proven ability to present complex ideas and document them in a clear and concise way.
- Quick learner; self-starter, detailed and in-depth.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Pune

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Data Engineering
Good-to-have skills: NA
Minimum experience: 3 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications function seamlessly to support organizational goals. You will also participate in testing and refining applications to enhance user experience and efficiency, while staying updated on industry trends and best practices to continuously improve your contributions.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application processes and workflows to ensure clarity and consistency.
- Engage in code reviews and provide constructive feedback to peers to foster a culture of continuous improvement.

Professional & Technical Skills:
- Must-have: proficiency in Data Engineering.
- Strong understanding of data modeling and ETL processes.
- Experience with cloud platforms such as AWS or Azure for data storage and processing.
- Familiarity with programming languages such as Python or Java for application development.
- Knowledge of database management systems, including SQL and NoSQL databases.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Data Engineering and Flink; Flink knowledge is a must.
- This position is based at our Pune office.
- 15 years of full-time education is required.

Qualification: 15 years of full-time education

Posted 2 weeks ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience: 7.5 years
Educational Qualification: 15 years of full-time education

Summary: The Data Engineering Sr. Advisor demonstrates expertise in data engineering technologies with a focus on engineering, innovation, strategic influence, and a product mindset. This individual will act as a key contributor on the team to design, build, test, and deliver large-scale software applications, systems, platforms, services, or technologies in the data engineering space, and will have the opportunity to work directly with partner IT and business teams, owning and driving major deliverables across all aspects of software delivery.

The candidate will play a key role in automating processes on Databricks and AWS, collaborating with business and technology partners to gather requirements, develop, and implement solutions. The individual must have strong analytical and technical skills coupled with the ability to positively influence the delivery of data engineering products. The team demands an innovation-driven, cloud-first, self-service-first, and automation-first mindset coupled with technical excellence. The applicant will work with internal and external stakeholders and customers to build solutions as part of Enterprise Data Engineering, and will need to demonstrate very strong technical and communication skills.

Delivery: Intermediate delivery skills, including the ability to deliver work at a steady, predictable pace to achieve commitments, decompose work assignments into small batch releases, and contribute to trade-off and negotiation discussions.
Domain Expertise: A demonstrated track record of domain expertise, including the ability to understand the technical concepts necessary to do the job effectively; willingness, cooperation, and concern for business issues; and in-depth knowledge of the immediate systems worked on.
Problem Solving: Proven problem-solving and debugging skills, allowing you to determine the source of issues in unfamiliar code or systems; the ability to recognize and solve repetitive problems rather than working around them; recognizing mistakes and using them as learning opportunities; and breaking down large problems into smaller, more manageable ones.

Roles & Responsibilities:
- Deliver business needs end to end, from requirements through development into production.
- Through a hands-on engineering approach in the Databricks environment, deliver data engineering toolchains, platform capabilities, and reusable patterns.
- Follow software engineering best practices with an automation-first approach and a continuous learning and improvement mindset.
- Ensure adherence to enterprise architecture direction and architectural standards.
- Collaborate in a high-performing team environment, with an ability to influence and be influenced by others.

Experience Required:
- More than 12 years of experience in software engineering, building data engineering pipelines, middleware and API development, and automation.
- More than 3 years of experience in Databricks within an AWS environment.
- Data engineering experience.

Experience Desired:
- Expertise in Agile software development principles and patterns.
- Expertise in building streaming, batch, and event-driven architectures and data pipelines.

Primary Skills:
- Cloud-based security principles and protocols such as OAuth2, JWT, data encryption, hashing, and secret management.
- Expertise in big data technologies such as Spark, Hadoop, Databricks, Snowflake, EMR, and Glue.
- Good understanding of Kafka, Kafka Streams, Spark Structured Streaming, and configuration-driven data transformation and curation.
- Expertise in building cloud-native microservices, containers, Kubernetes, and platform-as-a-service technologies such as OpenShift and Cloud Foundry.
- Experience in multi-cloud software-as-a-service products such as Databricks and Snowflake.
- Experience in Infrastructure-as-Code (IaC) tools such as Terraform and AWS CloudFormation.
- Experience in messaging systems such as Apache ActiveMQ, WebSphere MQ, Apache Artemis, Kafka, and AWS SNS.
- Experience in API and microservices stacks such as Spring Boot and Quarkus.
- Expertise in cloud technologies such as AWS Glue, Lambda, S3, Elasticsearch, API Gateway, and CloudFront.
- Experience with one or more of the following programming and scripting languages: Python, Scala, JVM-based languages, or JavaScript, and the ability to pick up new languages.
- Experience in building CI/CD pipelines using Jenkins or GitHub Actions.
- Strong expertise with source code management and its best practices.
- Proficiency in self-testing of applications, unit testing, use of mock frameworks, and test-driven development (TDD).
- Knowledge of the Behavior-Driven Development (BDD) approach.

Additional Skills:
- Ability to perform detailed analysis of business problems and technical environments.
- Strong oral and written communication skills.
- Ability to think strategically, implement iteratively, and estimate the financial impact of design/architecture alternatives.
- Continuous focus on ongoing learning and development.

Qualification: 15 years of full-time education

Posted 2 weeks ago

Apply

8.0 - 11.0 years

15 - 19 Lacs

Kolkata

Work from Office

Project Role: Technology Architect
Project Role Description: Review and integrate all application requirements, including functional, security, integration, performance, quality, and operations requirements. Review and integrate the technical architecture requirements. Provide input into final decisions regarding hardware, network products, system software, and security.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Cloud Data Architecture
Minimum experience: 5 years
Educational Qualification: BE or MCA

Summary: As a Technology Architect, you will be responsible for reviewing and integrating all application requirements, including functional, security, integration, performance, quality, and operations requirements. Your typical day will involve reviewing and integrating technical architecture requirements, providing input into final decisions regarding hardware, network products, system software, and security, and utilizing the Databricks Unified Data Analytics Platform to deliver impactful data-driven solutions.

Roles & Responsibilities:
- 6 or more years of experience implementing data ingestion pipelines from multiple sources and creating end-to-end data pipelines on the Databricks platform.
- 2 or more years of experience using Python, PySpark, or Scala.
- Experience with Databricks on the cloud; experience in any of AWS, Azure, or GCP covering ETL, data engineering, data cleansing, and insertion into a data warehouse.
- Must-have skills: Databricks, Cloud Data Architecture, Python programming, Data Engineering.

Professional Attributes: Excellent writing, communication, and presentation skills. Eagerness to learn and develop on an ongoing basis. Excellent client-facing and interpersonal skills.

Qualification: BE or MCA
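As a hedged sketch of the Databricks ingestion work this role centers on, the snippet below uses Auto Loader (the Databricks-specific `cloudFiles` source, with `spark` being the session Databricks provides); the storage paths and target table are hypothetical.

```python
# Minimal Databricks ingestion sketch using Auto Loader (the Databricks-
# specific `cloudFiles` source); paths and table name are hypothetical.
# `spark` is the ambient SparkSession inside a Databricks notebook/job.
stream = (spark.readStream
          .format("cloudFiles")
          .option("cloudFiles.format", "json")
          .option("cloudFiles.schemaLocation", "/mnt/schemas/events")
          .load("/mnt/raw/events"))

(stream.writeStream
 .option("checkpointLocation", "/mnt/checkpoints/events")
 .trigger(availableNow=True)          # process available files, then stop
 .toTable("bronze.events"))
```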

Posted 2 weeks ago

Apply

8.0 - 13.0 years

15 - 19 Lacs

Coimbatore

Work from Office

Project Role: Technology Architect
Project Role Description: Review and integrate all application requirements, including functional, security, integration, performance, quality, and operations requirements. Review and integrate the technical architecture requirements. Provide input into final decisions regarding hardware, network products, system software, and security.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Cloud Data Architecture
Minimum experience: 7.5 years
Educational Qualification: BE or MCA

Summary: As a Technology Architect, you will be responsible for reviewing and integrating all application requirements, including functional, security, integration, performance, quality, and operations requirements. Your typical day will involve reviewing and integrating technical architecture requirements, providing input into final decisions regarding hardware, network products, system software, and security, and utilizing the Databricks Unified Data Analytics Platform to deliver impactful data-driven solutions.

Roles & Responsibilities:
- A minimum of 8 years of experience with the Databricks Unified Data Analytics Platform.
- A strong educational background in technology and information architectures, along with a proven track record of delivering impactful data-driven solutions.
- Strong requirement analysis and technical solutioning skills in data and analytics.
- Client-facing role in terms of running solution workshops and client visits; has handled large RFP pursuits and managed multiple stakeholders.

Technical Experience:
- 10 or more years of experience implementing data ingestion pipelines from multiple sources and creating end-to-end data pipelines on the Databricks platform.
- 3 or more years of experience using Python, PySpark, or Scala.
- Experience with Databricks on the cloud; experience in any of AWS, Azure, or GCP covering ETL, data engineering, data cleansing, and insertion into a data warehouse.
- Must-have skills: Databricks, Cloud Data Architecture, Python programming, Data Engineering.

Professional Attributes: Excellent writing, communication, and presentation skills. Eagerness to learn and develop on an ongoing basis. Excellent client-facing and interpersonal skills.

Qualification: BE or MCA

Posted 2 weeks ago

Apply

5.0 - 7.0 years

8 - 14 Lacs

Mumbai, Any Location

Work from Office

We are looking for a skilled Data Engineer with strong hands-on experience in ClickHouse, Kubernetes, SQL, Python, and FastAPI, along with a good understanding of PostgreSQL. The ideal candidate will be responsible for building and maintaining efficient data pipelines, optimizing query performance, and developing APIs to support scalable data services.

Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines and ETL processes.
- Develop and optimize ClickHouse databases for high-performance analytics.
- Create RESTful APIs using FastAPI to expose data services.
- Work with Kubernetes for container orchestration and deployment of data services.
- Write complex SQL queries to extract, transform, and analyze data from PostgreSQL and ClickHouse.
- Collaborate with data scientists, analysts, and backend teams to support data needs and ensure data quality.
- Monitor, troubleshoot, and improve performance of data infrastructure.

Required skills:
- Strong experience in ClickHouse: data modeling, query optimization, performance tuning.
- Expertise in SQL, including complex joins, window functions, and optimization.
- Proficiency in Python, especially for data processing (Pandas, NumPy) and scripting.
- Experience with FastAPI for creating lightweight APIs and microservices.
- Hands-on experience with PostgreSQL: schema design, indexing, and performance.
- Solid knowledge of Kubernetes: managing containers, deployments, and scaling.
- Understanding of software engineering best practices (CI/CD, version control, testing).
- Experience with cloud platforms like AWS, GCP, or Azure.
- Knowledge of data warehousing and distributed data systems.
- Familiarity with Docker, Helm, and monitoring tools like Prometheus/Grafana.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

16 - 20 Lacs

Hyderabad

Work from Office

We are hiring a Data Engineer for a US-based IT company in Hyderabad. Candidates with a minimum of 5 years of experience in data engineering can apply. This job is a 1-year contract only.

Job Title: Data Engineer
Location: Hyderabad
CTC: Up to 20 LPA
Experience: 5+ years

Job Overview: We are looking for a seasoned Senior Data Engineer with deep hands-on experience in Talend and IBM DataStage to join our growing enterprise data team. This role will focus on designing and optimizing complex data integration solutions that support enterprise-wide analytics, reporting, and compliance initiatives. In this senior-level position, you will collaborate with data architects, analysts, and key stakeholders to facilitate large-scale data movement, enhance data quality, and uphold governance and security protocols.

Key Responsibilities:
- Develop, maintain, and enhance scalable ETL pipelines using Talend and IBM DataStage.
- Partner with data architects and analysts to deliver efficient and reliable data integration solutions.
- Review and optimize existing ETL workflows for performance, scalability, and reliability.
- Consolidate data from multiple sources, both structured and unstructured, into data lakes and enterprise platforms.
- Implement rigorous data validation and quality assurance procedures to ensure data accuracy and integrity.
- Adhere to best practices for ETL development, including source control and automated deployment.
- Maintain clear and comprehensive documentation of data processes, mappings, and transformation rules.
- Support enterprise initiatives around data migration, modernization, and cloud transformation.
- Mentor junior engineers and participate in code reviews and team learning sessions.

Required Qualifications:
- Minimum 5 years of experience in data engineering or ETL development.
- Proficiency with Talend (Open Studio and/or Talend Cloud) and IBM DataStage.
- Strong skills in SQL, data profiling, and performance tuning.
- Experience handling large datasets and complex data workflows.
- Solid understanding of data warehousing, data modeling, and data lake architecture.
- Familiarity with version control systems (e.g., Git) and CI/CD pipelines.
- Strong analytical and troubleshooting skills.
- Effective verbal and written communication, with strong documentation habits.

Preferred Qualifications:
- Prior experience in banking or financial services.
- Exposure to cloud platforms such as AWS, Azure, or Google Cloud.
- Knowledge of data governance tools (e.g., Collibra, Alation).
- Awareness of data privacy regulations (e.g., GDPR, CCPA).
- Experience working in Agile/Scrum environments.

For further assistance, contact/WhatsApp 9354909518 or write to priya@gist.org.in

Posted 2 weeks ago

Apply

5.0 - 7.0 years

8 - 14 Lacs

Ahmedabad

Work from Office

We are looking for a skilled Data Engineer with strong hands-on experience in ClickHouse, Kubernetes, SQL, Python, and FastAPI, along with a good understanding of PostgreSQL. The ideal candidate will be responsible for building and maintaining efficient data pipelines, optimizing query performance, and developing APIs to support scalable data services.

Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines and ETL processes.
- Develop and optimize ClickHouse databases for high-performance analytics.
- Create RESTful APIs using FastAPI to expose data services.
- Work with Kubernetes for container orchestration and deployment of data services.
- Write complex SQL queries to extract, transform, and analyze data from PostgreSQL and ClickHouse.
- Collaborate with data scientists, analysts, and backend teams to support data needs and ensure data quality.
- Monitor, troubleshoot, and improve performance of data infrastructure.

Required skills:
- Strong experience in ClickHouse: data modeling, query optimization, performance tuning.
- Expertise in SQL, including complex joins, window functions, and optimization.
- Proficiency in Python, especially for data processing (Pandas, NumPy) and scripting.
- Experience with FastAPI for creating lightweight APIs and microservices.
- Hands-on experience with PostgreSQL: schema design, indexing, and performance.
- Solid knowledge of Kubernetes: managing containers, deployments, and scaling.
- Understanding of software engineering best practices (CI/CD, version control, testing).
- Experience with cloud platforms like AWS, GCP, or Azure.
- Knowledge of data warehousing and distributed data systems.
- Familiarity with Docker, Helm, and monitoring tools like Prometheus/Grafana.

Posted 2 weeks ago

Apply

5.0 - 7.0 years

8 - 14 Lacs

Nagpur

Work from Office

We are looking for a skilled Data Engineer with strong hands-on experience in ClickHouse, Kubernetes, SQL, Python, and FastAPI, along with a good understanding of PostgreSQL. The ideal candidate will be responsible for building and maintaining efficient data pipelines, optimizing query performance, and developing APIs to support scalable data services.

Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines and ETL processes.
- Develop and optimize ClickHouse databases for high-performance analytics.
- Create RESTful APIs using FastAPI to expose data services.
- Work with Kubernetes for container orchestration and deployment of data services.
- Write complex SQL queries to extract, transform, and analyze data from PostgreSQL and ClickHouse.
- Collaborate with data scientists, analysts, and backend teams to support data needs and ensure data quality.
- Monitor, troubleshoot, and improve performance of data infrastructure.

Required skills:
- Strong experience in ClickHouse: data modeling, query optimization, performance tuning.
- Expertise in SQL, including complex joins, window functions, and optimization.
- Proficiency in Python, especially for data processing (Pandas, NumPy) and scripting.
- Experience with FastAPI for creating lightweight APIs and microservices.
- Hands-on experience with PostgreSQL: schema design, indexing, and performance.
- Solid knowledge of Kubernetes: managing containers, deployments, and scaling.
- Understanding of software engineering best practices (CI/CD, version control, testing).
- Experience with cloud platforms like AWS, GCP, or Azure.
- Knowledge of data warehousing and distributed data systems.
- Familiarity with Docker, Helm, and monitoring tools like Prometheus/Grafana.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

10 - 15 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

About Us: KPI Partners is a leading provider of data analytics and performance management solutions, dedicated to helping organizations harness the power of their data to drive business success. Our team of experts is at the forefront of the data revolution, delivering innovative solutions to our clients. We are currently seeking a talented and experienced Senior Developer / Lead Data Engineer with expertise in Incorta to join our dynamic team.

Job Description: As a Senior Developer / Lead Data Engineer at KPI Partners, you will play a critical role in designing, developing, and implementing data solutions using Incorta. You will work closely with cross-functional teams to understand data requirements, build and optimize data pipelines, and ensure that our data integration processes are efficient and effective. This position requires strong analytical skills, proficiency in Incorta, and a passion for leveraging data to drive business insights.

Key Responsibilities:
- Design and develop scalable data integration solutions using Incorta.
- Collaborate with business stakeholders to gather data requirements and translate them into technical specifications.
- Create and optimize data pipelines to ensure high data quality and availability.
- Perform data modeling, ETL processes, and data engineering activities to support analytics initiatives.
- Troubleshoot and resolve data-related issues across various systems and environments.
- Mentor and guide junior developers and data engineers, fostering a culture of learning and collaboration.
- Stay updated on industry trends, best practices, and emerging technologies related to data engineering and analytics.
- Work with the implementation team to ensure smooth deployment of solutions and provide ongoing support.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field.
- 5+ years of experience in data engineering or related roles with a strong focus on Incorta.
- Expertise in Incorta and its features, along with experience in data modeling and ETL processes.
- Proficiency in SQL and experience with relational databases (e.g., MySQL, Oracle, SQL Server).
- Strong analytical and problem-solving skills, with the ability to work with complex data sets.
- Excellent communication and collaboration skills to work effectively in a team-oriented environment.
- Familiarity with cloud platforms (e.g., AWS, Azure) and data visualization tools is a plus.
- Experience with programming languages such as Python, Java, or Scala is advantageous.

Why Join KPI Partners?
- Opportunity to work with a talented and passionate team in a fast-paced environment.
- Competitive salary and benefits package.
- Continuous learning and professional development opportunities.
- A collaborative and inclusive workplace culture that values diversity and innovation.

KPI Partners is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. Join us at KPI Partners and help us unlock the power of data for our clients!

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Pune, Chennai, Bengaluru

Hybrid

Our client is a global IT service and consulting organization.

Experience: 5+ years
Skill: Apache Spark
Locations: Bangalore, Hyderabad, Pune, Chennai, Coimbatore, Gr. Noida

- Excellent knowledge of Spark; the professional must have a thorough understanding of the Spark framework, performance tuning, etc.
- Excellent knowledge and hands-on experience of at least 4+ years in Scala or PySpark.
- Excellent knowledge of the Hadoop ecosystem; knowledge of Hive is mandatory.
- Strong Unix and shell scripting skills.
- Excellent interpersonal skills and, for experienced candidates, excellent leadership skills.
- Good knowledge of any of the CSPs (Azure, AWS, or GCP) is mandatory; Azure certifications are an additional plus.
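To illustrate the Spark performance-tuning knowledge the listing emphasizes, here is a minimal PySpark sketch of two common techniques, a broadcast join and shuffle-partition sizing; the paths, columns, and partition count are hypothetical.

```python
# Minimal Spark tuning sketch: broadcast a small dimension table to avoid a
# shuffle join, and right-size shuffle partitions. Paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("tuning_demo")
         .config("spark.sql.shuffle.partitions", "200")  # tune to data volume
         .getOrCreate())

facts = spark.read.parquet("/data/sales")    # large fact table
dims = spark.read.parquet("/data/stores")    # small dimension table

# Broadcast hint keeps the small side in memory on every executor,
# turning a shuffle join into a map-side join.
joined = facts.join(F.broadcast(dims), on="store_id", how="left")

# Repartition by the grouping key before a wide aggregation.
result = (joined.repartition("store_id")
          .groupBy("store_id")
          .agg(F.sum("amount").alias("total_sales")))

result.write.mode("overwrite").parquet("/data/sales_by_store")
```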

Posted 2 weeks ago

Apply

3.0 - 6.0 years

6 - 10 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

About KPI Partners: KPI Partners is a leading provider of data analytics solutions, dedicated to helping organizations transform data into actionable insights. Our innovative approach combines advanced technology with expert consulting, allowing businesses to leverage their data for improved performance and decision-making.

Job Description: We are seeking a skilled and motivated Data Engineer with experience in Databricks to join our dynamic team. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and data processing solutions that support our analytics initiatives. You will collaborate closely with data scientists, analysts, and other engineers to ensure the consistent flow of high-quality data across our platforms.

Key skills: Python, PySpark, Databricks, ETL, Cloud (AWS, Azure, or GCP)

Key Responsibilities:
- Develop, construct, test, and maintain data architectures (e.g., large-scale data processing systems) in Databricks.
- Design and implement ETL (Extract, Transform, Load) processes to move and transform data from various sources to target systems.
- Collaborate with data scientists and analysts to understand data requirements and design appropriate data models and structures.
- Optimize data storage and retrieval for performance and efficiency.
- Monitor and troubleshoot data pipelines to ensure reliability and performance.
- Engage in data quality assessments, validation, and troubleshooting of data issues.
- Stay current with emerging technologies and best practices in data engineering and analytics.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
- Proven experience as a Data Engineer or in a similar role, with hands-on experience in Databricks.
- Strong proficiency in SQL and programming languages such as Python or Scala.
- Experience with cloud platforms (AWS, Azure, or GCP) and related technologies.
- Familiarity with data warehousing concepts and data modeling techniques.
- Knowledge of data integration tools and ETL frameworks.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.

Why Join KPI Partners?
- Be part of a forward-thinking team that values innovation and collaboration.
- Opportunity to work on exciting projects across diverse industries.
- Continuous learning and professional development opportunities.
- Competitive salary and benefits package.
- Flexible work environment with hybrid work options.

If you are passionate about data engineering and excited about using Databricks to drive impactful insights, we would love to hear from you! KPI Partners is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

18 - 22 Lacs

Gurugram

Hybrid

Senior Data Engineer - Python, PySpark, AWS - 5+ years, Gurgaon

Summary: An excellent opportunity for someone with a minimum of five years of experience and expertise in building data pipelines. Experience in Python, PySpark, and AWS is required.

Location: Gurgaon (Hybrid)

Your future employer: One of the largest insurance providers.

Responsibilities:
- Design, develop, and maintain large-scale data pipelines that can handle large datasets from multiple sources.
- Perform real-time data replication and batch processing of data using distributed computing platforms like Spark and Kafka.
- Optimize the performance of data processing jobs and ensure system scalability and reliability.
- Collaborate with DevOps teams to manage infrastructure, including cloud environments like AWS.
- Collaborate with data scientists, analysts, and business stakeholders to develop tools and platforms that enable advanced analytics and reporting.

Requirements:
- Hands-on experience with AWS services such as S3, DMS, Lambda, EMR, Glue, Redshift, RDS (Postgres), Athena, Kinesis, etc.
- Expertise in data modeling and knowledge of modern file and table formats.
- Proficiency in programming languages such as Python, PySpark, and SQL/PLSQL for implementing data pipelines and ETL processes.
- Experience in data architecting or deploying cloud/virtualization solutions (such as data lake, EDW, mart) in the enterprise.
- Cloud/hybrid-cloud (preferably AWS) solution experience for data strategy across data lake, BI, and analytics.

What is in it for you:
- A stimulating working environment with equal employment opportunities.
- Growing your skills while working with industry leaders and top brands.
- A meritocratic culture with great career progression.

Reach us: If you feel that you are the right fit for the role, please share your updated CV at randhawa.harmeen@crescendogroup.in

Disclaimer: Crescendo Global specializes in senior to C-level niche recruitment. We are passionate about empowering job seekers and employers with an engaging, memorable job search and leadership hiring experience. Crescendo Global does not discriminate on the basis of race, religion, color, origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
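A minimal sketch of the real-time side of this role: reading a Kafka topic with Spark Structured Streaming and landing it on S3. It assumes the spark-sql-kafka package is on the classpath; the broker, topic, schema, and paths are hypothetical.

```python
# Minimal Spark Structured Streaming sketch: Kafka topic -> parsed JSON -> S3.
# Broker, topic, schema, and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka_ingest").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

query = (events.writeStream
         .format("parquet")
         .option("path", "s3://example-bucket/events/")
         .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
         .start())
query.awaitTermination()
```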

Posted 2 weeks ago

Apply

8.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

What you'll be doing:
- Assist in developing machine learning models based on project requirements.
- Work with datasets by preprocessing, selecting appropriate data representations, and ensuring data quality.
- Perform statistical analysis and fine-tuning using test results.
- Support training and retraining of ML systems as needed.
- Help build data pipelines for collecting and processing data efficiently.
- Follow coding and quality standards while developing AI/ML solutions.
- Contribute to frameworks that help operationalize AI models.

What we seek in you:
- 8+ years of experience in the IT industry.
- Strong programming skills in languages like Python.
- Hands-on experience with at least one cloud (GCP preferred).
- Experience working with Docker.
- Environment management (e.g., venv, pip, poetry).
- Experience with orchestrators like Vertex AI Pipelines, Airflow, etc. (see the DAG sketch below).
- Understanding of the full end-to-end ML cycle.
- Data engineering and feature engineering techniques.
- Experience with ML modelling and evaluation metrics.
- Experience with TensorFlow, PyTorch, or another framework.
- Experience with model monitoring.
- Advanced SQL knowledge.
- Awareness of streaming concepts like windowing, late arrival, triggers, etc.
- Storage: Cloud SQL, Cloud Storage, Cloud Bigtable, BigQuery, Cloud Spanner, Cloud Datastore, vector databases.
- Ingest: Pub/Sub, Cloud Functions, App Engine, Kubernetes Engine, Kafka, microservices.
- Scheduling: Cloud Composer, Airflow.
- Processing: Cloud Dataproc, Cloud Dataflow, Apache Spark, Apache Flink.
- CI/CD: Bitbucket + Jenkins / GitLab. Infrastructure as code: Terraform.

Life at Next: At our core, we're driven by the mission of tailoring growth for our customers by enabling them to transform their aspirations into tangible outcomes. We're dedicated to empowering them to shape their futures and achieve ambitious goals. To fulfil this commitment, we foster a culture defined by agility, innovation, and an unwavering commitment to progress. Our organizational framework is both streamlined and vibrant, characterized by a hands-on leadership style that prioritizes results and fosters growth.

Perks of working with us:
- Clear objectives to ensure alignment with our mission, fostering your meaningful contribution.
- Abundant opportunities for engagement with customers, product managers, and leadership.
- Guidance along progressive paths, with insightful feedback from managers through ongoing feedforward sessions.
- Cultivate and leverage robust connections within diverse communities of interest.
- Choose your mentor to navigate your current endeavors and steer your future trajectory.
- Continuous learning and upskilling opportunities through Nexversity.
- Flexibility to explore various functions, develop new skills, and adapt to emerging technologies.
- A hybrid work model promoting work-life balance.
- Comprehensive family health insurance coverage, prioritizing the well-being of your loved ones.
- Accelerated career paths to actualize your professional aspirations.

Who we are: We enable high-growth enterprises to build hyper-personalized solutions that transform their vision into reality. With a keen eye for detail, we apply creativity, embrace new technology, and harness the power of data and AI to co-create solutions tailor-made to meet the unique needs of our customers. Join our passionate team and tailor your growth with us!
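The orchestration requirement above can be pictured with a minimal Airflow DAG sketch (assuming Airflow 2.4+ for the `schedule` parameter); the DAG id, task names, and callables are hypothetical placeholders.

```python
# Minimal Airflow DAG sketch for a daily training-data refresh; the DAG id,
# task names, and processing functions are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_features():
    print("pulling raw data and computing features")  # placeholder

def retrain_model():
    print("retraining and registering the model")     # placeholder

with DAG(
    dag_id="ml_refresh",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",        # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    features = PythonOperator(task_id="extract_features",
                              python_callable=extract_features)
    retrain = PythonOperator(task_id="retrain_model",
                             python_callable=retrain_model)
    features >> retrain       # retrain only after features are ready
```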

Posted 2 weeks ago

Apply

3.0 - 5.0 years

5 - 8 Lacs

Bengaluru

Work from Office

What you'll be doing:
- Assist in developing machine learning models based on project requirements.
- Work with datasets by preprocessing, selecting appropriate data representations, and ensuring data quality.
- Perform statistical analysis and fine-tuning using test results.
- Support training and retraining of ML systems as needed.
- Help build data pipelines for collecting and processing data efficiently.
- Follow coding and quality standards while developing AI/ML solutions.
- Contribute to frameworks that help operationalize AI models.

What we seek in you:
- Strong programming skills in languages like Python and Java.
- Hands-on experience with at least one cloud (GCP preferred).
- Experience working with Docker.
- Environment management (e.g., venv, pip, poetry).
- Experience with orchestrators like Vertex AI Pipelines, Airflow, etc.
- Understanding of the full end-to-end ML cycle.
- Data engineering and feature engineering techniques.
- Experience with ML modelling and evaluation metrics.
- Experience with TensorFlow, PyTorch, or another framework.
- Experience with model monitoring.
- Advanced SQL knowledge.
- Awareness of streaming concepts like windowing, late arrival, triggers, etc.
- Storage: Cloud SQL, Cloud Storage, Cloud Bigtable, BigQuery, Cloud Spanner, Cloud Datastore, vector databases.
- Ingest: Pub/Sub, Cloud Functions, App Engine, Kubernetes Engine, Kafka, microservices.
- Scheduling: Cloud Composer, Airflow.
- Processing: Cloud Dataproc, Cloud Dataflow, Apache Spark, Apache Flink.
- CI/CD: Bitbucket + Jenkins / GitLab. Infrastructure as code: Terraform.

Life at Next: At our core, we're driven by the mission of tailoring growth for our customers by enabling them to transform their aspirations into tangible outcomes. We're dedicated to empowering them to shape their futures and achieve ambitious goals. To fulfil this commitment, we foster a culture defined by agility, innovation, and an unwavering commitment to progress. Our organizational framework is both streamlined and vibrant, characterized by a hands-on leadership style that prioritizes results and fosters growth.

Perks of working with us:
- Clear objectives to ensure alignment with our mission, fostering your meaningful contribution.
- Abundant opportunities for engagement with customers, product managers, and leadership.
- Guidance along progressive paths, with insightful feedback from managers through ongoing feedforward sessions.
- Cultivate and leverage robust connections within diverse communities of interest.
- Choose your mentor to navigate your current endeavors and steer your future trajectory.
- Continuous learning and upskilling opportunities through Nexversity.
- Flexibility to explore various functions, develop new skills, and adapt to emerging technologies.
- A hybrid work model promoting work-life balance.
- Comprehensive family health insurance coverage, prioritizing the well-being of your loved ones.
- Accelerated career paths to actualize your professional aspirations.

Who we are: We enable high-growth enterprises to build hyper-personalized solutions that transform their vision into reality. With a keen eye for detail, we apply creativity, embrace new technology, and harness the power of data and AI to co-create solutions tailor-made to meet the unique needs of our customers. Join our passionate team and tailor your growth with us!

Posted 2 weeks ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Job Summary: Synechron is seeking a highly skilled Senior Data Engineer specializing in T-SQL and SSIS to lead and advance our data integration and warehousing initiatives. In this role, you will design, develop, and optimize complex ETL processes and database solutions to support enterprise data needs. Your expertise will enable efficient data flow, ensure data integrity, and facilitate actionable insights, contributing to our organization's commitment to data-driven decision-making and operational excellence.

Software Requirements:
Required skills:
- Proficiency in SQL Server; advanced T-SQL querying, stored procedures, functions, and scripting.
- Expertise in SQL Server Integration Services (SSIS), including design, deployment, and troubleshooting.
- Deep understanding of data warehousing concepts, schema design, and ETL best practices.
- Experience in performance tuning, query optimization, and troubleshooting SQL and SSIS packages.
- Hands-on experience with database security, compliance, and data masking standards.
- Familiarity with source system analysis and complex data migration.
Preferred skills:
- Experience with cloud platforms (Azure Data Factory, AWS Glue).
- Knowledge of Azure SQL, Amazon RDS, or other cloud-based data services.
- Experience with scripting languages like PowerShell or Python for automation.

Overall Responsibilities:
- Design, develop, and maintain robust ETL workflows using SSIS to meet diverse data integration requirements.
- Write optimized, scalable T-SQL queries, stored procedures, and functions aligned with data quality standards.
- Develop and manage data warehouse schemas, tables, and relationships to support reporting and analytics.
- Ensure data accuracy, security, and compliance across all data processes.
- Collaborate closely with business analysts and other stakeholders to understand data requirements and translate them into technical solutions.
- Troubleshoot and resolve performance issues in SQL Server and SSIS packages.
- Document data workflows, schemas, and data mappings to support ongoing maintenance and audits.
- Participate in code reviews, performance tuning, and implementing best practices for ETL and database management.
- Support data migration projects and facilitate seamless data transfers between systems.

Technical Skills (by category):
- Programming languages: essential: T-SQL, SQL; preferred: PowerShell, Python, or similar scripting languages for automation.
- Databases and data management: essential: SQL Server (2016 or higher), relational data modeling, ETL processes; preferred: Azure SQL, Amazon RDS, or other cloud-based databases.
- Frameworks and libraries: essential: SSIS packages and components; preferred: data analysis libraries (e.g., Pandas, Power BI integrations).
- Development tools and methodologies: essential: SQL Server Management Studio (SSMS), Visual Studio, SQL Server Data Tools (SSDT), version control (Git); preferred: Azure Data Factory, DevOps pipelines, automated deployment tools.
- Design and architecture: data warehouse schema design (star schema, snowflake); data flow and process automation best practices.
- Security and compliance: basic understanding of database security, access control, and data masking standards.

Experience Requirements:
- Minimum of 5+ years working in database development, data warehousing, or ETL processing.
- Proven experience designing and optimizing large-scale ETL workflows in enterprise environments.
- Demonstrated proficiency in writing complex T-SQL queries, stored procedures, and functions.
- Experience with SSIS, including package development, deployment, and troubleshooting.
- Background in data migration and data governance is preferred.
- Industry experience in financial services, banking, or large enterprise environments is advantageous.
- Alternative pathways include extensive hands-on experience or relevant professional training in ETL and database management.

Day-to-Day Activities:
- Develop, test, and deploy robust ETL workflows using SSIS to meet business needs.
- Build and optimize T-SQL queries, stored procedures, and functions for performance and reliability.
- Analyze source system data structures, identify best-fit schemas, and design data warehouse models.
- Monitor and troubleshoot ETL processes, resolving performance bottlenecks and errors.
- Collaborate with stakeholders to gather requirements and translate them into technical designs.
- Review and optimize database performance and security configurations.
- Document data models, mappings, and processes for operational clarity.
- Engage in regular team meetings, code reviews, and process improvement initiatives.

Qualifications:
- Bachelor's degree or higher in Computer Science, Information Technology, or a related field.
- Relevant certifications such as Microsoft Certified: Data Engineer or SQL Server certifications (preferred).
- Proven track record of designing and maintaining enterprise data warehouses and ETL processes.

Professional Competencies:
- Critical thinking and analytical skills to troubleshoot complex data issues.
- Strong communication skills for effective stakeholder engagement and documentation.
- Ability to work independently and collaboratively in a fast-paced environment.
- Adaptability to evolving data technologies and requirements.
- Results-oriented, with excellent time and priority management.
- Commitment to continuous improvement and learning new data management techniques.
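Since the role centers on T-SQL, here is a hedged sketch of one routine task, an idempotent upsert via a T-SQL MERGE executed from Python with pyodbc; the connection string, table, and columns are hypothetical.

```python
# Minimal sketch of a T-SQL MERGE (upsert) run from Python via pyodbc;
# the connection string, table, and columns are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=example-server;DATABASE=dw;Trusted_Connection=yes;"
)

merge_sql = """
MERGE dbo.dim_customer AS tgt
USING (SELECT ? AS customer_id, ? AS email) AS src
ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN UPDATE SET tgt.email = src.email
WHEN NOT MATCHED THEN INSERT (customer_id, email)
    VALUES (src.customer_id, src.email);
"""

with conn.cursor() as cur:
    # `?` placeholders keep the statement parameterized (no string concat).
    cur.execute(merge_sql, (42, "jane@example.com"))
conn.commit()
conn.close()
```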

Posted 2 weeks ago

Apply

0.0 - 1.0 years

2 - 4 Lacs

Mumbai

Work from Office

Naukri logo

We are seeking an experienced Data Engineer to join our team for a 6-month contract assignment. The ideal candidate will work on data warehouse development, ETL pipelines, and analytics enablement using Snowflake, Azure Data Factory (ADF), dbt, and other tools. This role requires strong hands-on experience with data integration platforms, documentation, and pipeline optimization, especially in cloud environments such as Azure and AWS.

Key Responsibilities:
- Build and maintain ETL pipelines using Fivetran, dbt, and Azure Data Factory.
- Monitor and support production ETL jobs.
- Develop and maintain data lineage documentation for all systems.
- Design data mapping and documentation to aid QA/UAT testing.
- Evaluate and recommend modern data integration tools.
- Optimize shared data workflows and batch schedules.
- Collaborate with data quality analysts to ensure accuracy and integrity of data flows.
- Participate in performance tuning and improvement recommendations.
- Support BI/MDM initiatives, including Data Vault and data lakes.

Required Skills:
- 7+ years of experience in data engineering roles.
- Strong command of SQL, with 5+ years of hands-on development.
- Deep experience with Snowflake, Azure Data Factory, and dbt.
- Strong background with ETL tools (Informatica, Talend, ADF, dbt, etc.).
- Bachelor's degree in CS, Engineering, Math, or a related field.
- Experience in the healthcare domain (working with PHI/PII data).
- Familiarity with scripting/programming (Python, Perl, Java, Linux-based environments).
- Excellent communication and documentation skills.
- Experience with BI tools like Power BI, Cognos, etc.
- Organized self-starter with strong time-management and critical-thinking abilities.

Nice to Have:
- Experience with data lakes and Data Vaults.
- QA and UAT alignment with clear development documentation.
- Multi-cloud experience (especially Azure and AWS).
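As a hedged sketch of the Snowflake pipeline work described above, the snippet below performs a simple high-water-mark incremental load using the snowflake-connector-python package; the credentials, tables, and column are hypothetical.

```python
# Minimal incremental-load sketch against Snowflake using the
# snowflake-connector-python package; credentials/tables are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="etl_user",
    password="***",          # use a secrets manager in practice
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Find the high-water mark, then copy only newer rows.
    cur.execute("SELECT COALESCE(MAX(loaded_at), '1970-01-01') FROM orders_stg")
    high_water = cur.fetchone()[0]
    cur.execute(
        "INSERT INTO orders_stg SELECT * FROM orders_raw WHERE loaded_at > %s",
        (high_water,),
    )
finally:
    conn.close()
```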

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies