
602 Sqoop Jobs - Page 24

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5 - 10 years

7 - 12 Lacs

Bengaluru

Work from Office

Naukri logo

Responsibilities
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating Source-to-Target pipelines/workflows and implementing solutions that address the client's needs. Your primary responsibilities include:
Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization.
Coordinate data access and security so that data scientists and analysts can easily access data whenever they need it.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
Must have 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive.
Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
Developed Python and PySpark programs for data analysis.
Good working experience with Python to develop a custom framework for generating rules (similar to a rules engine).
Developed Python code to gather data from HBase and designed solutions implemented with PySpark.
Used Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations.
Preferred technical and professional experience
Understanding of DevOps.
Experience in building scalable end-to-end data ingestion and processing solutions.
Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala.
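The posting above mentions building a custom framework for generating rules, "just like a rules engine". As a rough sketch of what such a framework can look like, here is a minimal pure-Python version; all names (`Rule`, `apply_rules`, the sample fields) are hypothetical and not taken from any specific framework the posting refers to.

```python
# Minimal rules-engine sketch (pure Python; names and fields are illustrative only).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # predicate over one record
    action: Callable[[dict], dict]      # transformation applied on a match

def apply_rules(record: dict, rules: list) -> dict:
    """Apply every rule whose condition matches, in declaration order."""
    for rule in rules:
        if rule.condition(record):
            record = rule.action(record)
    return record

rules = [
    Rule("flag_high_value",
         condition=lambda r: r.get("amount", 0) > 1000,
         action=lambda r: {**r, "high_value": True}),
    Rule("normalize_city",
         condition=lambda r: "city" in r,
         action=lambda r: {**r, "city": r["city"].strip().title()}),
]

out = apply_rules({"amount": 2500, "city": "  bengaluru "}, rules)
print(out)  # {'amount': 2500, 'city': 'Bengaluru', 'high_value': True}
```

In a real Spark deployment the same rule list would typically be applied inside a PySpark transformation (e.g. via a UDF or DataFrame expressions) rather than row by row in Python.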

Posted 3 months ago

Apply


6 - 11 years

8 - 14 Lacs

Bengaluru

Work from Office


Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses.
Your Role
Develop and maintain scalable data pipelines using AWS services.
Optimize data storage and retrieval processes.
Ensure data security and compliance with industry standards.
Handle large volumes of data, ensuring accuracy, security, and accessibility.
Develop data set processes for data modelling, mining, and production.
Implement data quality and validation processes.
Work closely with data scientists, analysts, and IT departments to understand data requirements.
Collaborate with data architects, modelers, and IT team members on project goals.
Monitor and troubleshoot data pipeline issues.
Conduct performance tuning and optimization of data solutions.
Implement disaster recovery procedures.
Ensure seamless integration of HR data from various sources into the cloud environment.
Research opportunities for data acquisition and new uses for existing data.
Stay up to date with the latest cloud technologies and best practices.
Recommend ways to improve data reliability, efficiency, and quality.
Your Profile
10+ years of experience in cloud data engineering.
Proficiency in cloud platforms such as AWS, Azure, or Google Cloud.
Experience with data pipeline tools (e.g., Apache Spark, AWS Glue).
Strong skills in programming languages like Python, SQL, Java, or Scala.
Familiarity with Snowflake or Informatica is an advantage.
Knowledge of data privacy laws and security best practices.
Ability to analyse and interpret complex data sets.
Strong communication and teamwork skills.
Knowledge of database technologies, including SQL, Big Data, and Cloud platforms.
Demonstrated learner attitude and a keen interest in emerging technologies, particularly in GenAI.
Well-versed in working in an Agile framework.
Ability to adapt to changing priorities/roles and manage multiple projects simultaneously.
What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.
About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over-55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
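The role above calls for implementing "data quality and validation processes". As an illustration of the idea, here is a small pure-Python validation pass; the field names and the split into valid rows versus per-row errors are made up for the example, not drawn from any Capgemini tooling.

```python
# Illustrative data-quality check (pure Python; column names are invented for the example).

def validate_records(records, required_fields, not_null=()):
    """Split records into valid rows and (index, error-list) pairs."""
    valid, errors = [], []
    for i, rec in enumerate(records):
        row_errors = [f"missing field: {f}" for f in required_fields if f not in rec]
        row_errors += [f"null value: {f}" for f in not_null if rec.get(f) is None]
        if row_errors:
            errors.append((i, row_errors))
        else:
            valid.append(rec)
    return valid, errors

rows = [
    {"employee_id": 1, "department": "HR"},
    {"employee_id": None, "department": "Finance"},
    {"department": "IT"},
]
valid, errors = validate_records(
    rows, required_fields=("employee_id", "department"), not_null=("employee_id",))
print(len(valid), len(errors))  # 1 2
```

In a production pipeline the same checks would usually run inside the pipeline framework itself (e.g. AWS Glue data quality rules or Spark jobs), with the error list routed to a quarantine table rather than returned in memory.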

Posted 3 months ago

Apply

3 - 8 years

5 - 10 Lacs

Bengaluru

Work from Office


Project Role: AI / ML Engineer
Project Role Description: Develops applications and systems that utilize AI to improve performance and efficiency, including but not limited to deep learning, neural networks, chatbots, and natural language processing.
Must have skills: Google Cloud Machine Learning Services
Good to have skills: GCP Dataflow, Google Dataproc, Google Pub/Sub
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Key Responsibilities:
A: Implement and maintain data engineering solutions using BigQuery, Dataflow, Vertex AI, Dataproc, and Pub/Sub
B: Collaborate with data scientists to deploy machine learning models
C: Ensure the scalability and efficiency of data processing pipelines
Technical Experience:
A: Expertise in BigQuery, Dataflow, Vertex AI, Dataproc, and Pub/Sub
B: Hands-on experience with data engineering in a cloud environment
Professional Attributes:
A: Strong problem-solving skills in optimizing data workflows
B: Effective collaboration with data science and engineering teams
Qualifications: 15 years full time education
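The responsibilities above center on pipelines fed by Pub/Sub. The decoupling that publish/subscribe provides can be sketched with the standard-library `queue` module; note this is NOT the `google-cloud-pubsub` API, just an in-process stand-in showing producers and a consumer exchanging messages through a topic.

```python
# Pub/Sub-style flow simulated with the stdlib queue module (not the GCP client library).
import queue

topic = queue.Queue()  # stand-in for a Pub/Sub topic

def publish(msg: dict) -> None:
    topic.put(msg)

def pull_and_process(handler, max_messages=10):
    """Drain up to max_messages from the topic, applying handler to each."""
    processed = []
    while not topic.empty() and len(processed) < max_messages:
        msg = topic.get()
        processed.append(handler(msg))
        topic.task_done()
    return processed

publish({"event": "prediction_request", "features": [1.0, 2.0]})
publish({"event": "prediction_request", "features": [3.0, 4.0]})

results = pull_and_process(lambda m: sum(m["features"]))
print(results)  # [3.0, 7.0]
```

In the actual GCP stack, the subscriber side would typically be a Dataflow job or a Vertex AI endpoint invoked from a push subscription, with acknowledgement replacing `task_done()`.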

Posted 3 months ago

Apply

3 - 8 years

5 - 10 Lacs

Bengaluru

Work from Office


Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Python (Programming Language)
Good to have skills: Hadoop Administration
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. Your typical day will involve collaborating with the team to understand the project requirements, designing and developing software solutions, and ensuring the applications are aligned with the business needs. You will also be responsible for troubleshooting and resolving any application issues that arise, as well as providing technical support to end users.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Design and develop software applications based on business requirements.
- Collaborate with the team to understand project requirements and provide technical expertise.
- Troubleshoot and resolve any application issues that arise.
- Provide technical support to end users.
- Conduct code reviews and ensure adherence to coding standards.
- Stay updated with the latest industry trends and technologies.
Professional & Technical Skills:
- Must have: Proficiency in Python (Programming Language).
- Good to have: Experience with Hadoop Administration.
- Strong understanding of software development principles and best practices.
- Experience with designing and developing applications using Python.
- Knowledge of database management systems and SQL.
- Familiarity with version control systems such as Git.
- Experience with agile development methodologies.
- Excellent problem-solving and analytical skills.
Additional Information:
- The candidate should have a minimum of 3 years of experience in Python (Programming Language).
- This position is based at our Bengaluru office.
- A 15-year full-time education is required.
Qualifications: 15 years full time education

Posted 3 months ago

Apply

3 - 8 years

5 - 10 Lacs

Noida

Work from Office


Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: PySpark
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems. You will play a crucial role in managing and optimizing data infrastructure to support the organization's data needs.
About The Role
Responsibilities:
1. Develop, optimize, and maintain large-scale data processing pipelines using Apache Spark and Python or PySpark.
2. Write and maintain efficient, reusable, and reliable code to ensure the best possible performance and quality.
3. Collaborate with data engineers, data scientists, and other stakeholders to design and implement robust data solutions.
4. Perform data extraction, transformation, and loading (ETL) operations on large datasets.
5. Troubleshoot and resolve issues related to data processing and pipeline performance.
6. Implement best practices for data processing, including data validation, error handling, and logging.
7. Work with cloud-based data storage and processing platforms such as AWS, Azure, or Google Cloud.
8. Stay up to date with the latest industry trends and technologies to ensure the team is using the best tools and techniques available.
Qualifications:
1. Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
2. Proven experience as a PySpark developer or in a similar role.
3. Strong proficiency in Python/PySpark programming and experience with Apache Spark.
4. Solid understanding of data processing concepts and ETL pipelines.
5. Experience with any cloud platform such as AWS, Azure, or Google Cloud.
6. Proficiency in SQL and experience with relational and NoSQL databases.
7. Experience with version control systems like Git.
8. Familiarity with big data tools and frameworks (e.g., Hadoop, Kafka).
9. Strong problem-solving skills and attention to detail.
10. Excellent communication and teamwork skills.
11. Familiarity with Agile development methodologies.
Qualifications: 15 years full time education
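The ETL responsibilities listed above can be sketched as three small stages. In the role described this would normally be a PySpark job over a cluster; the plain-Python version below only illustrates the extract/transform/load structure, and the column names are invented for the example.

```python
# Toy extract-transform-load flow in plain Python (a stand-in for a PySpark job).
import csv, io

def extract(csv_text):
    """Parse raw CSV text into a list of dict records."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows):
    """Cast types, skip malformed rows, and derive a flag column."""
    out = []
    for r in rows:
        try:
            amount = float(r["amount"])
        except (KeyError, ValueError):
            continue  # drop bad records instead of failing the whole batch
        out.append({"order_id": r["order_id"], "amount": amount,
                    "is_large": amount >= 100.0})
    return out

def load(rows, target):
    """Stand-in for a database/warehouse write; returns rows written."""
    target.extend(rows)
    return len(rows)

raw = "order_id,amount\nA1,250.0\nA2,notanumber\nA3,40\n"
warehouse = []
loaded = load(transform(extract(raw)), warehouse)
print(loaded)  # 2
```

The same shape maps directly onto Spark: `extract` becomes `spark.read.csv`, `transform` a chain of `withColumn`/`filter` calls, and `load` a `DataFrame.write` to the target system.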

Posted 3 months ago

Apply

6 - 10 years

8 - 12 Lacs

Hyderabad

Work from Office


As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.
Responsibilities:
Manage end-to-end feature development and resolve challenges faced in implementing it.
Learn new technologies and apply them to feature development within the time frame provided.
Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
More than 6 years of overall experience, with 4+ years of strong hands-on experience in Python and Spark.
Strong technical ability to understand, design, write, and debug applications in Python and PySpark.
Strong problem-solving skills.
Preferred technical and professional experience
Good to have: hands-on experience with cloud technology (AWS/GCP/Azure).

Posted 3 months ago

Apply

6 - 10 years

10 - 12 Lacs

Mysore

Work from Office


As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.
Responsibilities:
Manage end-to-end feature development and resolve challenges faced in implementing it.
Learn new technologies and apply them to feature development within the time frame provided.
Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
More than 6 years of overall experience, with 4+ years of strong hands-on experience in Python and Spark.
Strong technical ability to understand, design, write, and debug applications in Python and PySpark.
Strong problem-solving skills.
Preferred technical and professional experience
Good to have: hands-on experience with cloud technology (AWS/GCP/Azure).

Posted 3 months ago

Apply

2 - 5 years

14 - 17 Lacs

Hyderabad

Work from Office


As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.
Responsibilities:
Manage end-to-end feature development and resolve challenges faced in implementing it.
Learn new technologies and apply them to feature development within the time frame provided.
Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
More than 6 years of overall experience, with 4+ years of strong hands-on experience in Python and Spark.
Strong technical ability to understand, design, write, and debug applications in Python and PySpark.
Strong problem-solving skills.
Preferred technical and professional experience
Good to have: hands-on experience with cloud technology (AWS/GCP/Azure).

Posted 3 months ago

Apply

2 - 5 years

4 - 8 Lacs

Pune

Work from Office


As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.
In this role, your responsibilities may include:
Implementing and validating predictive models, as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors.
Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
Good hands-on experience in DBT is required; ETL DataStage and Snowflake preferred.
Ability to use programming languages like Java, Python, Scala, etc. to build pipelines to extract and transform data from a repository to a data consumer.
Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed.
Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop, and Java.
Preferred technical and professional experience
You thrive on teamwork and have excellent verbal and written communication skills.
Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
Ability to communicate results to technical and non-technical audiences.

Posted 3 months ago

Apply

2 - 5 years

5 - 8 Lacs

Pune

Work from Office


As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating Source-to-Target pipelines/workflows and implementing solutions that address the client's needs. Your primary responsibilities include:
Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization.
Coordinate data access and security so that data scientists and analysts can easily access data whenever they need it.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
Experience with Scala (object-oriented/functional programming).
Strong SQL background, especially with Spark SQL and Hive SQL.
Experience with data pipelines and data lakes.
Strong background in distributed computing.
Preferred technical and professional experience
Optimize Ab Initio graphs for performance, ensuring efficient data processing and minimal resource utilization.
Conduct performance tuning and troubleshooting as needed.
Collaboration: work closely with cross-functional teams, including data analysts, database administrators, and quality assurance, to ensure seamless integration of ETL processes; participate in design reviews and provide technical expertise to enhance overall solution quality.
Documentation

Posted 3 months ago

Apply

2 - 5 years

4 - 7 Lacs

Hyderabad

Work from Office


As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.
In this role, your responsibilities may include:
Implementing and validating predictive models, as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
Expertise in designing and implementing scalable data warehouse solutions on Snowflake, including schema design, performance tuning, and query optimization.
Strong experience in building data ingestion and transformation pipelines using Talend to process structured and unstructured data from various sources.
Proficiency in integrating data from cloud platforms into Snowflake using Talend and native Snowflake capabilities.
Hands-on experience with dimensional and relational data modelling techniques to support analytics and reporting requirements.
Preferred technical and professional experience
Understanding of optimizing Snowflake workloads, including clustering keys, caching strategies, and query profiling.
Ability to implement robust data validation, cleansing, and governance frameworks within ETL processes.
Proficiency in SQL and/or shell scripting for custom transformations and automation tasks.

Posted 3 months ago

Apply

6 - 11 years

8 - 12 Lacs

Hyderabad

Work from Office


As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.
Responsibilities:
Manage end-to-end feature development and resolve challenges faced in implementing it.
Learn new technologies and apply them to feature development within the time frame provided.
Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
More than 6 years of overall experience, with 4+ years of strong hands-on experience in Python and Spark.
Strong technical ability to understand, design, write, and debug applications in Python and PySpark.
Strong problem-solving skills.
Preferred technical and professional experience
Good to have: hands-on experience with cloud technology (AWS/GCP/Azure).

Posted 3 months ago

Apply

5 - 10 years

7 - 12 Lacs

Kochi

Work from Office


As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating Source-to-Target pipelines/workflows and implementing solutions that address the client's needs. Your primary responsibilities include:
Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization.
Coordinate data access and security so that data scientists and analysts can easily access data whenever they need it.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
Must have 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive.
Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
Developed Python and PySpark programs for data analysis.
Good working experience with Python to develop a custom framework for generating rules (similar to a rules engine).
Developed Python code to gather data from HBase and designed solutions implemented with PySpark.
Used Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations.
Preferred technical and professional experience
Understanding of DevOps.
Experience in building scalable end-to-end data ingestion and processing solutions.
Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala.

Posted 3 months ago

Apply

5 - 8 years

7 - 10 Lacs

Chennai, Pune

Work from Office


5+ years of hands-on experience in designing, building, and supporting data applications using Spark, Sqoop, and Hive.
Bachelor's or master's degree in Computer Science or a related field.
Strong knowledge of working with large data sets and a high-capacity big data processing platform.
Strong experience in Unix and shell scripting.
Advanced knowledge of the Hadoop ecosystem and its components.
In-depth knowledge of Hive, shell scripting, Python, and Spark.
Ability to write MapReduce jobs.
Experience using job schedulers like Autosys.
Hands-on experience in HiveQL.
Good knowledge of Hadoop architecture and HDFS.
Experience with Jenkins for continuous integration.
Experience using source code and version control systems like Bitbucket and Git.
Good to have: experience with Agile development.
Responsibilities:
Develop components, application interfaces, and solution enablers while ensuring principal architecture integrity is maintained.
Ensure solutions are well designed, with maintainability/ease of integration and testing built in from the outset.
Participate in and guide the team in estimating the work necessary to realize a story/requirement through the software delivery lifecycle.
Develop and deliver complex software requirements to accomplish business goals.
Ensure that software is developed to meet functional, non-functional, and compliance requirements.
Code solutions, unit test, and ensure the solution can be integrated successfully into the overall application/system with clear, robust, and well-tested interfaces.
Required Skills: Hadoop, Hive, HDFS, Spark, Python, Unix
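The requirements above include the ability to write MapReduce jobs. The model is easiest to see on the classic word-count example; the sketch below expresses the map, shuffle, and reduce phases in plain Python (no Hadoop involved) purely to illustrate the programming model the posting refers to.

```python
# Word count expressed as map -> shuffle -> reduce in plain Python,
# mirroring the Hadoop MapReduce phases without a cluster.
from collections import defaultdict

def map_phase(lines):
    """Emit (word, 1) pairs, like a Hadoop mapper."""
    for line in lines:
        for word in line.lower().split():
            yield (word, 1)

def shuffle(pairs):
    """Group values by key, like the framework's shuffle step."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Sum each key's values, like a Hadoop reducer."""
    return {key: sum(values) for key, values in groups.items()}

lines = ["big data big pipelines", "data pipelines"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts)  # {'big': 2, 'data': 2, 'pipelines': 2}
```

On an actual Hadoop cluster the mapper and reducer run as separate tasks over HDFS splits and the shuffle is handled by the framework; the per-phase logic, however, has exactly this shape.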

Posted 3 months ago

Apply

3 - 5 years

5 - 10 Lacs

Pune

Work from Office


Responsibilities
Experience with Scala (object-oriented/functional programming).
Strong SQL background; experience in Spark SQL, Hive, and data engineering.
Experience with data pipelines and data lakes.
Strong background in distributed computing.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise
SQL experience with data pipelines and data lakes; strong background in distributed computing; experience with Scala (object-oriented/functional programming); strong SQL background.
Preferred technical and professional experience
Core Scala development experience.

Posted 3 months ago

Apply

3 - 7 years

5 - 9 Lacs

Mumbai

Work from Office


A Data Platform Engineer specialises in the design, build, and maintenance of cloud-based data infrastructure and platforms for data-intensive applications and services. They develop Infrastructure as Code and manage the foundational systems and tools for efficient data storage, processing, and management. This role involves architecting robust and scalable cloud data infrastructure, including selecting and implementing suitable storage solutions, data processing frameworks, and data orchestration tools. Additionally, a Data Platform Engineer ensures the continuous evolution of the data platform to meet changing data needs and leverage technological advancements, while maintaining high levels of data security, availability, and performance. They are also tasked with creating and managing processes and tools that enhance operational efficiency, including optimising data flow and ensuring seamless data integration, all of which are essential for enabling developers to build, deploy, and operate data-centric applications efficiently.
Job Description - Grade Specific
A senior leadership role that entails the oversight of multiple teams or a substantial team of data platform engineers, the management of intricate data infrastructure projects, and the making of strategic decisions that shape technological direction within the realm of data platform engineering. Key responsibilities encompass:
Strategic Leadership: Leading multiple data platform engineering teams, steering substantial projects, and setting the strategic course for data platform development and operations.
Complex Project Management: Supervising the execution of intricate data infrastructure projects, ensuring alignment with client objectives and the delivery of value.
Technical and Strategic Decision-Making: Making well-informed decisions concerning data platform architecture, tools, and processes, balancing technical considerations with broader business goals.
Influencing Technical Direction: Utilising profound technical expertise in data platform engineering to influence the direction of the team and the client, driving enhancements in data platform technologies and processes.
Innovation and Contribution to the Discipline: Serving as innovators and influencers within the field of data platform engineering, contributing to the advancement of the discipline through thought leadership and the sharing of knowledge.
Leadership and Mentorship: Offering mentorship and guidance to both managers and technical personnel, cultivating a culture of excellence and innovation within the domain of data platform engineering.

Posted 3 months ago

Apply

3 - 5 years

4 - 7 Lacs

Bengaluru

Work from Office


Responsibilities As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from source to target and implementing solutions that address the client's needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise: Must have 3-5 years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis. Good working experience using Python to develop a custom framework for generating rules (similar to a rules engine). Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark. Experience using Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
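The "custom framework for generating rules (similar to a rules engine)" mentioned above can be illustrated with a minimal pure-Python sketch. The `Rule` and `RuleEngine` names and the record shapes are illustrative assumptions, not any particular framework's API:

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Rule:
    """A named predicate paired with a transformation applied when it matches."""
    name: str
    predicate: Callable[[dict], bool]
    action: Callable[[dict], dict]


class RuleEngine:
    """Applies every matching rule to a record, in registration order."""

    def __init__(self) -> None:
        self.rules: list[Rule] = []

    def add_rule(self, rule: Rule) -> None:
        self.rules.append(rule)

    def apply(self, record: dict) -> dict:
        for rule in self.rules:
            if rule.predicate(record):
                record = rule.action(record)
        return record


engine = RuleEngine()
engine.add_rule(Rule(
    name="flag_high_value",
    predicate=lambda r: r.get("amount", 0) > 1000,
    action=lambda r: {**r, "flagged": True},
))

print(engine.apply({"amount": 1500}))  # → {'amount': 1500, 'flagged': True}
```

In a production framework of this kind, rules would typically be loaded from configuration rather than registered in code, which is what makes the approach feel like a rules engine.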

Posted 3 months ago

Apply

2 - 6 years

9 - 13 Lacs

Pune, Mumbai, Gurgaon

Work from Office


Manage ETL pipelines, data engineering operations, and cloud infrastructure. Experience in configuring data exchange and transfer methods. Experience in orchestrating ETL pipelines with multiple tasks, triggers, and dependencies. Strong proficiency with Python and Apache Spark; intermediate or better proficiency with SQL; experience with AWS S3 and EC2, and Databricks. Ability to communicate effectively and translate ideas with technical stakeholders in IT and Data Science. Passionate about designing data infrastructure and eager to contribute ideas to help build robust data platforms.
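Orchestrating "ETL pipelines with multiple tasks, triggers, and dependencies" reduces to running tasks in dependency order. A minimal, framework-free Python sketch using the standard library's `graphlib` (the task names and the `run_pipeline` helper are illustrative, not from any specific orchestrator):

```python
from graphlib import TopologicalSorter


def run_pipeline(tasks: dict, dependencies: dict) -> list:
    """Execute zero-argument callables in an order that respects dependencies.

    tasks:        maps task name -> callable
    dependencies: maps task name -> set of upstream task names
    Returns the order in which tasks actually ran.
    """
    order = list(TopologicalSorter(dependencies).static_order())
    for name in order:
        tasks[name]()  # a real orchestrator adds retries, triggers, and logging here
    return order


results = []
tasks = {
    "extract":   lambda: results.append("extracted"),
    "transform": lambda: results.append("transformed"),
    "load":      lambda: results.append("loaded"),
}
# transform runs only after extract; load only after transform
dependencies = {"transform": {"extract"}, "load": {"transform"}}

order = run_pipeline(tasks, dependencies)
print(order)  # → ['extract', 'transform', 'load']
```

Tools such as Airflow or Databricks Workflows provide the same dependency model plus scheduling triggers, but the topological-ordering core is the same idea.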

Posted 3 months ago

Apply

3 - 7 years

10 - 14 Lacs

Pune

Work from Office


Project Role : Application Lead Project Role Description : Lead the effort to design, build, and configure applications, acting as the primary point of contact. Must have skills : Microsoft Azure Databricks Good to have skills : Microsoft Azure Modern Data Platform, Apache Spark Minimum 3 year(s) of experience is required Educational Qualification : 15 years full time education Summary: As an Application Lead for Custom Software Engineering, you will be responsible for designing, building, and configuring applications using Microsoft Azure Databricks. Your typical day will involve leading the effort to deliver high-quality solutions, collaborating with cross-functional teams, and ensuring timely delivery of projects. Roles & Responsibilities: Lead the effort to design, build, and configure applications using Microsoft Azure Databricks. Act as the primary point of contact for the project, collaborating with cross-functional teams to ensure timely delivery of high-quality solutions. Utilize your expertise in the Scala programming language, Apache Spark, and the Microsoft Azure Modern Data Platform to develop and implement efficient and scalable solutions. Ensure adherence to best practices and standards for software development, including code reviews, testing, and documentation. Build performance-oriented Scala code optimized for Databricks/Spark execution. Provide peer support to other members of the team on Azure Databricks, Spark, and Scala best practices. Improve the performance of the calculation engine. Develop proofs of concept using new technologies. Develop new applications to meet regulatory commitments (e.g. FRTB). Professional & Technical Skills: Proficiency in the Scala programming language. Experience with Apache Spark and the Microsoft Azure Modern Data Platform. Strong understanding of software development best practices and standards. Experience with designing, building, and configuring applications using Microsoft Azure Databricks. Experience with data processing and analysis using big data technologies. Excellent problem-solving and analytical skills. Qualification 15 years full time education

Posted 3 months ago

Apply

3 - 8 years

10 - 14 Lacs

Pune

Work from Office


Project Role : Application Lead Project Role Description : Lead the effort to design, build, and configure applications, acting as the primary point of contact. Must have skills : PySpark Good to have skills : NA Minimum 3 year(s) of experience is required Educational Qualification : Engineering graduate, preferably Computer Science; 15 years of full time education Summary: As an Application Lead, you will be responsible for leading the effort to design, build, and configure applications using PySpark. Your typical day will involve collaborating with cross-functional teams, developing and deploying PySpark applications, and acting as the primary point of contact for the project. Roles & Responsibilities: Lead the effort to design, build, and configure PySpark applications, collaborating with cross-functional teams to ensure project success. Develop and deploy PySpark applications, ensuring adherence to best practices and standards. Act as the primary point of contact for the project, communicating effectively with stakeholders and providing regular updates on project progress. Provide technical guidance and mentorship to junior team members, ensuring their continued growth and development. Stay updated with the latest advancements in PySpark and related technologies, integrating innovative approaches for sustained competitive advantage. Professional & Technical Skills: Must Have Skills: Strong experience in PySpark. Good To Have Skills: Experience with Hadoop, Hive, and other Big Data technologies. Solid understanding of software development principles and best practices. Experience with Agile development methodologies. Strong problem-solving and analytical skills. Additional Information: The candidate should have a minimum of 5 years of experience in PySpark. The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions. This position is based at our Bangalore, Hyderabad, Chennai, and Pune offices. Mandatory office attendance (RTO) for 2-3 days, working in 2 shifts (Shift A: 10:00 am to 8:00 pm IST; Shift B: 12:30 pm to 10:30 pm IST). Qualification Engineering graduate, preferably Computer Science; 15 years of full time education

Posted 3 months ago

Apply

3 - 7 years

10 - 14 Lacs

Bengaluru

Work from Office


Project Role : Application Lead Project Role Description : Lead the effort to design, build, and configure applications, acting as the primary point of contact. Must have skills : Google Kubernetes Engine Good to have skills : Kubernetes, Google BigQuery, Google Dataproc Minimum 3 year(s) of experience is required Educational Qualification : 15 years full time education Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for overseeing the entire application development process and ensuring its successful implementation. This role requires strong leadership skills and the ability to collaborate effectively with cross-functional teams. Roles & Responsibilities: Lead the effort to design, build, and configure applications. Act as the primary point of contact for all application-related matters. Collaborate with cross-functional teams to ensure successful implementation of applications. Expected to perform independently and become an SME. Active participation and contribution in team discussions is required. Contribute to providing solutions to work-related problems. Manage and prioritize tasks to meet project deadlines. Provide technical guidance and mentorship to junior team members. Professional & Technical Skills: - Must Have Skills: Proficiency in Google Kubernetes Engine, Kubernetes, Google BigQuery, and Google Dataproc. - Strong understanding of containerization and orchestration using Google Kubernetes Engine. - Experience with Google Cloud Platform services such as Google BigQuery and Google Dataproc. - Hands-on experience in designing and implementing scalable and reliable applications using Google Kubernetes Engine. - Solid understanding of microservices architecture and its implementation using Kubernetes. - Familiarity with CI/CD pipelines and tools such as Jenkins or GitLab.
Additional Information: - The candidate should have a minimum of 3 years of experience in Google Kubernetes Engine. - This position is based at our Bengaluru office. - A 15 years full-time education is required. Qualifications 15 years full time education

Posted 3 months ago

Apply

5 - 9 years

10 - 14 Lacs

Bengaluru

Work from Office


Project Role : Application Lead Project Role Description : Lead the effort to design, build, and configure applications, acting as the primary point of contact. Must have skills : Databricks Unified Data Analytics Platform Good to have skills : NA Minimum 5 year(s) of experience is required Educational Qualification : BE Summary: As a Databricks Unified Data Analytics Platform Application Lead, you will be responsible for leading the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve working with the Databricks Unified Data Analytics Platform, collaborating with cross-functional teams, and ensuring the successful delivery of applications. Roles & Responsibilities: Lead the design, development, and deployment of applications using the Databricks Unified Data Analytics Platform. Act as the primary point of contact for all application-related activities, collaborating with cross-functional teams to ensure successful delivery. Ensure the quality and integrity of applications through rigorous testing and debugging. Provide technical guidance and mentorship to junior team members, fostering a culture of continuous learning and improvement. Professional & Technical Skills: Must Have Skills: Expertise in the Databricks Unified Data Analytics Platform. Good To Have Skills: Experience with other big data technologies such as Hadoop, Spark, and Kafka. Strong understanding of software engineering principles and best practices. Experience with agile development methodologies and tools such as JIRA and Confluence. Proficiency in programming languages such as Python, Java, or Scala. Additional Information: The candidate should have a minimum of 5 years of experience in the Databricks Unified Data Analytics Platform. The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
This position is based at our Chennai office. Qualifications BE

Posted 3 months ago

Apply

2 - 7 years

4 - 9 Lacs

Bengaluru

Work from Office


Project Role : Application Lead Project Role Description : Lead the effort to design, build, and configure applications, acting as the primary point of contact. Must have skills : AWS Glue Good to have skills : NA Minimum 2 year(s) of experience is required Educational Qualification : 15 years full time education Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for overseeing the entire application development process and ensuring its successful implementation. This role requires strong leadership skills and the ability to communicate effectively with stakeholders and team members. Roles & Responsibilities: - Expected to perform independently and become an SME. - Active participation and contribution in team discussions is required. - Contribute to providing solutions to work-related problems. - Lead the design, development, and implementation of applications. - Act as the primary point of contact for all application-related matters. - Collaborate with stakeholders to gather requirements and understand business needs. - Provide technical guidance and mentorship to the development team. - Ensure the successful delivery of high-quality applications. - Identify and mitigate risks and issues throughout the development process. Professional & Technical Skills: - Must Have Skills: Proficiency in AWS Glue. - Strong understanding of cloud computing concepts and architecture. - Experience with AWS services such as S3, Lambda, and Glue. - Hands-on experience with ETL (Extract, Transform, Load) processes. - Familiarity with data warehousing and data modeling concepts. - Good To Have Skills: Experience with AWS Redshift. - Knowledge of SQL and database management systems. - Experience with data integration and data migration projects.
Additional Information: - The candidate should have a minimum of 2 years of experience in AWS Glue. - This position is based at our Bengaluru office. - A 15 years full-time education is required. Qualifications 15 years full time education
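The ETL (extract, transform, load) process this role centres on can be sketched in plain Python, independent of Glue itself. The record shapes and function names below are illustrative assumptions, not Glue APIs; in Glue, the extract and load steps would typically use DynamicFrames reading from and writing to stores such as S3:

```python
def extract(rows):
    """Extract: pull raw records from a source (here, an in-memory list)."""
    return [dict(r) for r in rows]


def transform(rows):
    """Transform: normalise fields and drop records that fail basic validation."""
    cleaned = []
    for r in rows:
        if r.get("id") is None:
            continue  # reject rows without a key
        cleaned.append({**r, "name": r.get("name", "").strip().title()})
    return cleaned


def load(rows, target):
    """Load: write records to the target store (here, a list standing in for one)."""
    target.extend(rows)
    return len(rows)


source = [{"id": 1, "name": "  ada lovelace "}, {"id": None, "name": "x"}]
warehouse = []
loaded = load(transform(extract(source)), warehouse)
print(loaded, warehouse)  # → 1 [{'id': 1, 'name': 'Ada Lovelace'}]
```

Keeping the three stages as separate functions mirrors how a Glue job script is usually structured, which makes each stage independently testable.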

Posted 3 months ago

Apply

5 - 10 years

7 - 12 Lacs

Bengaluru

Work from Office


Project Role : Data Engineer Project Role Description : Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems. Must have skills : Python (Programming Language) Good to have skills : NA Minimum 5 year(s) of experience is required Educational Qualification : BE Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems. You will play a crucial role in managing and optimizing data infrastructure to support the organization's data needs. Roles & Responsibilities: Expected to be an SME; collaborate with and manage the team to perform. Responsible for team decisions. Engage with multiple teams and contribute to key decisions. Provide solutions to problems for their immediate team and across multiple teams. Design and develop scalable and efficient data pipelines. Implement ETL processes to extract, transform, and load data across systems. Ensure data quality and integrity throughout the data lifecycle. Optimize and manage data infrastructure to support data needs. Collaborate with cross-functional teams to understand data requirements and provide solutions. Stay updated with the latest trends and technologies in data engineering. Conduct performance tuning and optimization of data processes. Troubleshoot and resolve data-related issues in a timely manner. Professional & Technical Skills: Must Have Skills: Proficiency in Python (Programming Language), AWS, Glue, and PySpark. Experience with data modeling and database design. Strong understanding of data warehousing concepts and techniques. Hands-on experience with ETL tools and frameworks. Knowledge of cloud platforms and services such as AWS or Azure. Experience with big data technologies like Hadoop, Spark, or Kafka. Additional Information: The candidate should have a minimum of 5 years of experience in Python (Programming Language). This position is based at our Chennai office. A BE degree is required. Qualifications BE
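"Ensure data quality and integrity throughout the data lifecycle" typically means running automated checks on each batch before loading it. A minimal pure-Python sketch of such checks (the check names, fields, and thresholds are illustrative assumptions, not from any specific data-quality library):

```python
def check_batch(rows, required_fields, key_field):
    """Run simple data-quality checks on a batch of records.

    Returns a dict mapping check name -> True (pass) / False (fail).
    """
    keys = [r.get(key_field) for r in rows]
    return {
        "non_empty": len(rows) > 0,
        "no_missing_required": all(
            all(r.get(f) is not None for f in required_fields) for r in rows
        ),
        "unique_keys": len(keys) == len(set(keys)),
    }


batch = [
    {"order_id": 1, "amount": 10.0},
    {"order_id": 2, "amount": 5.5},
    {"order_id": 2, "amount": 7.0},  # duplicate key: should fail the uniqueness check
]
report = check_batch(batch, required_fields=["order_id", "amount"], key_field="order_id")
print(report)  # → {'non_empty': True, 'no_missing_required': True, 'unique_keys': False}
```

In a pipeline, a failed check would typically quarantine the batch or fail the run rather than silently load bad data; libraries such as Great Expectations productionise this same pattern.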

Posted 3 months ago

Apply