
602 Sqoop Jobs - Page 22

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3 - 6 years

14 - 18 Lacs

Mysore

Work from Office


As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in choosing the most suitable data management systems and identifying the data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques
- Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements
- Working in an Agile, collaborative environment, partnering with scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigour and statistical methods to the challenges of predicting behaviours
- Building or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Total experience 6-7 years (relevant 4-5 years)
- Mandatory skills: Azure Databricks, Python/PySpark, SQL, GitHub, Azure DevOps, Azure Blob
- Ability to use programming languages such as Java, Python, and Scala to build pipelines that extract and transform data from a repository to a data consumer
- Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed
- Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop, and Java

Preferred technical and professional experience:
- You thrive on teamwork and have excellent verbal and written communication skills
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions
- Ability to communicate results to technical and non-technical audiences
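To make the cleanse-and-integrate idea above concrete, here is a minimal, illustrative sketch in plain Python (all record and field names are invented for this example; a production pipeline would typically use PySpark or one of the ETL tools listed):

```python
def cleanse(records):
    """Trim whitespace, drop rows missing the primary key, and deduplicate by id."""
    seen = set()
    out = []
    for rec in records:
        # Normalize string fields by stripping surrounding whitespace.
        rec = {k: v.strip() if isinstance(v, str) else v for k, v in rec.items()}
        key = rec.get("id")
        if key is None or key in seen:
            continue  # skip incomplete or duplicate rows
        seen.add(key)
        out.append(rec)
    return out

raw = [
    {"id": 1, "name": "  Asha "},
    {"id": 1, "name": "Asha"},       # duplicate id, dropped
    {"id": None, "name": "orphan"},  # missing key, dropped
    {"id": 2, "name": "Ravi"},
]
clean = cleanse(raw)
print(clean)  # two rows survive: ids 1 and 2
```

The same dedupe-and-normalize step translates directly to a PySpark `dropDuplicates` plus `trim` over DataFrame columns at scale.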

Posted 3 months ago

Apply

7 - 12 years

9 - 14 Lacs

Gurgaon

Work from Office


Skilled in multiple GCP services - GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer, etc. Must have hands-on Python and SQL experience; proactive, collaborative, and able to respond to critical situations. Ability to analyse data against functional business requirements and interface directly with the customer.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- 5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform
- Skilled in multiple GCP services - GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer
- You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies
- An ambitious individual who can work under their own direction towards agreed targets/goals, with a creative approach to work

Preferred technical and professional experience:
- An intuitive individual with an ability to manage change and proven time management
- Proven interpersonal skills while contributing to team effort by accomplishing related results as needed
- Keeps technical knowledge up to date by attending educational workshops and reviewing publications

Posted 3 months ago

Apply

3 - 7 years

11 - 15 Lacs

Mumbai

Work from Office


A Data Platform Engineer specialises in the design, build, and maintenance of cloud-based data infrastructure and platforms for data-intensive applications and services. They develop Infrastructure as Code and manage the foundational systems and tools for efficient data storage, processing, and management. This role involves architecting robust and scalable cloud data infrastructure, including selecting and implementing suitable storage solutions, data processing frameworks, and data orchestration tools. Additionally, a Data Platform Engineer ensures the continuous evolution of the data platform to meet changing data needs and leverage technological advancements, while maintaining high levels of data security, availability, and performance. They are also tasked with creating and managing processes and tools that enhance operational efficiency, including optimising data flow and ensuring seamless data integration, all of which are essential for enabling developers to build, deploy, and operate data-centric applications efficiently.

Job Description - Grade Specific
A senior leadership role that entails the oversight of multiple teams or a substantial team of data platform engineers, the management of intricate data infrastructure projects, and strategic decisions that shape the technological direction of data platform engineering. Key responsibilities encompass:
- Strategic Leadership: Leading multiple data platform engineering teams, steering substantial projects, and setting the strategic course for data platform development and operations.
- Complex Project Management: Supervising the execution of intricate data infrastructure projects, ensuring alignment with client objectives and the delivery of value.
- Technical and Strategic Decision-Making: Making well-informed decisions concerning data platform architecture, tools, and processes, balancing technical considerations with broader business goals.
- Influencing Technical Direction: Using deep technical expertise in data platform engineering to influence the direction of the team and the client, driving enhancements in data platform technologies and processes.
- Innovation and Contribution to the Discipline: Serving as innovators and influencers within the field of data platform engineering, contributing to the advancement of the discipline through thought leadership and knowledge sharing.
- Leadership and Mentorship: Offering mentorship and guidance to both managers and technical personnel, cultivating a culture of excellence and innovation within the domain of data platform engineering.

Posted 3 months ago

Apply

10 - 15 years

12 - 17 Lacs

Hyderabad

Work from Office


- Minimum 10 years' experience in design, architecture, or development in Analytics and Data Warehousing
- Experience in solution design, solution governance, and implementing end-to-end Big Data solutions using Hadoop ecosystems (Hive, HDFS, Pig, HBase, Flume, Kafka, Sqoop, YARN, Impala)
- Ability to produce semantic, conceptual, logical, and physical data models using data modelling techniques such as Data Vault, Dimensional Modelling, and 3NF
- Ability to design data warehousing and enterprise analytics solutions using Teradata or comparable data platforms
- Demonstrable expertise in design patterns (FSLDM, IBM IFW DW) and data modelling frameworks including dimensional, star, and non-dimensional schemas
- Commendable experience in consistently driving cost-effective and technologically feasible solutions, while steering solution decisions across the group to meet both operational and strategic goals
- Ability to positively influence the adoption of new products, solutions, and processes in alignment with the existing information architecture (desirable)
- Appreciation of Analytics & Data/BI architecture and broad experience across all technology disciplines, including project management, IT strategy development, and business process, information, and application architecture
- Extensive experience with Teradata data warehouses and Big Data platforms, both on-premises and in the cloud
- Extensive experience in large enterprise environments handling large volumes of data with high Service Level Agreements across various business functions/units
- Experience leading discussions and presentations, and driving decisions across groups of stakeholders

Posted 3 months ago

Apply

7 - 12 years

5 - 9 Lacs

Bengaluru

Work from Office


Business Case: Caspian is the Big Data cluster for NFRT, managed and hosted by the Central Data team. It is a critical Tier 1 platform on which multiple business functions and processes across NFRT operate. Given the technology strategy and principles, data-driven design and products are a key pillar, and this position is critical to strengthening the current system and continuing to build and develop it in line with NFRT's future objectives and strategy.

As a Big Data Platform Engineer, you will be responsible for the technical delivery of our Data Platform's core functionality and strategic solutions. This includes the development of reusable tooling/APIs, applications, data stores, and the software stack to accelerate our relational data warehousing, big data analytics, and data management needs. You will also design and develop strategic solutions that utilise big data, cloud, and other modern technologies to meet our constantly changing business requirements. The role includes day-to-day management of several small development teams focused on our Big Data platform and data management applications, plus collaboration and coordination with multiple stakeholders, such as the Hadoop Data Engineering team, application teams, and the Unix Ops team, to ensure the stability of our Big Data platform.

Skills Required:
- Strong technical experience in Scala, Java, Python, and Spark for designing, creating, and maintaining big data applications
- Experience maintaining Cloudera Hadoop infrastructure such as HDFS, YARN, Spark, Impala, and edge nodes
- Experience developing cloud-based Big Data solutions on AWS or Azure
- Strong SQL skills with commensurate experience on a large database platform
- Experience with the complete SDLC process and Agile methodology
- Strong oral and written communication
- Experience with cloud data platforms like Snowflake or Databricks is an added advantage

Posted 3 months ago

Apply

5 - 10 years

5 - 9 Lacs

Pune

Work from Office


GCP Data Engineer JD:
- Proficiency in Google Cloud Platform (GCP) and its associated tools, particularly BigQuery.
- Query and scan data using SQL and PowerShell, retrieving ownership information from sources like Active Directory and BigQuery.
- Leverage Google Cloud Platform tools to manage and process large datasets.
- Ensure data accuracy and consistency through validation and troubleshooting.

Required Skills:
- Proficiency in Google Cloud Platform (GCP), SQL, and PowerShell.
- Experience building reports and dashboards in Power BI.
- Familiarity with data sources like Active Directory.
- Strong problem-solving and communication skills.
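As a sketch of the ownership-scan idea above, the shape of such a query might look like the following (the table and column names are invented for illustration; on GCP you would run it through the BigQuery client library with query parameters rather than building SQL by hand):

```python
def ownership_query(table: str, owner_column: str, min_rows: int = 1) -> str:
    """Build a BigQuery-style SQL statement that counts objects per owner.
    Illustrative only: real code should use parameterized queries."""
    return (
        f"SELECT {owner_column}, COUNT(*) AS object_count "
        f"FROM `{table}` "
        f"GROUP BY {owner_column} "
        f"HAVING COUNT(*) >= {int(min_rows)} "
        f"ORDER BY object_count DESC"
    )

# Hypothetical dataset path, purely for demonstration.
sql = ownership_query("my_project.inventory.resources", "owner_email")
print(sql)
```

The result of a query like this is what gets joined against Active Directory records to validate that each resource owner still exists.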

Posted 3 months ago

Apply

3 - 6 years

3 - 6 Lacs

Hyderabad

Work from Office


Hadoop Admin (1 position)
- Hadoop administration
- Automation (Ansible, shell scripting, or Python scripting)
- DevOps skills (should be able to code in at least one language, preferably Python)

Location: Preferably Bangalore; otherwise Chennai, Pune, or Hyderabad
Working Type: Remote

Posted 3 months ago

Apply

3 - 5 years

25 - 30 Lacs

Gurgaon

Work from Office


- Design and manage vector databases for AI agent applications.
- Develop data pipelines for processing, storing, and retrieving AI-related data.
- Optimize real-time data streaming and retrieval mechanisms.
- Collaborate with AI engineers to fine-tune data preprocessing and training datasets.
- Implement ETL processes for structured and unstructured data processing.

Skills Required:
- Expertise in vector databases (Weaviate, Pinecone, FAISS).
- Proficiency in SQL, Python, Apache Spark, and Kafka.
- Strong understanding of real-time data processing and ETL frameworks.
- Experience working with AI-driven data architecture.
- Ability to handle large-scale datasets and optimize query performance.
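The vector-database retrieval described above boils down to nearest-neighbour search over embeddings. A brute-force sketch in plain Python is shown below; engines such as Weaviate, Pinecone, or FAISS replace this linear scan with approximate indexes (for example HNSW or IVF) to scale to millions of vectors:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, index, k=2):
    """index: list of (doc_id, vector). Returns k ids ranked by similarity."""
    scored = sorted(index, key=lambda item: cosine(query, item[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy 2-dimensional "embeddings"; real ones have hundreds of dimensions.
index = [("a", [1.0, 0.0]), ("b", [0.9, 0.1]), ("c", [0.0, 1.0])]
print(top_k([1.0, 0.05], index))  # → ['a', 'b']
```

The trade-off an engineer tunes in production is exactly the one hidden here: exact search is simple but O(n) per query, while approximate indexes trade a little recall for large speedups.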

Posted 3 months ago

Apply

2 - 5 years

4 - 7 Lacs

Bengaluru

Work from Office


Job Title: Spark Developer - Immediate Joiner

Responsibilities: A day in the life of an Infoscion: as part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance and issue resolution, and to ensure high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs/systems, and if you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Preferred Skills: Technology -> Big Data - Data Processing -> Spark -> Spark Streaming
Educational Requirements: MCA, MSc, Bachelor of Engineering, BBA, BCA
Service Line: Data & Analytics Unit

*Location of posting is subject to business requirements

Posted 3 months ago

Apply

4 - 8 years

6 - 10 Lacs

Hyderabad

Work from Office


JR REQ - Big Data Engineer - 4 to 8 years - HYD - Karuppiah Mg - TCS C2H - 900000

Posted 3 months ago

Apply

6 - 11 years

0 - 3 Lacs

Bengaluru

Work from Office


SUMMARY
This is a remote position.

Job Description: EMR Admin
We are seeking an experienced EMR Admin with expertise in big data services such as Hive, Metastore, HBase, and Hue. The ideal candidate should also possess knowledge of Terraform and Jenkins. Familiarity with Kerberos and Ansible would be an added advantage, although not mandatory. Additionally, candidates with Hadoop admin skills, proficiency in Terraform and Jenkins, and the ability to handle EMR Admin responsibilities are encouraged to apply.

Location: Remote
Experience: 6+ years
Must-Have: The candidate should have 4 years in EMR administration.

Requirements:
- Proven experience in EMR administration
- Proficiency in big data services including Hive, Metastore, HBase, and Hue
- Knowledge of Terraform and Jenkins
- Familiarity with Kerberos and Ansible tools (preferred)
- Experience in Hadoop administration (preferred)

Posted 3 months ago

Apply

3 - 5 years

12 - 14 Lacs

Delhi NCR, Mumbai, Bengaluru

Work from Office


We are looking for a highly skilled and motivated Data Engineer to join our dynamic team. In this role, you will collaborate with cross-functional teams to design, build, and maintain scalable data platforms on the AWS Cloud. You'll play a key role in developing next-generation data solutions and optimizing current implementations.

Key Responsibilities:
- Build and maintain high-performance data pipelines using AWS Glue, EMR, Databricks, and Spark.
- Design and implement robust ETL processes to integrate and analyze large datasets.
- Develop and optimize data models for reporting, analytics, and machine learning workflows.
- Use Python, PySpark, and SQL for data transformation and optimization.
- Ensure data governance, security, and performance on AWS Cloud platforms.
- Collaborate with stakeholders to translate business needs into technical solutions.

Required Skills & Experience:
- 3-5 years of hands-on experience in data engineering.
- Proficiency in Python, SQL, and PySpark.
- Strong knowledge of Big Data ecosystems (Hadoop, Hive, Sqoop, HDFS).
- Expertise in Spark (Spark Core, Spark Streaming, Spark SQL) and Databricks.
- Experience with AWS services like EMR, Glue, S3, EC2/EKS, and Lambda.
- Solid understanding of data modeling, warehousing, and ETL processes.
- Familiarity with data governance, quality, and security principles.

Location: Anywhere in India (Hyderabad, Ahmedabad, Pune, Chennai, Kolkata)
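One pattern behind the ETL responsibilities listed above is incremental (watermark-based) loading: each run extracts only rows newer than the last successful run. A simplified sketch in plain Python follows; in practice this logic lives in a Glue or Spark job and the watermark is persisted in a job bookmark or metadata table, and all names here are illustrative:

```python
def incremental_extract(rows, watermark):
    """Return rows with updated_at strictly greater than the watermark,
    plus the new watermark to persist for the next run."""
    fresh = [r for r in rows if r["updated_at"] > watermark]
    # ISO-8601 date strings compare correctly as plain strings.
    new_watermark = max((r["updated_at"] for r in fresh), default=watermark)
    return fresh, new_watermark

rows = [
    {"id": 1, "updated_at": "2024-01-01"},
    {"id": 2, "updated_at": "2024-02-15"},
    {"id": 3, "updated_at": "2024-03-10"},
]
batch, wm = incremental_extract(rows, "2024-01-31")
print(len(batch), wm)  # 2 rows pass the watermark; new watermark is 2024-03-10
```

Running the function a second time with the saved watermark yields an empty batch, which is the property that makes the load safely re-runnable.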

Posted 3 months ago

Apply

3 - 7 years

11 - 15 Lacs

Mumbai

Work from Office


Choosing Capgemini means choosing a company where you will be empowered to shape your career in the way you'd like, where you'll be supported and inspired by a collaborative community of colleagues around the world, and where you'll be able to reimagine what's possible. Join us and help the world's leading organizations unlock the value of technology and build a more sustainable, more inclusive world.

Job Description
A Data Platform Engineer specialises in the design, build, and maintenance of cloud-based data infrastructure and platforms for data-intensive applications and services. They develop Infrastructure as Code and manage the foundational systems and tools for efficient data storage, processing, and management. This role involves architecting robust and scalable cloud data infrastructure, including selecting and implementing suitable storage solutions, data processing frameworks, and data orchestration tools. Additionally, a Data Platform Engineer ensures the continuous evolution of the data platform to meet changing data needs and leverage technological advancements, while maintaining high levels of data security, availability, and performance. They are also tasked with creating and managing processes and tools that enhance operational efficiency, including optimising data flow and ensuring seamless data integration, all of which are essential for enabling developers to build, deploy, and operate data-centric applications efficiently.

Job Description - Grade Specific
A senior leadership role that entails the oversight of multiple teams or a substantial team of data platform engineers, the management of intricate data infrastructure projects, and strategic decisions that shape the technological direction of data platform engineering. Key responsibilities encompass:
- Strategic Leadership: Leading multiple data platform engineering teams, steering substantial projects, and setting the strategic course for data platform development and operations.
- Complex Project Management: Supervising the execution of intricate data infrastructure projects, ensuring alignment with client objectives and the delivery of value.
- Technical and Strategic Decision-Making: Making well-informed decisions concerning data platform architecture, tools, and processes, balancing technical considerations with broader business goals.
- Influencing Technical Direction: Using deep technical expertise in data platform engineering to influence the direction of the team and the client, driving enhancements in data platform technologies and processes.
- Innovation and Contribution to the Discipline: Serving as innovators and influencers within the field of data platform engineering, contributing to the advancement of the discipline through thought leadership and knowledge sharing.
- Leadership and Mentorship: Offering mentorship and guidance to both managers and technical personnel, cultivating a culture of excellence and innovation within the domain of data platform engineering.

Posted 3 months ago

Apply

2 - 5 years

14 - 17 Lacs

Bengaluru

Work from Office


As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from source to target and implementing solutions that address the client's needs.

Your primary responsibilities include:
- Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Good background in real-time streaming data integrations
- Extensive experience with messaging and stream processing on Kafka
- Experience creating both producer and consumer applications (data push and pull)
- Ability to set up and configure Kafka brokers
- Experience with Kafka Connect and Kafka Streams

Preferred technical and professional experience:
- Experience in Kafka security set-up to support connecting different types of sources and applications
- Experience with PySpark to support building end-to-end data pipelines
- Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala
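For context on the producer-side work above: Kafka routes each keyed message to a partition by hashing the key, which is what preserves per-key ordering across a topic. The sketch below illustrates the routing idea in plain Python; it is not byte-compatible with Kafka's actual default partitioner (the Java client hashes with murmur2), and `crc32` stands in purely for illustration:

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Deterministically map a message key to a partition.
    (Kafka's real partitioner uses murmur2; crc32 is a stand-in here.)"""
    return zlib.crc32(key) % num_partitions

# The same key always lands on the same partition, so all events for
# one entity (e.g. one order) are consumed in the order produced.
p1 = partition_for(b"order-42", 6)
p2 = partition_for(b"order-42", 6)
print(p1 == p2)  # True
```

This is also why changing a topic's partition count breaks key-to-partition stability: the modulus changes, so existing keys may start landing elsewhere.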

Posted 3 months ago

Apply

5 - 10 years

7 - 12 Lacs

Bengaluru

Work from Office


As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from source to target and implementing solutions that address the client's needs.

Your primary responsibilities include:
- Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Must have 5+ years' experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git
- Developed Python and PySpark programs for data analysis
- Good working experience using Python to develop custom frameworks for generating rules (like a rules engine)
- Developed Python code to gather data from HBase and designed solutions implemented with PySpark
- Used Apache Spark DataFrames/RDDs to apply business transformations, and Hive context objects to perform read/write operations

Preferred technical and professional experience:
- Understanding of DevOps
- Experience building scalable end-to-end data ingestion and processing solutions
- Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala

Posted 3 months ago

Apply

3 - 8 years

5 - 10 Lacs

Bengaluru

Work from Office


Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Data Building Tool
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. You will play a crucial role in shaping the data platform components and architecture.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Collaborate with cross-functional teams to design and implement data platform solutions.
- Develop and maintain data pipelines for efficient data processing.
- Optimize data storage and retrieval processes for improved performance.
- Implement data security measures to protect sensitive information.
- Conduct regular data platform performance monitoring and troubleshooting.

Professional & Technical Skills:
- Must-have skills: Proficiency in Data Building Tool.
- Strong understanding of data architecture principles.
- Experience with cloud-based data platforms like AWS or Azure.
- Hands-on experience with ETL tools for data integration.
- Knowledge of database management systems such as SQL Server or Oracle.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Data Building Tool.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 3 months ago

Apply

5 - 10 years

7 - 11 Lacs

Bengaluru

Work from Office


Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: AWS Glue
Good to have skills: NA

Posted 3 months ago

Apply

3 - 8 years

5 - 9 Lacs

Kolkata

Work from Office


Project Role: Data Governance Practitioner
Project Role Description: Establish and enforce data governance policies to ensure the accuracy, integrity, and security of organizational data. Collaborate with key stakeholders to define data standards; facilitate effective data collection, storage, access, and usage; and drive data stewardship initiatives for comprehensive and effective data governance.
Must have skills: Teradata BI
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: Required UG Degree

Summary: As a Data Governance Practitioner, you will be responsible for establishing and enforcing data governance policies to ensure the accuracy, integrity, and security of organizational data. Your typical day will involve collaborating with key stakeholders to define data standards; facilitating effective data collection, storage, access, and usage; and driving data stewardship initiatives for comprehensive and effective data governance.

Roles & Responsibilities:
- Establish and enforce data governance policies to ensure the accuracy, integrity, and security of organizational data.
- Collaborate with key stakeholders to define data standards and facilitate effective data collection, storage, access, and usage.
- Drive data stewardship initiatives for comprehensive and effective data governance.
- Develop and maintain data governance frameworks, policies, and procedures.
- Provide guidance and support to business units and IT teams on data governance best practices.

Professional & Technical Skills:
- Must-have skills: Proficiency in Teradata BI.
- Good-to-have skills: Experience with other BI tools.
- Strong understanding of data governance principles and best practices.
- Experience developing and maintaining data governance frameworks, policies, and procedures.
- Experience providing guidance and support to business units and IT teams on data governance best practices.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Teradata BI.
- The ideal candidate will possess a strong educational background in computer science, information technology, or a related field, along with a proven track record of delivering impactful data governance solutions.
- This position is based at our Kolkata office.
- Qualification: Required UG Degree

Posted 3 months ago

Apply

7 - 12 years

9 - 13 Lacs

Hyderabad

Work from Office


Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Microsoft Azure Data Services
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: Min 15 years of education

Summary: As a Data Platform Engineer, you will be responsible for assisting with the blueprint and design of the data platform components. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models, and utilizing Microsoft Azure Data Services to deliver impactful data-driven solutions.

Roles & Responsibilities:
- Assist with the blueprint and design of the data platform components.
- Collaborate with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
- Utilize Microsoft Azure Data Services to deliver impactful data-driven solutions.
- Develop and maintain data pipelines, data models, and data integration solutions.
- Design and implement data security and privacy measures to ensure data protection and compliance.

Professional & Technical Skills:
- Must-have skills: Proficiency in Microsoft Azure Data Services.
- Strong understanding of data integration, data modeling, and data security.
- Experience with data pipeline development and maintenance.
- Experience with data integration solutions.
- Experience with data security and privacy measures.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Microsoft Azure Data Services.
- The ideal candidate will possess a strong educational background in computer science or a related field.
- This position is based at our Hyderabad office.
- Qualification: Min 15 years of education

Posted 3 months ago

Apply

3 - 8 years

5 - 9 Lacs

Pune

Work from Office


Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must have skills: Apache Hadoop
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years of full time education

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements using Apache Hadoop. Your typical day will involve working with the Hadoop ecosystem, developing and testing applications, and troubleshooting issues.

Roles & Responsibilities:
- Design, develop, and test applications using Apache Hadoop and related technologies.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Troubleshoot and debug issues in the Hadoop ecosystem, including HDFS, MapReduce, Hive, and Pig.
- Ensure the performance, scalability, and reliability of applications by optimizing code and configurations.

Professional & Technical Skills:
- Must-have skills: Experience with Apache Hadoop.
- Strong understanding of the Hadoop ecosystem, including HDFS, MapReduce, Hive, and Pig.
- Experience with the Java or Scala programming languages.
- Familiarity with SQL and NoSQL databases.
- Experience with data ingestion, processing, and analysis using Hadoop tools like Sqoop, Flume, and Spark.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Apache Hadoop.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Pune office.
- Qualification: 15 years of full time education
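The MapReduce model named in the role above can be illustrated end-to-end with the classic word count. Below is a minimal in-memory sketch of the map, shuffle, and reduce phases in Python; on a real Hadoop cluster each phase is distributed across nodes and the shuffle moves data over the network:

```python
from collections import defaultdict

def map_phase(line):
    """Map: emit a (word, 1) pair for every word in a line of input."""
    return [(word.lower(), 1) for word in line.split()]

def shuffle(pairs):
    """Shuffle: group all values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the grouped counts per word."""
    return {key: sum(values) for key, values in groups.items()}

lines = ["Hadoop stores data", "Spark and Hadoop process data"]
pairs = [p for line in lines for p in map_phase(line)]
counts = reduce_phase(shuffle(pairs))
print(counts["hadoop"], counts["data"])  # 2 2
```

The same three-phase shape underlies Hive and Pig queries, which compile down to jobs like this one.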

Posted 3 months ago

Apply

2 - 4 years

5 - 9 Lacs

Kolkata

Work from Office


Project Role: Application Designer
Project Role Description: Assist in defining requirements and designing applications to meet business process and application requirements.
Must have skills: Google BigQuery
Good to have skills: PySpark, Bigdata Analytics Architecture and Design, Scala Programming Language, ETL processes, and data warehousing
Minimum 2 year(s) of experience is required
Educational Qualification: BTech

Summary: As an Application Designer, you will be responsible for assisting in defining requirements and designing applications to meet business process and application requirements. Your typical day will involve working with Google BigQuery and utilizing your skills in the Scala Programming Language, PySpark, ETL processes, and data warehousing to design and develop data-driven solutions.

Roles & Responsibilities:
  • Design and develop applications using Google BigQuery to meet business process and application requirements.
  • Collaborate with cross-functional teams to define requirements and ensure that applications are designed to meet business needs.
  • Utilize your skills in the Scala Programming Language, PySpark, ETL processes, and data warehousing to design and develop data-driven solutions.
  • Ensure that applications are designed to be scalable, reliable, and maintainable.
  • Stay updated with the latest advancements in Big Data technologies and integrate innovative approaches for sustained competitive advantage.

Professional & Technical Skills:
  • Must have skills: Proficiency in Google BigQuery.
  • Good to have skills: Scala Programming Language, Bigdata Analytics Architecture and Design, PySpark, ETL processes, and data warehousing.
  • Solid understanding of data engineering principles and best practices.
  • Experience in designing and developing data-driven solutions.
  • Experience in working with large datasets and designing scalable solutions.
  • Strong problem-solving skills and ability to work in a fast-paced environment.

Additional Information:
  • The candidate should have a minimum of 2 years of experience in Google BigQuery.
  • The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
  • This position is based at our Hyderabad office.

Qualification: BTech

Posted 3 months ago

Apply

5 - 7 years

9 - 13 Lacs

Bengaluru

Work from Office


Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Microsoft Azure Modern Data Platform
Good to have skills: PySpark
Minimum 7.5 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Key Responsibilities:
  1. Strong development skills in PySpark and Databricks to build complex data pipelines.
  2. Able to deliver assigned development tasks independently.
  3. Able to participate in daily status calls, with good communication skills to manage day-to-day work.

Technical Experience:
  1. More than 5 years of experience in IT.
  2. More than 2 years of experience in technologies such as PySpark and Databricks.
  3. Able to build end-to-end pipelines using PySpark, with good knowledge of Delta Lake.
  4. Good knowledge of Azure services such as Azure Data Factory, and of Azure storage solutions such as ADLS, Delta Lake, and Azure AD.

Professional Attributes: Good communication skills and other soft skills.

Qualification: 15 years of full-time education

Posted 3 months ago

Apply

3 - 5 years

9 - 13 Lacs

Hyderabad

Work from Office


Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Microsoft Azure Data Services
Good to have skills: Microsoft Azure Modern Data Platform, Python (Programming Language), Databricks Unified Data Analytics Platform
Minimum 3 year(s) of experience is required
Educational Qualification: Minimum 15 years of education

Summary: As a Data Platform Engineer, you will be responsible for assisting with the blueprint and design of the data platform, ensuring cohesive integration between systems and data models. Your typical day will involve working with Microsoft Azure Data Services, collaborating with Integration Architects and Data Architects, and utilizing your expertise in data platform components.

Roles & Responsibilities:
  • Assist with the blueprint and design of the data platform, encompassing the relevant data platform components.
  • Collaborate with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
  • Implement and maintain data platform components using Microsoft Azure Data Services.
  • Develop and maintain data pipelines, ensuring data quality and integrity.
  • Troubleshoot and resolve data platform issues, working with cross-functional teams as needed.

Professional & Technical Skills:
  • Must have skills: Expertise in Microsoft Azure Data Services.
  • Good to have skills: Experience with Microsoft Azure Modern Data Platform, Databricks Unified Data Analytics Platform, and Python (Programming Language).
  • Strong understanding of data platform components and architecture.
  • Experience with data pipeline development and maintenance.
  • Ability to troubleshoot and resolve data platform issues.
  • Solid grasp of data quality and integrity best practices.

Additional Information:
  • The candidate should have a minimum of 3 years of experience in Microsoft Azure Data Services.
  • The ideal candidate will possess a strong educational background in computer science, data engineering, or a related field, along with a proven track record of delivering impactful data-driven solutions.
  • This position is based at our Hyderabad office.

Qualification: Minimum 15 years of education

Posted 3 months ago

Apply

3 - 7 years

9 - 13 Lacs

Hyderabad

Work from Office


Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must have skills: Microsoft Azure Data Services
Good to have skills: Microsoft Azure Modern Data Platform, Python (Programming Language), Databricks Unified Data Analytics Platform
Minimum 3 year(s) of experience is required
Educational Qualification: Minimum 15 years of education

Summary: As a Data Platform Engineer, you will be responsible for assisting with the blueprint and design of the data platform, ensuring cohesive integration between systems and data models. Your typical day will involve working with Microsoft Azure Data Services, collaborating with Integration Architects and Data Architects, and utilizing your expertise in data platform components.

Roles & Responsibilities:
  • Assist with the blueprint and design of the data platform, encompassing the relevant data platform components.
  • Collaborate with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
  • Implement and maintain data platform components using Microsoft Azure Data Services.
  • Develop and maintain data pipelines, ensuring data quality and integrity.
  • Troubleshoot and resolve data platform issues, working with cross-functional teams as needed.

Professional & Technical Skills:
  • Must have skills: Expertise in Microsoft Azure Data Services.
  • Good to have skills: Experience with Microsoft Azure Modern Data Platform, Databricks Unified Data Analytics Platform, and Python (Programming Language).
  • Strong understanding of data platform components and architecture.
  • Experience with data pipeline development and maintenance.
  • Ability to troubleshoot and resolve data platform issues.
  • Solid grasp of data quality and integrity best practices.

Additional Information:
  • The candidate should have a minimum of 3 years of experience in Microsoft Azure Data Services.
  • The ideal candidate will possess a strong educational background in computer science, data engineering, or a related field, along with a proven track record of delivering impactful data-driven solutions.
  • This position is based at our Hyderabad office.

Qualification: Minimum 15 years of education

Posted 3 months ago

Apply

5 - 9 years

10 - 14 Lacs

Hyderabad

Work from Office


Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must have skills: Scala Programming Language
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: NA

Summary: As an Application Lead, you will be responsible for leading the effort to design, build, and configure applications using the Scala Programming Language. Your typical day will involve collaborating with cross-functional teams, managing project timelines, and ensuring the successful delivery of high-quality software solutions.

Roles & Responsibilities:
  • Lead the design, development, and deployment of software applications using the Scala Programming Language.
  • Collaborate with cross-functional teams to identify and prioritize project requirements, ensuring timely delivery of high-quality software solutions.
  • Manage project timelines and resources, ensuring successful project delivery within budget and scope.
  • Provide technical leadership and mentorship to junior team members, promoting a culture of continuous learning and improvement.
  • Stay up-to-date with emerging trends and technologies in software engineering, applying innovative approaches to drive sustained competitive advantage.

Professional & Technical Skills:
  • Must have skills: Proficiency in the Scala Programming Language.
  • Good to have skills: Experience with Java, Python, or other programming languages.
  • Strong understanding of software engineering principles and best practices.
  • Experience with Agile development methodologies and tools such as JIRA or Confluence.
  • Experience with cloud-based technologies such as AWS or Azure.
  • Solid grasp of database technologies such as SQL or NoSQL.

Additional Information:
  • The candidate should have a minimum of 5 years of experience in the Scala Programming Language.
  • The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful software solutions.
  • This position is based at our Hyderabad office.

Qualification: NA

Posted 3 months ago

Apply

Exploring Sqoop Jobs in India

India has seen a rise in demand for professionals skilled in Sqoop, a tool designed for efficiently transferring bulk data between Apache Hadoop and structured datastores such as relational databases. Job seekers with expertise in Sqoop can explore various opportunities in the Indian job market.
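In practice, these bulk transfers are driven from the Sqoop command line. A minimal sketch of an import and a matching export, where the JDBC URL, database, table names, credentials, and HDFS paths are all hypothetical placeholders to be replaced with values from your own environment:

```shell
# Illustrative only: host, database, tables, user, and paths are made up.
# Import the `employees` table from MySQL into HDFS using 4 parallel map tasks,
# splitting the work on the `emp_id` column.
sqoop import \
  --connect jdbc:mysql://db.example.com/corp \
  --username sqoop_user \
  --password-file /user/sqoop_user/db.password \
  --table employees \
  --target-dir /data/corp/employees \
  --split-by emp_id \
  --num-mappers 4

# Export processed results from HDFS back into a relational table.
sqoop export \
  --connect jdbc:mysql://db.example.com/corp \
  --username sqoop_user \
  --password-file /user/sqoop_user/db.password \
  --table employee_summary \
  --export-dir /data/corp/employee_summary
```

Using `--password-file` (rather than `--password` on the command line) keeps the credential out of shell history and process listings.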

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

Average Salary Range

The average salary range for Sqoop professionals in India varies by experience level:

  • Entry-level: Rs. 3-5 lakhs per annum
  • Mid-level: Rs. 6-10 lakhs per annum
  • Experienced: Rs. 12-20 lakhs per annum

Career Path

Typically, a career in Sqoop progresses as follows:

  1. Junior Developer
  2. Sqoop Developer
  3. Senior Developer
  4. Tech Lead

Related Skills

In addition to expertise in Sqoop, professionals in this field are often expected to have knowledge of:

  • Apache Hadoop
  • SQL
  • Data warehousing concepts
  • ETL tools

Interview Questions

  • What is Sqoop and why is it used? (basic)
  • Explain the difference between Sqoop import and Sqoop export commands. (medium)
  • How can you perform incremental imports using Sqoop? (medium)
  • What are the limitations of Sqoop? (medium)
  • What is the purpose of the metastore in Sqoop? (advanced)
  • Explain the various options available in the Sqoop import command. (medium)
  • How can you schedule Sqoop jobs in a production environment? (advanced)
  • What is the role of the Sqoop connector in data transfer? (medium)
  • How does Sqoop handle data consistency during imports? (medium)
  • Can you use Sqoop with NoSQL databases? If yes, how? (advanced)
  • What are the different file formats supported by Sqoop for importing and exporting data? (basic)
  • Explain the concept of split-by column in Sqoop. (medium)
  • How can you import data directly into Hive using Sqoop? (medium)
  • What are the security considerations while using Sqoop? (advanced)
  • How can you improve the performance of Sqoop imports? (medium)
  • Explain the syntax of the Sqoop export command. (basic)
  • What is the significance of boundary queries in Sqoop? (medium)
  • How does Sqoop handle data serialization and deserialization? (medium)
  • What are the different authentication mechanisms supported by Sqoop? (advanced)
  • How can you troubleshoot common issues in Sqoop imports? (medium)
  • Explain the concept of direct mode in Sqoop. (medium)
  • What are the best practices for optimizing Sqoop performance? (advanced)
  • How does Sqoop handle data types mapping between Hadoop and relational databases? (medium)
  • What are the differences between Sqoop and Flume? (basic)
  • How can you import data from a mainframe into Hadoop using Sqoop? (advanced)
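Several of the questions above (incremental imports, the metastore, scheduling jobs, split-by columns, and Hive imports) come down to specific command-line options. A hedged sketch, again with hypothetical connection details and table names:

```shell
# Hypothetical JDBC URL and tables; adjust for your environment.
CONNECT="jdbc:mysql://db.example.com/corp"

# Incremental append import: only rows whose `order_id` exceeds the given
# --last-value are pulled on this run.
sqoop import \
  --connect "$CONNECT" --username sqoop_user -P \
  --table orders \
  --target-dir /data/corp/orders \
  --incremental append \
  --check-column order_id \
  --last-value 15000

# Saving this as a named job lets Sqoop's metastore record the new
# --last-value after each run, which is how incremental imports are
# typically scheduled (e.g. the job is invoked from cron or Oozie).
sqoop job --create daily_orders -- import \
  --connect "$CONNECT" --username sqoop_user -P \
  --table orders \
  --target-dir /data/corp/orders \
  --incremental append --check-column order_id --last-value 15000

sqoop job --exec daily_orders

# Import straight into a Hive table, letting Sqoop create the Hive schema
# from the source table's column types.
sqoop import \
  --connect "$CONNECT" --username sqoop_user -P \
  --table customers \
  --hive-import \
  --hive-table corp.customers \
  --split-by customer_id
```

The `--check-column` for append mode should be a monotonically increasing key; for tables updated in place, `--incremental lastmodified` with a timestamp column is the usual alternative.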

Closing Remark

As you explore job opportunities in the field of Sqoop in India, make sure to prepare thoroughly and showcase your skills confidently during interviews. Stay updated with the latest trends and advancements in Sqoop to enhance your career prospects. Good luck with your job search!
