602 Sqoop Jobs - Page 20

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3 - 7 years

10 - 14 Lacs

Bengaluru

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Google Kubernetes Engine
Good-to-have skills: Kubernetes, Google BigQuery, Google Dataproc
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for overseeing the entire application development process and ensuring its successful implementation. This role requires strong leadership skills and the ability to collaborate effectively with cross-functional teams.

Roles & Responsibilities:
- Lead the effort to design, build, and configure applications.
- Act as the primary point of contact for all application-related matters.
- Collaborate with cross-functional teams to ensure successful implementation of applications.
- Perform independently and grow into an SME.
- Participate in and contribute to team discussions.
- Contribute to providing solutions to work-related problems.
- Manage and prioritize tasks to meet project deadlines.
- Provide technical guidance and mentorship to junior team members.

Professional & Technical Skills:
- Must-have: Proficiency in Google Kubernetes Engine, Kubernetes, Google BigQuery, Google Dataproc.
- Strong understanding of containerization and orchestration using Google Kubernetes Engine.
- Experience with Google Cloud Platform services such as Google BigQuery and Google Dataproc.
- Hands-on experience designing and implementing scalable, reliable applications on Google Kubernetes Engine.
- Solid understanding of microservices architecture and its implementation on Kubernetes.
- Familiarity with CI/CD pipelines and tools such as Jenkins or GitLab.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Google Kubernetes Engine.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Qualifications: 15 years of full-time education

Posted 2 months ago

Apply

5 - 9 years

10 - 14 Lacs

Bengaluru

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: BE

Summary: As a Databricks Unified Data Analytics Platform Application Lead, you will be responsible for leading the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve working with the Databricks Unified Data Analytics Platform, collaborating with cross-functional teams, and ensuring the successful delivery of applications.

Roles & Responsibilities:
- Lead the design, development, and deployment of applications using the Databricks Unified Data Analytics Platform.
- Act as the primary point of contact for all application-related activities, collaborating with cross-functional teams to ensure successful delivery.
- Ensure the quality and integrity of applications through rigorous testing and debugging.
- Provide technical guidance and mentorship to junior team members, fostering a culture of continuous learning and improvement.

Professional & Technical Skills:
- Must-have: Expertise in the Databricks Unified Data Analytics Platform.
- Good to have: Experience with other big data technologies such as Hadoop, Spark, and Kafka.
- Strong understanding of software engineering principles and best practices.
- Experience with agile development methodologies and tools such as JIRA and Confluence.
- Proficiency in programming languages such as Python, Java, or Scala.

Additional Information:
- The candidate should have a minimum of 5 years of experience in the Databricks Unified Data Analytics Platform.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Chennai office.

Qualifications: BE

Posted 2 months ago

Apply

7 - 8 years

15 - 25 Lacs

Chennai

Work from Office

Assistant Manager - Data Engineering

Job Summary: We are seeking a Lead GCP Data Engineer with experience in data modeling and building data pipelines. The ideal candidate should have hands-on experience with GCP services such as Composer, GCS, GBQ, Dataflow, Dataproc, and Pub/Sub, along with a proven track record in designing data solutions, from data integration through end-to-end storage in BigQuery.

Responsibilities:
- Collaborate with the client's data architects: work closely with client data architects and technical teams to design and develop customized data solutions that meet business requirements.
- Design data flows: architect and implement data flows that ensure seamless data movement from source systems to target systems, facilitating real-time or batch data ingestion, processing, and transformation.
- Data integration & ETL processes: design and manage ETL processes, ensuring the efficient integration of diverse data sources and high-quality data pipelines.
- Build data products in GBQ: build data products using Google BigQuery (GBQ), designing data models and ensuring data is structured and optimized for analysis.
- Stakeholder interaction: regularly engage with business stakeholders to gather data requirements and translate them into technical specifications, building solutions that align with business needs.
- Ensure data quality & security: implement best practices in data governance, security, and compliance for both storage and processing of sensitive data.
- Continuous improvement: evaluate and recommend new technologies and tools to improve data architecture, performance, and scalability.

Skills:
- 6+ years of development experience
- 4+ years of experience with SQL and Python
- 2+ years with GCP: BigQuery, Dataflow, GCS, Postgres
- 3+ years of experience building data pipelines from scratch in a highly distributed and fault-tolerant manner
- Experience with Cloud SQL, Cloud Functions, Pub/Sub, Cloud Composer, etc.
- Familiarity with big data and machine learning tools and platforms
- Comfortable with open-source technologies including Apache Spark, Hadoop, and Kafka
- Comfortable with a broad array of relational and non-relational databases
- Proven track record of building applications in a data-focused role (cloud and traditional data warehouse)
- Current or previous experience leading a team

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.

Posted 2 months ago

Apply

2 - 7 years

4 - 9 Lacs

Bengaluru

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: AWS Glue
Good-to-have skills: NA
Minimum 2 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for overseeing the entire application development process and ensuring its successful implementation. This role requires strong leadership skills and the ability to communicate effectively with stakeholders and team members.

Roles & Responsibilities:
- Perform independently and grow into an SME.
- Participate in and contribute to team discussions.
- Contribute to providing solutions to work-related problems.
- Lead the design, development, and implementation of applications.
- Act as the primary point of contact for all application-related matters.
- Collaborate with stakeholders to gather requirements and understand business needs.
- Provide technical guidance and mentorship to the development team.
- Ensure the successful delivery of high-quality applications.
- Identify and mitigate risks and issues throughout the development process.

Professional & Technical Skills:
- Must-have: Proficiency in AWS Glue.
- Strong understanding of cloud computing concepts and architecture.
- Experience with AWS services such as S3, Lambda, and Glue.
- Hands-on experience with ETL (Extract, Transform, Load) processes.
- Familiarity with data warehousing and data modeling concepts.
- Good to have: Experience with AWS Redshift.
- Knowledge of SQL and database management systems.
- Experience with data integration and data migration projects.

Additional Information:
- The candidate should have a minimum of 2 years of experience in AWS Glue.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Qualifications: 15 years of full-time education

Posted 2 months ago

Apply

12 - 16 years

3 - 7 Lacs

Kolkata

Work from Office

Project Role: Data Management Practitioner
Project Role Description: Maintain the quality and compliance of an organization's data assets. Design and implement data strategies, ensuring data integrity and enforcing governance policies. Establish protocols to handle data, safeguard sensitive information, and optimize data usage within the organization. Design and advise on data quality rules and set up effective data compliance policies.
Must-have skills: Data Architecture Principles
Good-to-have skills: NA
Minimum 12 year(s) of experience is required.
Educational Qualification: Any graduate

Summary: As a Data Management Practitioner, you will be responsible for designing and implementing data strategies, ensuring data integrity, and enforcing governance policies. Your typical day will involve maintaining the quality and compliance of an organization's data assets, establishing protocols to handle data, safeguarding sensitive information, and optimizing data usage within the organization.

Roles & Responsibilities:
- Design and implement data strategies, ensuring data integrity and enforcing governance policies.
- Maintain the quality and compliance of an organization's data assets.
- Establish protocols to handle data, safeguard sensitive information, and optimize data usage within the organization.
- Design and advise on data quality rules and set up effective data compliance policies.

Professional & Technical Skills:
- Must-have: Strong understanding of Data Architecture Principles.
- Good to have: Experience with data governance tools and technologies.
- Experience designing and implementing data strategies.
- Experience establishing protocols to handle data, safeguard sensitive information, and optimize data usage within the organization.
- Experience designing and advising on data quality rules and setting up effective data compliance policies.

Additional Information:
- The candidate should have a minimum of 12 years of experience in Data Architecture Principles.
- The ideal candidate will possess a strong educational background in computer science, information technology, or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Kolkata office.

Qualifications: Any graduate

Posted 2 months ago

Apply

7 - 10 years

9 - 13 Lacs

Chennai

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: Engineering graduate, preferably Computer Science; 15 years of full-time education

Summary: As a Data Platform Engineer, you will be responsible for assisting with the blueprint and design of the data platform components using Databricks Unified Data Analytics Platform. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.

Roles & Responsibilities:
- Assist with the blueprint and design of the data platform components using Databricks Unified Data Analytics Platform.
- Collaborate with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
- Develop and maintain data pipelines using Databricks Unified Data Analytics Platform.
- Design and implement data security and access controls using Databricks Unified Data Analytics Platform.
- Troubleshoot and resolve issues related to data platform components using Databricks Unified Data Analytics Platform.

Professional & Technical Skills:
- Must-have: Experience with Databricks Unified Data Analytics Platform.
- Must-have: Strong understanding of data platform components and architecture.
- Good to have: Experience with cloud-based data platforms such as AWS or Azure.
- Good to have: Experience with data security and access controls.
- Good to have: Experience with data pipeline development and maintenance.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Databricks Unified Data Analytics Platform.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Bangalore, Hyderabad, Chennai and Pune offices. Mandatory office attendance (RTO) for 2-3 days, working in 2 shifts (Shift A: 10:00 am to 8:00 pm IST; Shift B: 12:30 pm to 10:30 pm IST).

Qualification: Engineering graduate, preferably Computer Science; 15 years of full-time education

Posted 2 months ago

Apply

5 - 10 years

10 - 14 Lacs

Pune

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: Engineering graduate, preferably Computer Science; 15 years of full-time education

Summary: As an Application Lead, you will be responsible for leading the effort to design, build, and configure applications using PySpark. Your typical day will involve collaborating with cross-functional teams, developing and deploying PySpark applications, and acting as the primary point of contact for the project.

Roles & Responsibilities:
- Lead the effort to design, build, and configure PySpark applications, collaborating with cross-functional teams to ensure project success.
- Develop and deploy PySpark applications, ensuring adherence to best practices and standards.
- Act as the primary point of contact for the project, communicating effectively with stakeholders and providing regular updates on project progress.
- Provide technical guidance and mentorship to junior team members, ensuring their continued growth and development.
- Stay updated with the latest advancements in PySpark and related technologies, integrating innovative approaches for sustained competitive advantage.

Professional & Technical Skills:
- Must-have: Strong experience in PySpark.
- Good to have: Experience with Hadoop, Hive, and other big data technologies.
- Solid understanding of software development principles and best practices.
- Experience with agile development methodologies.
- Strong problem-solving and analytical skills.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Bangalore, Hyderabad, Chennai and Pune offices. Mandatory office attendance (RTO) for 2-3 days, working in 2 shifts (Shift A: 10:00 am to 8:00 pm IST; Shift B: 12:30 pm to 10:30 pm IST).

Qualification: Engineering graduate, preferably Computer Science; 15 years of full-time education

Posted 2 months ago

Apply

7 - 10 years

9 - 13 Lacs

Chennai

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: Engineering graduate, preferably Computer Science; 15 years of full-time education

Summary: As a Data Platform Engineer, you will be responsible for assisting with the blueprint and design of the data platform components using Databricks Unified Data Analytics Platform. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.

Roles & Responsibilities:
- Assist with the blueprint and design of the data platform components using Databricks Unified Data Analytics Platform.
- Collaborate with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
- Develop and maintain data pipelines using Databricks Unified Data Analytics Platform.
- Design and implement data security and access controls using Databricks Unified Data Analytics Platform.
- Troubleshoot and resolve issues related to data platform components using Databricks Unified Data Analytics Platform.

Professional & Technical Skills:
- Must-have: Experience with Databricks Unified Data Analytics Platform.
- Good to have: Experience with other big data technologies such as Hadoop, Spark, and Kafka.
- Strong understanding of data modeling and database design principles.
- Experience with data security and access controls.
- Experience with data pipeline development and maintenance.
- Experience troubleshooting and resolving issues related to data platform components.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Databricks Unified Data Analytics Platform.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Bangalore, Hyderabad, Chennai and Pune offices. Mandatory office attendance (RTO) for 2-3 days, working in 2 shifts (Shift A: 10:00 am to 8:00 pm IST; Shift B: 12:30 pm to 10:30 pm IST).

Qualification: Engineering graduate, preferably Computer Science; 15 years of full-time education

Posted 2 months ago

Apply

8 - 13 years

30 - 35 Lacs

Mumbai

Work from Office

Paramatrix Technologies Pvt Ltd is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey.
- Liaising with coworkers and clients to elucidate the requirements for each task.
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
- Reformulating existing frameworks to optimize their functioning.
- Testing such structures to ensure that they are fit for use.
- Preparing raw data for manipulation by data scientists.
- Detecting and correcting errors in your work.
- Ensuring that your work remains backed up and readily accessible to relevant coworkers.
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 2 months ago

Apply

4 - 9 years

6 - 11 Lacs

Bengaluru

Work from Office

About The Role

Basic Qualifications:
- 4+ years of experience in data processing and software engineering; able to build high-quality, scalable, data-oriented products.
- Experience with distributed data technologies (e.g. Hadoop, MapReduce, Spark, EMR, etc.) for building efficient, large-scale data pipelines.
- Strong software engineering experience with an in-depth understanding of Python, Scala, Java, or equivalent.
- Strong understanding of data architecture, modeling, and infrastructure.
- Experience building workflows (ETL pipelines).
- Experience with SQL and optimizing queries.
- Problem solver with attention to detail who can see complex problems in the data space through end to end.
- Willingness to work in a fast-paced environment.
- MS/BS in Computer Science or relevant industry experience.

Preferred Qualifications:
- Experience building scalable applications on the cloud (Amazon AWS, Google Cloud, etc.).
- Experience building stream-processing applications (Spark Streaming, Apache Flink, Kafka, etc.).

Posted 2 months ago

Apply

2 - 7 years

4 - 9 Lacs

Karnataka

Work from Office

Description

Skills:
- Proficiency in SQL is a must; PL/SQL to understand the integration stored-procedure part.
- Experience in PostgreSQL is a must.
- Basic knowledge of Google Cloud Composer (or Apache Airflow). Composer is the managed GCP service for Apache Airflow; all pipelines are orchestrated and scheduled through Composer.
- GCP basics: high-level understanding of the GCP UI and services such as Cloud SQL (PostgreSQL), Cloud Composer, Cloud Storage, and Dataproc.
- Airflow DAGs are written in Python, so basic knowledge of Python code for DAGs is needed.
- Dataproc is managed Spark on GCP, so some PySpark knowledge is also nice to have.

Additional Details:
- Named Job Posting? (if Yes, needs to be approved by SCSC): No
- Global Grade: C
- Level: To Be Defined
- Remote work possibility: No
- Global Role Family: 60235 (P) Sales, Account Management & Solution design
- Local Role Name: 6480 Account Manager
- Local Skills: 6170 SQL
- Languages Required: English
- Role Rarity: To Be Defined

Posted 2 months ago

Apply

2 - 7 years

4 - 9 Lacs

Andhra Pradesh

Work from Office

Description
1. Hands-on industry experience in design and coding from scratch in AWS Glue/PySpark with services like S3, DynamoDB, Step Functions, etc.
2. Hands-on industry experience in design and coding from scratch in Snowflake.
3. Experience in PySpark/Snowflake of 1 to 3 years, with around 5 years of overall experience in building data/analytics solutions.

Level: Senior Consultant or below

Additional Details:
- Named Job Posting? (if Yes, needs to be approved by SCSC): No
- Global Grade: C
- Level: To Be Defined
- Remote work possibility: Yes
- Global Role Family: 60236 (P) Software Engineering
- Local Role Name: 6361 Software Engineer
- Local Skills: 59383 AWS Glue
- Languages Required: English
- Role Rarity: To Be Defined

Posted 2 months ago

Apply

2 - 7 years

4 - 9 Lacs

Dadra and Nagar Haveli, Chandigarh, Daman

Work from Office

JR REQ: Azure Data Bricks, PySpark; 3-6 years; all persis locations; Vaishali Rai.
Location: Chandigarh, Dadra & Nagar Haveli, Daman, Diu, Goa, Jammu, Lakshadweep, New Delhi, Puducherry, Sikkim

Posted 2 months ago

Apply

2 - 7 years

4 - 9 Lacs

Dadra and Nagar Haveli, Chandigarh, Daman

Work from Office

Databricks and Big Data. Client: Persistent; Role: C2H; AM: Vikas; Experience: 8+ years; Budget: 30 LPA; Location POC: Swathi Patil

About The Role
Mandatory Skills: Big Data, Databricks, SQL, Python, PySpark, Data Warehousing, AWS, GIT

Sr Data Engineer JD:
- Must have 8+ years of total experience.
- Hands-on experience in SQL, Python, PySpark, AWS Databricks.
- Good knowledge of Big Data and Data Warehousing concepts.
- Good knowledge of GIT and CI/CD.
- Customer-focused, reacts well to changes, works with teams and is able to multi-task.
- Must be a proven performer and team player who enjoys challenging assignments in a high-energy, fast-growing, start-up workplace.
- Must be a self-starter who can work well with minimal guidance and in a fluid environment.
- Act as a technical mentor and guide/support junior team members technically.

Location: Chandigarh, Dadra & Nagar Haveli, Daman, Diu, Goa, Jammu, Lakshadweep, New Delhi, Puducherry, Sikkim

Posted 2 months ago

Apply

2 - 7 years

4 - 9 Lacs

Dadra and Nagar Haveli, Chandigarh

Work from Office

Data Engineer

Skills Required:
- Strong proficiency in PySpark, Scala, and Python
- Experience with AWS Glue

Experience Required: Minimum 5 years of relevant experience
Location: Available across all UST locations
Notice Period: Immediate joiners (candidates available to join by 31st January 2025)
SO - 22978624
Location: Chandigarh, Dadra & Nagar Haveli, Daman, Diu, Goa, Jammu, Lakshadweep, New Delhi, Puducherry, Sikkim

Posted 2 months ago

Apply

2 - 7 years

4 - 9 Lacs

Hyderabad

Work from Office

JR REQ: Data Engineer (PySpark, Big Data); 4 to 8 years; Hyderabad; hemanth.karanam@tcs.com; TCS C2H; 900000

Posted 2 months ago

Apply

4 - 6 years

6 - 8 Lacs

Pune

Work from Office

Capgemini Invent

Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses.

Your role:
- Analyse and organize raw data.
- Build data systems and pipelines.
- Evaluate business needs and objectives.
- Interpret trends and patterns.
- Conduct complex data analysis and report on results.
- Prepare data for prescriptive and predictive modelling.
- Build algorithms and prototypes.
- Combine raw information from different sources.
- Explore ways to enhance data quality and reliability.
- Identify opportunities for data acquisition.
- Develop analytical tools and programs.
- Collaborate with data scientists and architects on several projects.
- Participate in code peer reviews to ensure our applications comply with best practices.

Your profile:
- Experience with any big data tools: Hadoop, Spark, Kafka, Sqoop, Flume, Hive, etc.
- Experience with any relational SQL and NoSQL databases, including Postgres, Cassandra, SQL Server, Oracle, Snowflake.
- Experience with any data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience in any cloud platform: Azure, AWS or GCP.
- Experience with stream-processing systems: Storm, Spark Streaming, etc.
- Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
- Must have hands-on experience in DevOps and CI/CD deployments.
- Should know basic and advanced SQL and be able to write complex queries.
- Strong experience in data warehousing and dimensional modelling.
- Should be a very good team player, able to work in a geographically dispersed team.

What you will love about working here: We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini: Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.

Posted 2 months ago

Apply

5 - 8 years

7 - 10 Lacs

Mumbai

Work from Office

Skill required: Talent & HR - Talent Management
Designation: PPSM Sr Analyst
Qualifications: Any Graduation
Years of Experience: 5 to 8 years

What would you do?
Improve workforce performance and productivity, boost business agility, and increase revenue while reducing costs. As part of the Talent & HR process, support workforce behavior in alignment with the organization's business strategy by designing, developing, implementing, and executing key HR processes: strategic planning; supply and demand; hiring and sourcing; onboarding and integration; training and development; objective-setting and performance management; and compensation and rewards.

What are we looking for?
- Microsoft Project Plan / ADO maintenance:
  - Extract data from ADO dashboards and update demand plans daily.
  - Create new line items as requested, assign buckets, and set baselines.
  - Provide PWA access to new joiners and address access-related queries.
- Onboarding/offboarding support:
  - Guide project managers (PMs) and leads regarding onboarding/offboarding.
  - Coordinate with relevant teams for laptop/security key requests (Chromebooks, Windows, MacBooks).
- Reporting:
  - Send fortnightly/monthly/quarterly budget updates to PA leads with budget vs. actuals and ETC data.
  - Send monthly Customer Satisfaction Score (CSS) reminders for pending tasks.
  - Prepare MBR/QBR decks with data extracted from ADO databases.
  - Generate ad-hoc data reports as required.
- Contractor management (ad-hoc):
  - Manage contractor onboarding/offboarding using Beeline tools.
  - Track Work Order (WO) end dates and liaise with PMs for extensions or offboarding.
  - Address contractor queries related to salaries, timesheets, and WBSEs.
- Maintain session trackers and send communications using Google tools (e.g., Google Newsletter, Google Gamma).
- Collaborate directly with clients to address session queries and resolve issues promptly.
- Experience: 8+ years in PMO, project management, or related roles.
- Technical proficiency: proficient in tools such as ADO, Microsoft Project, Google Suite, Beeline, and MS Office (Excel, PowerPoint); familiarity with Google Gamma, Google Newsletter, and related communication platforms.
- Soft skills: exceptional organizational and multitasking abilities; strong communication and stakeholder management skills; attention to detail and a problem-solving mindset.
- Certifications (preferred): PMP, PRINCE2, or other project management certifications.

Roles and Responsibilities:
- In this role you are required to analyze and solve increasingly complex problems.
- Your day-to-day interactions are with peers within Accenture.
- You are likely to have some interaction with clients and/or Accenture management.
- You will be given minimal instruction on daily work/tasks and a moderate level of instruction on new assignments.
- Decisions that you make impact your own work and may impact the work of others.
- You would be an individual contributor and/or oversee a small work effort and/or team.
- Please note that this role may require you to work in rotational shifts.

Qualifications: Any Graduation

Posted 2 months ago

Apply

2 - 5 years

14 - 17 Lacs

Hyderabad

Work from Office

As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.

Responsibilities:
- Manage end-to-end feature development and resolve challenges faced in implementing it.
- Learn new technologies and apply them in feature development within the time frame provided.
- Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Overall, more than 6 years of experience, with more than 4 years of strong hands-on experience in Python and Spark.
- Strong technical ability to understand, design, write and debug applications in Python and PySpark.
- Strong problem-solving skills.

Preferred technical and professional experience:
- Good to have: hands-on experience with a cloud technology (AWS/GCP/Azure).

Posted 3 months ago

Apply

4 - 9 years

6 - 11 Lacs

Chennai, Bengaluru, Gurgaon

Work from Office

Job Summary: We are seeking a highly skilled Big Data professional with expertise in Hadoop, Spark, and NoSQL technologies. The ideal candidate will possess strong proficiency in SQL, experience with Hive, familiarity with Sqoop, and a solid understanding of performance optimization techniques. This role requires collaboration with data scientists, analysts, and business stakeholders to design and maintain robust data pipelines that support our analytics and reporting needs.

Software Requirements:
- Strong proficiency in SQL
- Experience with Hive
- Familiarity with Sqoop
- Knowledge of Spark is a plus
- Experience in performance optimization techniques
- Proficiency in on-premises solutions, especially with Azure
- Familiarity with Cloudera and Hortonworks platforms

Overall Responsibilities:
- Design, develop, and maintain data pipelines and ETL processes to support data analytics and reporting needs.
- Ensure data integrity and quality throughout the data lifecycle.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver efficient solutions.
- Optimize database performance and troubleshoot issues related to data storage and retrieval.
- Stay updated on industry trends and emerging technologies to continuously enhance data architecture and operations.

Technical Skills:
- Database technologies: SQL (must-have), Hive (must-have), Sqoop (must-have), Spark (preferred)
- Performance optimization: techniques for optimizing data processing and query performance
- Cloud technologies: Azure (on-premises focus)
- Big data platforms: Cloudera, Hortonworks

Experience:
- 6-14 years of experience in data engineering or a related field.
- Proven experience with database management and data warehousing.
- Familiarity with big data technologies and cloud-based solutions is an advantage.

Day-to-Day Activities:
- Develop and maintain data pipelines from various sources to databases.
- Monitor data flows and troubleshoot issues in real time.
- Collaborate with stakeholders to gather requirements and deliver actionable insights.
- Conduct data quality checks and performance optimizations on existing systems.
- Participate in team meetings to discuss project progress and share knowledge.

Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. Master's degree is a plus.

Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.
- Ability to work independently and as part of a team.
- Adaptability to changing technologies and methodologies.

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative, Same Difference, is committed to fostering an inclusive culture promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.

All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.

Location: Bengaluru, Chennai, Gurugram, Mumbai

Posted 3 months ago

Apply

4 - 6 years

6 - 8 Lacs

Hyderabad

Work from Office

JPC REQ: Data Engineer; 4+ years; 9.2 LPA; TCS; CTOH; kruppiah mg; SHARVIN

Posted 3 months ago

Apply

4 - 9 years

6 - 11 Lacs

Bengaluru

Work from Office

About The Role:
The Intel Foundry Manufacturing and Supply Chain (FMSC) Automation team is looking for a highly motivated Big Data Engineer with strong data engineering skills for data integration of various manufacturing data. You will be responsible for engaging with customers and driving development from ideation to deployment and beyond. This is a technical role that requires the direct design and development of robust, scalable, performant systems for world-class manufacturing data engineering.

Responsibilities include:
- Create and maintain optimal data architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Work with stakeholders, including users and cross-functional teams, to assist with data-related technical issues and support their data infrastructure needs.
- Follow standard processes to keep data secure with the right access and authorization.
- Focus on automated testing and robust monitoring.

The ideal candidate must exhibit the following behavioral traits:
- Excellent problem-solving and interpersonal communication skills.
- Strong desire to learn and share knowledge with others.
- Inquisitive, innovative, and a team player with a strong focus on quality workmanship.
- Troubleshooting skills and root cause analysis for performance issues.
- Ability to learn, adopt and implement new skills to drive innovation and excellence.
- Ability to work with cross-functional teams in a dynamic environment.

Qualifications
Minimum qualifications:
- A bachelor's degree with 4+ years of experience in a related field
- Experience building and optimizing big data pipelines
- Experience handling unstructured data
- Experience with data transformations, structures, metadata, and workload management
- Experience with big data tools: Spark, Kafka, NiFi, etc.
- Experience with programming languages such as Python, C#, .NET
- Experience with relational SQL and NoSQL databases
- Experience leveraging open-source packages
- Experience with cloud-native tooling such as Docker, Kubernetes, Rancher, etc.

Good-to-have skills:
- Experience with semiconductor manufacturing
- Experience with data engineering on cloud
- Experience developing AI/ML solutions

Inside this Business Group:
As the world's largest chip manufacturer, Intel strives to make every facet of semiconductor manufacturing state-of-the-art, from semiconductor process development and manufacturing, through yield improvement to packaging, final test and optimization, and world-class supply chain and facilities support. Employees in the Technology Development and Manufacturing Group are part of a worldwide network of design, development, manufacturing, and assembly/test facilities, all focused on utilizing the power of Moore's Law to bring smart, connected devices to every person on Earth.

Posted 3 months ago

Apply

7 - 12 years

9 - 14 Lacs

Chennai

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: Engineering graduate, preferably Computer Science; 15 years of full-time education

Summary: As a Data Platform Engineer, you will be responsible for assisting with the blueprint and design of the data platform components using Databricks Unified Data Analytics Platform. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.

Roles & Responsibilities:
- Assist with the blueprint and design of the data platform components using Databricks Unified Data Analytics Platform.
- Collaborate with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
- Develop and maintain data pipelines using Databricks Unified Data Analytics Platform.
- Design and implement data security and access controls using Databricks Unified Data Analytics Platform.
- Troubleshoot and resolve issues related to data platform components using Databricks Unified Data Analytics Platform.

Professional & Technical Skills:
- Must-have: Experience with Databricks Unified Data Analytics Platform.
- Must-have: Strong understanding of data platform components and architecture.
- Good to have: Experience with cloud-based data platforms such as AWS or Azure.
- Good to have: Experience with data security and access controls.
- Good to have: Experience with data pipeline development and maintenance.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Databricks Unified Data Analytics Platform.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Bangalore, Hyderabad, Chennai and Pune offices. Mandatory office attendance (RTO) for 2-3 days, working in 2 shifts (Shift A: 10:00 am to 8:00 pm IST; Shift B: 12:30 pm to 10:30 pm IST).

Qualifications: Engineering graduate, preferably Computer Science; 15 years of full-time education

Posted 3 months ago

Apply

5 - 10 years

9 - 13 Lacs

Bengaluru

Work from Office

We are looking for Java developers with the following skills for the Bangalore location: strong Java developers (able to read and debug code) who are also scripting (Python or Perl programming) experts. Good-to-have skills: big data pipelines, Spark, Hadoop, HBase. Candidates should have strong debugging skills and a minimum of 5+ years of experience.

Location: Bangalore
Experience: 5-10 years
Notice Period: 0-60 days

Posted 3 months ago

Apply

3 - 5 years

5 - 7 Lacs

Pune

Work from Office

Job Title: Hadoop Developer

Responsibilities: A day in the life of an Infoscion: as part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management and adherence to the organizational guidelines and processes. You would be a key contributor to building efficient programs/systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Technical and Professional Requirements: Technology -> Big Data - Hadoop -> Hadoop
Preferred Skills: Technology -> Big Data - Hadoop -> Hadoop

Additional Responsibilities:
- Knowledge of more than one technology
- Basics of architecture and design fundamentals
- Knowledge of testing tools
- Knowledge of agile methodologies
- Understanding of project life cycle activities on development and maintenance projects
- Understanding of one or more estimation methodologies; knowledge of quality processes
- Basics of the business domain to understand the business requirements
- Analytical abilities, strong technical skills, good communication skills
- Good understanding of the technology and domain
- Ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles and modelling methods
- Awareness of the latest technologies and trends
- Excellent problem solving, analytical and debugging skills

Educational Requirements: Bachelor of Engineering
Service Line: Data & Analytics Unit
* Location of posting is subject to business requirements

Posted 3 months ago

Apply

Exploring Sqoop Jobs in India

India has seen a rise in demand for professionals skilled in Sqoop, a tool designed for efficiently transferring bulk data between Apache Hadoop and structured datastores such as relational databases. Job seekers with expertise in Sqoop can explore various opportunities in the Indian job market.
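
To make that transfer concrete, here is a minimal illustrative Sqoop invocation that copies one relational table into HDFS; the connection URL, credentials, table name, and target directory are placeholders, not taken from any posting on this page:

    # Pull the 'customers' table from a MySQL database into HDFS
    # as delimited text files under /data/raw/customers.
    sqoop import \
      --connect jdbc:mysql://db.example.com/shop \
      --username etl_user -P \
      --table customers \
      --target-dir /data/raw/customers

The reverse direction, sqoop export, writes HDFS files back into a relational table; together the two commands cover the bulk-transfer workflow described above.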

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Chennai
  5. Mumbai

Average Salary Range

The average salary range for Sqoop professionals in India varies based on experience level:
  • Entry-level: Rs. 3-5 lakhs per annum
  • Mid-level: Rs. 6-10 lakhs per annum
  • Experienced: Rs. 12-20 lakhs per annum

Career Path

Typically, a career in Sqoop progresses as follows:
  1. Junior Developer
  2. Sqoop Developer
  3. Senior Developer
  4. Tech Lead

Related Skills

In addition to expertise in Sqoop, professionals in this field are often expected to have knowledge of:
  • Apache Hadoop
  • SQL
  • Data warehousing concepts
  • ETL tools

Interview Questions

  • What is Sqoop and why is it used? (basic)
  • Explain the difference between Sqoop import and Sqoop export commands. (medium)
  • How can you perform incremental imports using Sqoop? (medium; see the command sketch after this list)
  • What are the limitations of Sqoop? (medium)
  • What is the purpose of the metastore in Sqoop? (advanced)
  • Explain the various options available in the Sqoop import command. (medium)
  • How can you schedule Sqoop jobs in a production environment? (advanced)
  • What is the role of the Sqoop connector in data transfer? (medium)
  • How does Sqoop handle data consistency during imports? (medium)
  • Can you use Sqoop with NoSQL databases? If yes, how? (advanced)
  • What are the different file formats supported by Sqoop for importing and exporting data? (basic)
  • Explain the concept of split-by column in Sqoop. (medium)
  • How can you import data directly into Hive using Sqoop? (medium)
  • What are the security considerations while using Sqoop? (advanced)
  • How can you improve the performance of Sqoop imports? (medium)
  • Explain the syntax of the Sqoop export command. (basic)
  • What is the significance of boundary queries in Sqoop? (medium)
  • How does Sqoop handle data serialization and deserialization? (medium)
  • What are the different authentication mechanisms supported by Sqoop? (advanced)
  • How can you troubleshoot common issues in Sqoop imports? (medium)
  • Explain the concept of direct mode in Sqoop. (medium)
  • What are the best practices for optimizing Sqoop performance? (advanced)
  • How does Sqoop handle data types mapping between Hadoop and relational databases? (medium)
  • What are the differences between Sqoop and Flume? (basic)
  • How can you import data from a mainframe into Hadoop using Sqoop? (advanced)
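
Several of the command-oriented questions above (incremental imports, split-by, Hive imports, exports) are easiest to answer with concrete invocations. The sketch below uses placeholder connection details, table names, and column names; it illustrates the usual shape of the answers rather than any one environment:

    # Parallel import: 4 mappers, each reading a range of order_id values
    # (the split-by column should be evenly distributed and indexed).
    sqoop import \
      --connect jdbc:mysql://db.example.com/sales \
      --username etl_user --password-file /user/etl/.dbpass \
      --table orders --target-dir /data/raw/orders \
      --split-by order_id --num-mappers 4

    # Incremental append: import only rows whose order_id exceeds the
    # recorded last-value; Sqoop prints the new last-value after the run.
    sqoop import \
      --connect jdbc:mysql://db.example.com/sales \
      --username etl_user --password-file /user/etl/.dbpass \
      --table orders --target-dir /data/raw/orders \
      --incremental append --check-column order_id --last-value 100000

    # Import directly into a Hive table instead of plain HDFS files.
    sqoop import \
      --connect jdbc:mysql://db.example.com/sales \
      --username etl_user --password-file /user/etl/.dbpass \
      --table orders --hive-import --hive-table analytics.orders

    # Export: push processed HDFS output back into a relational table.
    sqoop export \
      --connect jdbc:mysql://db.example.com/sales \
      --username etl_user --password-file /user/etl/.dbpass \
      --table orders_summary --export-dir /data/out/orders_summary

For the scheduling and metastore questions, the usual approach is to save the incremental definition as a named job (sqoop job --create daily_orders -- import ...) and trigger sqoop job --exec daily_orders from cron or an orchestrator such as Oozie or Airflow, letting Sqoop's metastore persist the last-value between runs.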

Closing Remark

As you explore job opportunities in the field of Sqoop in India, make sure to prepare thoroughly and showcase your skills confidently during interviews. Stay updated with the latest trends and advancements in Sqoop to enhance your career prospects. Good luck with your job search!
