2 - 7 years
4 - 9 Lacs
Ahmedabad
Work from Office
Hadoop Administrator

Job Description:
As an open source Hadoop Administrator, your role will involve managing and maintaining an organization's Hadoop infrastructure built on open source technologies. You will be responsible for the installation, configuration, and administration of open source Hadoop clusters and related tools in a production environment. Your primary goal will be to ensure the smooth functioning of the Hadoop ecosystem and to support the organization's data processing and analytics needs.

Responsibilities:
Hadoop Cluster Management: Install, manually configure, and maintain open source Hadoop clusters and related components such as HDFS, YARN, MapReduce, Hive, Pig, Spark, and HBase. Monitor cluster health and performance, troubleshoot issues, and optimize cluster resources.
Capacity Planning: Collaborate with data architects and infrastructure teams to estimate and plan future capacity requirements of the open source Hadoop infrastructure. Scale the cluster up or down as the organization's needs change.
Security and Authentication: Implement and manage security measures for the open source Hadoop environment, including user authentication, authorization, and data encryption. Ensure compliance with security policies and best practices.
Backup and Recovery: Design and implement backup and disaster recovery strategies for the open source Hadoop ecosystem. Regularly perform backups and test recovery procedures to ensure data integrity and availability.
Performance Tuning: Monitor and analyze the performance of open source Hadoop clusters and individual components. Identify and resolve performance bottlenecks, optimize configurations, and fine-tune parameters to achieve optimal performance.
Monitoring and Logging: Set up monitoring tools and alerts to proactively identify and address issues in the open source Hadoop environment. Monitor resource utilization, system logs, and cluster metrics to ensure reliability and performance.
Troubleshooting and Support: Respond to and resolve incidents and service requests related to the open source Hadoop infrastructure. Collaborate with developers, data scientists, and other stakeholders to troubleshoot and resolve issues in a timely manner.
Documentation and Reporting: Maintain detailed documentation of open source Hadoop configurations, procedures, and troubleshooting guidelines. Generate regular reports on cluster performance, resource utilization, and capacity utilization.

Requirements:
Proven experience as a Hadoop Administrator or in a similar role with open source Hadoop distributions such as Apache Hadoop, Apache HBase, Apache Hive, and Apache Spark.
Strong knowledge of open source Hadoop ecosystem components and related technologies.
Experience with installation, configuration, and administration of open source Hadoop clusters.
Proficiency in Linux/Unix operating systems and shell scripting.
Familiarity with cluster management and resource allocation frameworks.
Understanding of data management and processing concepts in distributed computing environments.
Knowledge of security frameworks and best practices in open source Hadoop environments.
Experience with performance tuning, troubleshooting, and optimization of open source Hadoop clusters.
Strong problem-solving and analytical skills.

Hadoop Developer

Job Responsibilities:
A Hadoop developer is responsible for designing, developing, and maintaining Hadoop-based solutions for processing and analyzing large datasets. The job typically includes:
1. Data Ingestion: Collecting and importing data from various sources into the Hadoop ecosystem using tools like Apache Sqoop, Flume, or streaming APIs.
2. Data Transformation: Preprocessing and transforming raw data into a suitable format for analysis using technologies like Apache Hive, Apache Pig, or Spark.
3. Hadoop Ecosystem: Proficiency in working with components like HDFS (Hadoop Distributed File System), MapReduce, YARN, HBase, and others within the Hadoop ecosystem.
4. Programming: Strong coding skills in languages like Java, Python, or Scala for developing custom MapReduce or Spark applications.
5. Cluster Management: Setting up and maintaining Hadoop clusters, including configuring, monitoring, and troubleshooting.
6. Data Security: Implementing security measures to protect sensitive data within the Hadoop cluster.
7. Performance Tuning: Optimizing Hadoop jobs and queries for better performance and efficiency.
8. Data Analysis: Collaborating with data scientists and analysts to assist in data analysis, machine learning, and reporting.
9. Documentation: Maintaining clear documentation of Hadoop jobs, configurations, and processes.
10. Collaboration: Working closely with data engineers, administrators, and other stakeholders to ensure data pipelines and workflows run smoothly.
11. Continuous Learning: Staying up to date with the latest developments in the Hadoop ecosystem and big data technologies.
12. Problem Solving: Identifying and resolving issues related to data processing, performance, and scalability.

Requirements for this role typically include a strong background in software development, knowledge of big data technologies, and proficiency with Hadoop-related tools and languages. Good communication skills and the ability to work in a team are also important for successful collaboration on data projects.
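The MapReduce development work described above can be illustrated with a minimal word-count job in the Hadoop Streaming style: Hadoop pipes input lines through a mapper, sorts the emitted key/value pairs, then pipes the grouped pairs through a reducer. This is a sketch for orientation only, not code from the posting; the sample input is made up.

```python
# Word count in the Hadoop Streaming map -> shuffle/sort -> reduce style,
# simulated locally. On a real cluster the mapper and reducer would be
# separate scripts launched via the hadoop-streaming jar.
from itertools import groupby

def mapper(lines):
    """Emit one (word, 1) pair per token, as a streaming mapper would."""
    for line in lines:
        for word in line.strip().split():
            yield word.lower(), 1

def reducer(pairs):
    """Sum counts per word; mirrors the shuffle/sort by sorting first."""
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    sample = ["hello hadoop hello", "hadoop yarn"]
    for word, count in reducer(mapper(sample)):
        print(f"{word}\t{count}")
```

On a cluster, the same mapper/reducer pair would read stdin and write stdout, with Hadoop handling the distribution and sorting between the two phases.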
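The administrator's capacity-planning responsibility mentioned earlier is, at its core, arithmetic on data volume, growth, and replication. A back-of-envelope sketch follows; the 3x replication factor is the HDFS default, but the 25% headroom and the growth figures in the example are illustrative assumptions, not numbers from the posting.

```python
def raw_storage_needed_tb(
    data_tb: float,
    monthly_growth_rate: float,
    months: int,
    replication_factor: int = 3,  # HDFS default block replication
    overhead: float = 0.25,       # assumed headroom for temp/shuffle space
) -> float:
    """Estimate raw HDFS capacity needed after `months` of compound growth."""
    projected = data_tb * (1 + monthly_growth_rate) ** months
    return projected * replication_factor * (1 + overhead)

# e.g. 100 TB today, growing 5% per month, planned 12 months out:
# projected logical data is ~179.6 TB, so raw capacity is ~673 TB.
```

Estimates like this feed the "scale the cluster up or down" decision: comparing projected raw capacity against current raw capacity tells you how many datanodes to add.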
Posted 2 months ago
7 - 12 years
5 - 9 Lacs
Noida
Work from Office
Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must Have Skills: SAP BW on HANA Data Modeling & Development
Good to Have Skills: NA
Minimum Experience: 7.5 years
Educational Qualification: 15 years of full-time education

Summary:
As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. Your typical day will involve collaborating with teams to develop solutions and ensure seamless application functionality.

Roles & Responsibilities:
Expected to be an SME.
Collaborate with and manage the team's work.
Responsible for team decisions.
Engage with multiple teams and contribute to key decisions.
Provide solutions to problems for the immediate team and across multiple teams.
Lead and mentor junior professionals.
Conduct regular knowledge-sharing sessions.
Stay updated on industry trends and best practices.

Professional & Technical Skills:
Must Have Skills: Proficiency in SAP BW on HANA Data Modeling & Development.
Strong understanding of data modeling concepts.
Experience in developing complex data models.
Knowledge of SAP BW on HANA architecture.
Hands-on experience in optimizing data models for performance.

Additional Information:
The candidate should have a minimum of 7.5 years of experience in SAP BW on HANA Data Modeling & Development.
This position is based at our Noida office.
15 years of full-time education is required.
Posted 2 months ago