
30 Hadoop Administration Jobs - Page 2

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

5 - 8 years

5 - 9 Lacs

Bengaluru

Work from Office

Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

About The Role

Role Purpose: Support process delivery by ensuring the daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do:

Oversee and support the process by reviewing daily transactions on performance parameters:
- Review the performance dashboard and the scores for the team
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop understanding of the process/product for the team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot the most frequent trends and prevent future problems
- Identify red flags and escalate serious client issues to the Team Leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after the call/email requests
- Avoid legal challenges by monitoring compliance with service agreements

Handle technical escalations through effective diagnosis and troubleshooting of client queries:
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements
- If unable to resolve an issue, escalate it to TA & SES in a timely manner
- Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client:
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target
- Inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates
- Enroll in product-specific and any other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Deliver (performance parameters and measures):
1. Process: No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2. Team Management: Productivity, efficiency, absenteeism
3. Capability Development: Triages completed, technical test performance

Mandatory Skills: Hadoop. Experience: 5-8 years (a minimal cluster-triage sketch follows this posting).

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA; as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
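Since the mandatory skill is Hadoop and the role centers on diagnosing and escalating cluster issues, first-line triage usually starts with the standard HDFS admin tooling. Below is a minimal illustrative sketch (not from the posting) that shells out to the real `hdfs dfsadmin -report` and `hdfs fsck` commands and flags common red flags; the script structure and function names are hypothetical.

```python
import subprocess

def run(cmd: list[str]) -> str:
    """Run a command and return its stdout; fails loudly if HDFS tooling is absent."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

def triage_hdfs() -> None:
    # 'hdfs dfsadmin -report' summarizes capacity, live/dead DataNodes, block health.
    report = run(["hdfs", "dfsadmin", "-report"])
    for line in report.splitlines():
        if line.startswith(("Under replicated blocks", "Missing blocks", "Dead datanodes")):
            print("CHECK:", line.strip())

    # 'hdfs fsck /' walks the namespace; a healthy cluster reports "Status: HEALTHY".
    fsck = run(["hdfs", "fsck", "/"])
    status = "HEALTHY" if "Status: HEALTHY" in fsck else "NEEDS ESCALATION"
    print("Filesystem status:", status)

if __name__ == "__main__":
    triage_hdfs()
```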

Posted 2 months ago

Apply

4 - 9 years

11 - 15 Lacs

Bengaluru

Work from Office

About PhonePe Group: PhonePe is India's leading digital payments company, with 50 crore (500 million) registered users and 3.7 crore (37 million) merchants, covering over 99% of the postal codes across India. On the back of its leadership in digital payments, PhonePe has expanded into financial services (Insurance, Mutual Funds, Stock Broking, and Lending) as well as adjacent tech-enabled businesses such as Pincode for hyperlocal shopping and Indus App Store, India's first localized app store. The PhonePe Group is a portfolio of businesses aligned with the company's vision to offer every Indian an equal opportunity to accelerate their progress by unlocking the flow of money and access to services.

Culture: At PhonePe, we take extra care to make sure you give your best at work, every day! Creating the right environment for you is just one of the things we do. We empower people and trust them to do the right thing. Here, you own your work from start to finish, right from day one. Being enthusiastic about tech is a big part of being at PhonePe. If you like building technology that impacts millions, ideating with some of the best minds in the country, and executing your dreams with purpose and speed, join us!

Job Description:
- Minimum of 1 year of experience in Linux/Unix administration
- Minimum of 2 years of hands-on experience managing infrastructure on a public cloud (Azure/AWS/GCP)
- Over 4 years of experience in Hadoop administration
- Strong understanding of networking, open-source technologies, and tools
- Familiarity with best practices and IT operations for maintaining always-up, always-available services
- Experience with and participation in an on-call rotation
- Excellent communication skills
- Solid expertise in Linux networking, including IP, iptables, and IPsec
- Proficiency in scripting and coding with languages such as Perl, Golang, or Python
- Strong knowledge of databases such as MySQL, NoSQL stores, and SQL Server
- Hands-on experience setting up, configuring, and managing Nginx as a reverse proxy and load balancer in high-traffic environments
- Hands-on experience with both private and public cloud environments
- Strong troubleshooting skills and operational expertise in areas such as system capacity, bottlenecks, memory, CPU, OS, storage, and networking
- Practical experience with the Hadoop stack, including HDFS, HBase, Hive, Pig, Airflow, YARN, Ranger, Kafka, and Druid
- Good to have: experience designing, developing, and maintaining Airflow DAGs and tasks to automate BAU processes, ensuring they are robust, scalable, and efficient (a minimal sketch follows this posting)
- Good to have: experience with ELK stack administration
- Experience administering Kerberos and LDAP
- Familiarity with open-source configuration management and deployment tools like Puppet, Salt, or Ansible
- Responsibility for the implementation and ongoing administration of Hadoop infrastructure
- Experience in capacity planning and performance tuning of Hadoop clusters
- Collaborate effectively with infrastructure, network, database, application, and business intelligence teams to ensure high data quality and availability
- Develop tools and services to enhance debuggability and supportability
- Work closely with security teams to apply Hadoop updates, OS patches, and version upgrades as needed
- Troubleshoot complex production issues, identify root causes, and provide mitigation strategies
- Work closely with teams to optimize the overall performance of the PhonePe Hadoop ecosystem
- Experience setting up and managing a monitoring stack such as OpenTSDB, Prometheus, ELK, Grafana, or Loki

PhonePe Full-Time Employee Benefits (not applicable for intern or contract roles):
- Insurance Benefits: Medical Insurance, Critical Illness Insurance, Accidental Insurance, Life Insurance
- Wellness Program: Employee Assistance Program, Onsite Medical Center, Emergency Support System
- Parental Support: Maternity Benefit, Paternity Benefit Program, Adoption Assistance Program, Day-care Support Program
- Mobility Benefits: Relocation Benefits, Transfer Support Policy, Travel Policy
- Retirement Benefits: Employee PF Contribution, Flexible PF Contribution, Gratuity, NPS, Leave Encashment
- Other Benefits: Higher Education Assistance, Car Lease, Salary Advance Policy

Working at PhonePe is a rewarding experience! Great people, a work environment that thrives on creativity, and the opportunity to take on roles beyond a defined job description are just some of the reasons you should work with us. Read more about PhonePe.
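The description explicitly calls out designing and maintaining Airflow DAGs to automate BAU processes. As a flavor of what that involves, here is a minimal sketch of a daily housekeeping DAG, assuming Airflow 2.x; the DAG id, task, and cleanup command are hypothetical, not taken from the posting.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

# A hypothetical BAU job: nightly HDFS housekeeping for a staging area.
with DAG(
    dag_id="bau_hdfs_staging_cleanup",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    # Force a trash checkpoint cleanup and report remaining staging usage.
    cleanup = BashOperator(
        task_id="purge_old_staging",
        bash_command="hdfs dfs -expunge && hdfs dfs -du -h /tmp/staging",
    )
```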

Posted 2 months ago

Apply

6 - 10 years

18 - 30 Lacs

Hyderabad

Hybrid

Position: Big Data or Kubernetes Admin
Location: Hyderabad (Hybrid Mode)
Employment: Full-time with CASPEX
End Client: EXPERIAN

Note: For both profiles, good knowledge of Linux administration and cloud experience is necessary. Kubernetes administration is not always DevOps; a capable Linux or cloud engineer can learn Kubernetes administration in their day-to-day work. That is who we are actually looking for, not someone who only knows DevOps tools without solid Linux and cloud experience.

Linux & AWS & Kubernetes Administrator

Must-have skills:
- Deep understanding of Linux, networking fundamentals, and security
- Experience working with the AWS cloud platform and infrastructure services (EC2, S3, VPC, Subnet, ELB/Load Balancer, RDS, Route 53, etc.)
- Experience working with infrastructure as code using Terraform or Ansible
- Experience building, deploying, and monitoring distributed apps using container systems (Docker) and container orchestration (Kubernetes, EKS)
- Kubernetes administration: cluster setup and management, cluster configuration and networking, upgrades, monitoring and logging, security and compliance, app deployment, etc. (a minimal sketch follows this posting)
- Experience in automation and CI/CD integration, capacity planning, pod scheduling, resource quotas, etc.
- Experience with OS-level upgrades and patching, including vulnerability remediation
- Ability to read and understand code (Java / Python / R / Scala)

Nice-to-have skills:
- Experience in SAS Viya administration
- Experience managing large Big Data clusters
- Experience with Big Data tools like Hue, Hive, Spark, Jupyter, SAS, and R-Studio
- Professional coding experience in at least one programming language, preferably Python
- Knowledge of analytical libraries like Pandas, NumPy, SciPy, PyTorch, etc.

Big Data Administrator & Linux & AWS

Must-have skills:
- Deep understanding of Linux, networking, and security fundamentals
- Experience working with the AWS cloud platform and infrastructure
- Experience working with infrastructure as code using Terraform or Ansible
- Experience managing large Big Data clusters in production (at least one of Cloudera, Hortonworks, EMR)
- Excellent knowledge and solid work experience providing observability for Big Data platforms using tools like Prometheus, InfluxDB, Dynatrace, Grafana, Splunk, etc.
- Expert knowledge of the Hadoop Distributed File System (HDFS) and Hadoop YARN
- Decent knowledge of various Hadoop file formats like ORC, Parquet, Avro, etc.
- Deep understanding of Hive (Tez), Hive LLAP, Presto, and Spark compute engines
- Ability to understand query plans and optimize performance for complex SQL queries on Hive and Spark
- Experience supporting Spark with Python (PySpark) and R (sparklyr, SparkR)
- Solid professional coding experience with at least one scripting language (Shell, Python, etc.)
- Experience working with Data Analysts and Data Scientists, and with at least one related analytical application such as SAS, R-Studio, JupyterHub, H2O, etc.
- Ability to read and understand code (Java, Python, R, Scala), with expertise in at least one scripting language like Python or Shell

Nice-to-have skills:
- Experience with workflow management tools like Airflow, Oozie, etc.
- Knowledge of analytical libraries like Pandas, NumPy, SciPy, PyTorch, etc.
- Implementation history with Packer, Chef, Jenkins, or similar tooling
- Prior working knowledge of Active Directory and Windows-based VDI platforms like Citrix, AWS Workspaces, etc.
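As a taste of the Kubernetes administration duties listed above (cluster health, monitoring, resource quotas), here is a minimal sketch using the official `kubernetes` Python client; the namespace name and quota values are hypothetical.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig, just as kubectl would.
config.load_kube_config()
v1 = client.CoreV1Api()

# Health check: list nodes and report each node's Ready condition.
for node in v1.list_node().items:
    ready = next(
        (c.status for c in node.status.conditions if c.type == "Ready"), "Unknown"
    )
    print(f"{node.metadata.name}: Ready={ready}")

# Apply a hypothetical resource quota to cap a team namespace.
quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="team-quota"),
    spec=client.V1ResourceQuotaSpec(
        hard={"requests.cpu": "20", "requests.memory": "64Gi"}
    ),
)
v1.create_namespaced_resource_quota(namespace="analytics", body=quota)
```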

Posted 2 months ago

Apply

2 - 7 years

4 - 9 Lacs

Ahmedabad

Work from Office

Hadoop Administrator

Job Description: As an Open Source Hadoop Administrator, your role will involve managing and maintaining the Hadoop infrastructure based on open-source technologies within an organization. You will be responsible for the installation, configuration, and administration of open-source Hadoop clusters and related tools in a production environment. Your primary goal will be to ensure the smooth functioning of the Hadoop ecosystem and support the data processing and analytics needs of the organization.

Responsibilities:
- Hadoop Cluster Management: Install, manually configure, and maintain open-source Hadoop clusters and related components such as HDFS, YARN, MapReduce, Hive, Pig, Spark, HBase, etc. Monitor cluster health and performance, troubleshoot issues, and optimize cluster resources.
- Capacity Planning: Collaborate with data architects and infrastructure teams to estimate and plan for future capacity requirements of the open-source Hadoop infrastructure. Scale the cluster up or down based on the changing needs of the organization.
- Security and Authentication: Implement and manage security measures for the open-source Hadoop environment, including user authentication, authorization, and data encryption. Ensure compliance with security policies and best practices.
- Backup and Recovery: Design and implement backup and disaster recovery strategies for the open-source Hadoop ecosystem. Regularly perform backups and test recovery procedures to ensure data integrity and availability.
- Performance Tuning: Monitor and analyze the performance of open-source Hadoop clusters and individual components. Identify and resolve performance bottlenecks, optimize configurations, and fine-tune parameters to achieve optimal performance.
- Monitoring and Logging: Set up monitoring tools and alerts to proactively identify and address issues in the open-source Hadoop environment. Monitor resource utilization, system logs, and cluster metrics to ensure reliability and performance.
- Troubleshooting and Support: Respond to and resolve incidents and service requests related to the open-source Hadoop infrastructure. Collaborate with developers, data scientists, and other stakeholders to troubleshoot and resolve issues in a timely manner.
- Documentation and Reporting: Maintain detailed documentation of open-source Hadoop configurations, procedures, and troubleshooting guidelines. Generate regular reports on cluster performance, resource utilization, and capacity utilization.

Requirements:
- Proven experience as a Hadoop Administrator or in a similar role with open-source Hadoop distributions such as Apache Hadoop, Apache HBase, Apache Hive, Apache Spark, etc.
- Strong knowledge of open-source Hadoop ecosystem components and related technologies.
- Experience with installation, configuration, and administration of open-source Hadoop clusters.
- Proficiency in Linux/Unix operating systems and shell scripting.
- Familiarity with cluster management and resource allocation frameworks.
- Understanding of data management and processing concepts in distributed computing environments.
- Knowledge of security frameworks and best practices in open-source Hadoop environments.
- Experience with performance tuning, troubleshooting, and optimization of open-source Hadoop clusters.
- Strong problem-solving and analytical skills.

Hadoop Developer

Job Responsibilities: A Hadoop Developer is responsible for designing, developing, and maintaining Hadoop-based solutions for processing and analyzing large datasets. The job description typically includes:
1. Data Ingestion: Collecting and importing data from various sources into the Hadoop ecosystem using tools like Apache Sqoop, Flume, or streaming APIs.
2. Data Transformation: Preprocessing and transforming raw data into a suitable format for analysis using technologies like Apache Hive, Apache Pig, or Spark (a minimal PySpark sketch follows this posting).
3. Hadoop Ecosystem: Proficiency in working with components like HDFS (Hadoop Distributed File System), MapReduce, YARN, HBase, and others within the Hadoop ecosystem.
4. Programming: Strong coding skills in languages like Java, Python, or Scala for developing custom MapReduce or Spark applications.
5. Cluster Management: Setting up and maintaining Hadoop clusters, including configuring, monitoring, and troubleshooting.
6. Data Security: Implementing security measures to protect sensitive data within the Hadoop cluster.
7. Performance Tuning: Optimizing Hadoop jobs and queries for better performance and efficiency.
8. Data Analysis: Collaborating with data scientists and analysts to assist in data analysis, machine learning, and reporting.
9. Documentation: Maintaining clear documentation of Hadoop jobs, configurations, and processes.
10. Collaboration: Working closely with data engineers, administrators, and other stakeholders to ensure data pipelines and workflows run smoothly.
11. Continuous Learning: Staying updated with the latest developments in the Hadoop ecosystem and big data technologies.
12. Problem Solving: Identifying and resolving issues related to data processing, performance, and scalability.

Requirements for this role typically include a strong background in software development, knowledge of big data technologies, and proficiency with Hadoop-related tools and languages. Good communication skills and the ability to work in a team are also important for successful collaboration on data projects.
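To illustrate the data-transformation duty above, here is a minimal PySpark sketch that reads raw CSV from HDFS, cleans it, and writes partitioned Parquet back; the paths and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("raw-events-cleanup").getOrCreate()

# Hypothetical input: raw CSV events landed on HDFS with a header row.
raw = spark.read.option("header", True).csv("hdfs:///data/raw/events/")

# Typical preprocessing: drop incomplete rows, normalize types, derive a date column.
clean = (
    raw.dropna(subset=["event_id", "ts"])
       .withColumn("ts", F.to_timestamp("ts"))
       .withColumn("event_date", F.to_date("ts"))
)

# Write columnar Parquet, partitioned by date, for downstream Hive/Spark queries.
clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "hdfs:///data/curated/events/"
)
spark.stop()
```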

Posted 2 months ago

Apply

7 - 12 years

9 - 14 Lacs

Ahmedabad

Work from Office

Project Role: Business Process Architect

Project Role Description: Design business processes, including characteristics and key performance indicators (KPIs), to meet process and functional requirements. Work closely with the Application Architect to create the process blueprint and establish business process requirements to drive out application requirements and metrics. Assist in quality management reviews and ensure all business and design requirements are met. Educate stakeholders to ensure a complete understanding of the designs.

Must-have skills: Data Analytics, Data Warehouse ETL Testing, Big Data Analysis Tools and Techniques, Hadoop Administration
Good-to-have skills: NA
Minimum 7.5 years of experience is required.
Educational Qualification: Specific undergraduate qualifications, i.e., engineering or computer science.

Summary: Experienced Data Engineer with a strong background in Azure data services and broadcast supply chain ecosystems. Skilled in OTT streaming protocols, cloud technologies, and project management.

Roles & Responsibilities:
- Proven experience as a Data Engineer or in a similar role.
- Lead and provide expert guidance to the Principal - Solutions & Integration.
- Track and report on project progress using internal applications.
- Transition customer requirements to on-air operations with proper documentation.
- Scope projects and ensure adherence to budgets and timelines.
- Generate design and integration documentation.

Professional & Technical Skills:
- Strong proficiency in Azure data services (Azure Data Factory, Azure Databricks, Azure SQL Database).
- Experience with SQL, Python, and big data tools (Hadoop, Spark, Kafka); a minimal streaming sketch follows this posting.
- Familiarity with data warehousing, ETL techniques, and microservices in a cloud environment.
- Knowledge of broadcast supply chain ecosystems (BMS, RMS, MAM, Playout, MCR/PCR, NLE, Traffic).
- Experience with OTT streaming protocols, DRM, and content delivery networks.
- Working knowledge of cloud technologies (Azure, Docker, Kubernetes, AWS basics, GCP basics).
- Basic understanding of AWS Media Services (MediaConnect, Elemental, MediaLive, MediaStore, Media2Cloud, S3, Glacier).

Additional Information:
- Minimum of 5 years' experience in Data Analytics disciplines.
- Good presentation and documentation skills.
- Excellent interpersonal skills.
- Undergraduate qualifications in engineering or computer science.

Networking: Apply basic networking knowledge, including TCP/IP, UDP/IP, IGMP, DHCP, DNS, and LAN/WAN technologies, to support video delivery systems.

Highly Desirable: Experience defining technical solutions with over 99.999% reliability.

Qualifications: Specific undergraduate qualifications, i.e., engineering or computer science.
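Since the skills list pairs Spark with Kafka, here is a minimal PySpark Structured Streaming sketch that consumes a Kafka topic and counts events per minute; the broker address and topic name are hypothetical, and the job assumes the spark-sql-kafka connector package is on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-event-counts").getOrCreate()

# Hypothetical broker and topic for illustration only.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "playout-events")
    .load()
)

# Kafka delivers key/value as binary; cast the payload and count per 1-minute window.
counts = (
    events.select(F.col("value").cast("string").alias("payload"), "timestamp")
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

# Console sink keeps the sketch self-contained; production jobs would write elsewhere.
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```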

Posted 2 months ago

Apply
Page 2 of 2

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
