5.0 - 10.0 years
14 - 17 Lacs
Pune
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows from source to target and implementing solutions that tackle the clients' needs. Your primary responsibilities include: design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements; build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization; coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Must have 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis. Good working experience using Python to develop a custom framework for generating rules (similar to a rules engine). Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark, using Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations (a minimal sketch of this pattern follows).

Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala.
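The read/transform/write pattern this posting describes can be illustrated with a minimal PySpark sketch. This is a hypothetical example, not the client's actual code: the `source_db.orders` and `target_db.daily_order_totals` tables and all column names are assumptions, and `enableHiveSupport()` stands in for the legacy HiveContext the posting mentions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hive support replaces the legacy HiveContext mentioned in the posting
spark = (
    SparkSession.builder
    .appName("hive-transform-example")
    .enableHiveSupport()
    .getOrCreate()
)

# Read a source table registered in the Hive metastore (hypothetical name)
orders = spark.table("source_db.orders")

# Apply a business transformation with the DataFrame API
daily_totals = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"))
)

# Write the result back to Hive (hypothetical target table)
daily_totals.write.mode("overwrite").saveAsTable("target_db.daily_order_totals")
```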
Posted 1 month ago
5.0 - 10.0 years
14 - 17 Lacs
Mumbai
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows from source to target and implementing solutions that tackle the clients' needs. Your primary responsibilities include: design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements; build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization; coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Must have 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis. Good working experience using Python to develop a custom framework for generating rules (similar to a rules engine). Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark, using Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations.

Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala.
Posted 1 month ago
8.0 - 13.0 years
4 - 8 Lacs
Mumbai
Work from Office
4+ years of experience as a Data Engineer or similar role. Proficiency in Python, PySpark, and advanced SQL. Hands-on experience with big data tools and frameworks (e.g., Spark, Hive). Experience with cloud data platforms like AWS, Azure, or GCP is a plus. Solid understanding of data modeling, warehousing, and ETL processes. Strong problem-solving and analytical skills. Good communication and teamwork abilities.

Design, build, and maintain data pipelines that collect, process, and store data from various sources. Integrate data from multiple heterogeneous sources such as databases (SQL/NoSQL), APIs, cloud storage, and flat files. Optimize data processing tasks to improve execution efficiency, reduce costs, and minimize processing times, especially when working with large-scale datasets in Spark (a hedged optimization sketch follows). Design and implement data warehousing solutions that centralize data from multiple sources for analysis.
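As an illustration of the Spark optimization work mentioned above, the sketch below shows two standard techniques: broadcasting a small lookup table to avoid a shuffle-heavy join, and repartitioning by the grouping key before a wide aggregation. All paths, table shapes, and column names are assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("pipeline-optimization").getOrCreate()

events = spark.read.parquet("s3://example-bucket/events/")        # large fact data (hypothetical)
countries = spark.read.parquet("s3://example-bucket/countries/")  # small lookup (hypothetical)

# Broadcast the small table so the join avoids a full shuffle
enriched = events.join(broadcast(countries), on="country_code")

# Repartition by the grouping key so the aggregation shuffle is even
summary = (
    enriched.repartition("country_name")
    .groupBy("country_name")
    .agg(F.count("*").alias("event_count"))
)

summary.write.mode("overwrite").parquet("s3://example-bucket/summaries/")
```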
Posted 1 month ago
8.0 - 13.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Experience in SQL and understanding of ETL best practices. Should have good hands-on experience in ETL/Big Data development. Extensive hands-on experience in Scala. Should have experience in Spark/YARN, troubleshooting Spark, Linux, and Python. Experience setting up a Hadoop cluster, plus backup, recovery, and maintenance.
Posted 1 month ago
3.0 - 7.0 years
10 - 14 Lacs
Pune
Work from Office
The developer leads cloud application development/deployment. A developer's responsibility is to lead the execution of a project by working with a senior-level resource on assigned development/deployment activities, and to design, build, and maintain cloud environments focusing on uptime, access, control, and network security using automation and configuration management tools.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Strong proficiency in Java, Spring Framework, Spring Boot, RESTful APIs; excellent understanding of OOP and design patterns. Strong knowledge of ORM tools like Hibernate or JPA and Java-based microservices frameworks; hands-on experience with Spring Boot microservices. Primary skills: Core Java, Spring Boot, Java/J2EE, microservices; Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark. Good to have: Python. Strong knowledge of microservice logging, monitoring, debugging, and testing; in-depth knowledge of relational databases (e.g., MySQL). Experience with container platforms such as Docker and Kubernetes, and messaging platforms such as Kafka or IBM MQ; good understanding of Test-Driven Development. Familiar with Ant, Maven, or other build automation frameworks; good knowledge of basic UNIX commands; experience in concurrent design and multi-threading.

Preferred technical and professional experience: None.
Posted 1 month ago
3.0 - 7.0 years
10 - 14 Lacs
Chennai
Work from Office
As an Associate Software Developer at IBM, you'll work with clients to co-create solutions to major real-world challenges by using best-practice technologies, tools, techniques, and products to translate system requirements into the design and development of customized systems. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Spring Boot, Java/J2EE, Microservices; Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark. Good to have: Python. Preferred technical and professional experience: None.
Posted 1 month ago
4.0 - 5.0 years
9 - 19 Lacs
Hyderabad
Work from Office
Hi all, we have immediate openings for the below requirement.

Role: Hadoop Administration
Skill: Hadoop Administrator (with EMR, Spark, Kafka, HBase, OpenSearch, Snowflake, Neo4j, AWS)
Experience: 4 to 9 years
Work location: Hyderabad
Interview mode: 1st round virtual, 2nd round face-to-face
Notice period: 15 days to immediate joiners only
Interested candidates can share their CV to: sravani.vommi@sonata-software.com
Contact: 7075751998

JD for Hadoop Admin: Hadoop Administrator (with EMR, Spark, Kafka, HBase, OpenSearch, Snowflake, Neo4j, AWS)

Job Summary: We are seeking a highly skilled Hadoop Administrator with hands-on experience managing distributed data platforms such as Hadoop EMR, Spark, Kafka, HBase, OpenSearch, Snowflake, and Neo4j.

Key Responsibilities:

Cluster Management: Administer, manage, and maintain Hadoop EMR clusters, ensuring optimal performance, high availability, and resource utilization. Handle the provisioning, configuration, and scaling of Hadoop clusters, with a focus on EMR, ensuring seamless integration with other ecosystem tools (e.g., Spark, Kafka, HBase). Oversee HBase configurations, performance tuning, and integration within the Hadoop ecosystem. Manage OpenSearch (formerly known as Elasticsearch) for log analytics and large-scale search applications.

Data Integration & Processing: Oversee the performance and optimization of Apache Spark workloads across distributed data environments. Design and manage efficient data pipelines between Snowflake, Kafka, and the Hadoop ecosystem, ensuring seamless data movement and transformation. Implement data storage solutions in Snowflake and manage seamless data transfers to/from Hadoop (EMR) and other environments.

Cloud & AWS Services: Work closely with AWS services such as EC2, S3, ECS, Lambda, IAM, RDS, and CloudWatch to build scalable, cost-efficient solutions for data management and processing. Manage AWS EMR clusters, ensuring they are optimized for big data workloads and integrated with other AWS services.

Security & Compliance: Manage and configure Kerberos authentication and access control mechanisms within the Hadoop ecosystem (HDFS, YARN, Spark) to ensure data security. Implement encryption and secure data transfer policies within Hadoop clusters, Kafka, HBase, and OpenSearch to meet compliance and regulatory requirements. Manage user roles and permissions for access to Snowflake and ensure seamless integration of security policies across platforms.

Monitoring & Troubleshooting: Set up and manage monitoring solutions to ensure the health of the Hadoop ecosystem and related components. Actively monitor and troubleshoot issues with Spark, Kafka, HBase, OpenSearch, and other distributed systems. Provide proactive support to address performance issues, bottlenecks, and failures.

Automation & Optimization: Automate the deployment, scaling, and management of Hadoop and other big data systems using scripting languages (Bash, Python). Optimize the configurations and performance of EMR, Spark, Kafka, HBase, and OpenSearch. Develop scripts and utilities for backup, job monitoring, and performance tuning (a hedged boto3 monitoring sketch follows).
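As a hedged sketch of the Bash/Python automation this role calls for, the snippet below uses boto3 to list active EMR clusters and print their states. The region is an assumption, and a real monitoring script would add alerting on top.

```python
import boto3

# Region is an assumption; a production script would take it from config
emr = boto3.client("emr", region_name="us-east-1")

# Clusters that are currently running or waiting for work
active = emr.list_clusters(ClusterStates=["RUNNING", "WAITING"])

for summary in active["Clusters"]:
    detail = emr.describe_cluster(ClusterId=summary["Id"])["Cluster"]
    state = detail["Status"]["State"]
    print(f"{detail['Name']} ({summary['Id']}): {state}")
```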
Posted 1 month ago
5.0 - 10.0 years
7 - 17 Lacs
Hyderabad
Work from Office
Immediate openings for Big Data Engineer/Developer (Pan India, Contract).
Experience: 5+ years
Skills: Big Data Engineer/Developer; Spark, Scala; HQL, Hive; Control-M; Jenkins; Git
Location: Pan India
Notice period: Immediate
Employment type: Contract
Working mode: Hybrid
Technical analysis and, to some extent, business analysis (knowledge of banking products, credit cards, and their transactions).
Posted 1 month ago
5.0 - 8.0 years
4 - 8 Lacs
Telangana
Work from Office
Education: Bachelor's degree in Computer Science, Engineering, or a related field; a Master's degree is preferred. Experience: Minimum of 4+ years of experience in data engineering or a similar role. Strong programming skills in Python and advanced SQL; strong experience with NumPy, pandas, and DataFrames (a short sketch follows). Strong analytical and problem-solving skills. Excellent communication and collaboration abilities.
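A minimal sketch of the pandas/NumPy skills listed above: loading a file, basic cleaning, a NumPy transform, and a grouped aggregate. The CSV path and column names are hypothetical.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("sales.csv")                  # hypothetical input file
df["amount"] = df["amount"].fillna(0.0)        # basic cleaning
df["log_amount"] = np.log1p(df["amount"])      # NumPy feature transform

# Grouped aggregate over a hypothetical "region" column
summary = df.groupby("region")["amount"].agg(["count", "mean", "sum"])
print(summary)
```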
Posted 1 month ago
7.0 - 12.0 years
4 - 8 Lacs
Pune
Hybrid
Should be capable of developing/configuring data pipelines in a variety of platforms and technologies. Possesses the following technical skills: SQL, Python, PySpark, Hive, ETL, Unix, Control-M (or similar scheduling tools). Can demonstrate strong experience in writing complex SQL queries to perform data analysis on databases such as SQL Server, Oracle, and Hive. Experience with GCP (particularly Airflow, Dataproc, BigQuery) is an advantage (a minimal Airflow sketch follows this posting). Has experience creating solutions which power AI/ML models and generative AI. Ability to work independently on specialized assignments within the context of project deliverables. Takes ownership of providing solutions and tools that iteratively increase engineering efficiencies. Capable of creating designs which help embed standard processes, systems, and operational models into the BAU approach for end-to-end execution of data pipelines. Able to demonstrate problem-solving and analytical abilities, including the ability to critically evaluate information gathered from multiple sources, reconcile conflicts, decompose high-level information into details, and apply sound business and technical domain knowledge. Communicates openly and honestly using sophisticated oral, written, and visual communication and presentation skills; the ability to communicate efficiently at a global level is paramount. Ability to deliver materials of the highest quality to management against tight deadlines. Ability to work effectively under pressure with competing and rapidly changing priorities.

Regards,
IT Recruiter, IDESLABS (P) LTD
srilakshmi.k@ideslabs.com
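A minimal Airflow sketch of the scheduling work described above, assuming Airflow 2.4+ (for the `schedule` argument); the DAG id, schedule, and spark-submit command are all hypothetical.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# A daily DAG that submits a PySpark batch job; all names are hypothetical
with DAG(
    dag_id="daily_batch_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    run_spark_job = BashOperator(
        task_id="run_spark_job",
        bash_command=(
            "spark-submit --master yarn "
            "/opt/jobs/daily_batch.py --run-date {{ ds }}"  # templated run date
        ),
    )
```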
Posted 1 month ago
5.0 - 9.0 years
6 - 9 Lacs
Bengaluru
Work from Office
Looking for a senior PySpark developer with 6+ years of hands-on experience. Build and manage large-scale data solutions using tools like PySpark, Hadoop, Hive, Python, and SQL. Create workflows to process data using IBM TWS. Able to use PySpark to create different reports and handle large datasets. Use HQL/SQL/Hive to query data ad hoc, generate reports, and store data in HDFS. Able to deploy code using Bitbucket, PyCharm, and TeamCity. Can manage people, communicate with several teams, and explain problems/solutions to the business team in a non-technical manner. Primary skills: PySpark, Hadoop, Spark. Role: Developer / Software Engineer.
Posted 1 month ago
6.0 - 8.0 years
8 - 10 Lacs
Mumbai
Work from Office
Design and implement data architecture and models for Big Data solutions using MapR and Hadoop ecosystems. You will optimize data storage, ensure data scalability, and manage complex data workflows. Expertise in Big Data, Hadoop, and MapR architecture is required for this position.
Posted 1 month ago
4.0 - 6.0 years
6 - 8 Lacs
Chennai
Work from Office
Design and implement Big Data solutions using Hadoop and MapR ecosystem. You will work with data processing frameworks like Hive, Pig, and MapReduce to manage and analyze large data sets. Expertise in Hadoop and MapR is required.
Posted 1 month ago
5.0 - 8.0 years
7 - 10 Lacs
Chennai
Work from Office
Design, implement, and optimize Big Data solutions using Hadoop technologies. You will work on data ingestion, processing, and storage, ensuring efficient data pipelines. Strong expertise in Hadoop, HDFS, and MapReduce is essential for this role.
Posted 1 month ago
4.0 - 6.0 years
6 - 8 Lacs
Mumbai
Work from Office
Develops data processing solutions using Scala and PySpark.
Posted 1 month ago
6.0 - 8.0 years
8 - 10 Lacs
Mumbai
Work from Office
Design and implement big data solutions using Hadoop ecosystem tools like MapR. Develop data models, optimize data storage, and ensure seamless integration of big data technologies into enterprise systems.
Posted 1 month ago
14.0 - 22.0 years
45 - 75 Lacs
Pune
Remote
Architecture design and total solution design, from requirements analysis through design and engineering for data ingestion, pipelines, data preparation and orchestration, applying the right ML algorithms on the data stream, and predictions.

Responsibilities: Defining, designing, and delivering ML architecture patterns operable in native and hybrid cloud architectures. Research, analyze, recommend, and select technical approaches to address challenging development and data integration problems related to ML model training and deployment in enterprise applications. Perform research activities to identify emerging technologies and trends that may affect Data Science/ML life-cycle management in the enterprise application portfolio. Implementing the solution using AI orchestration.

Requirements: Hands-on programming and architecture capabilities in Python and Java. Minimum 6+ years of experience in enterprise application development (Java, .NET). Experience in implementing and deploying Machine Learning solutions (using various models, such as Linear/Logistic Regression, Support Vector Machines, (Deep) Neural Networks, Hidden Markov Models, Conditional Random Fields, Topic Modeling, Game Theory, Mechanism Design, etc.). Experience in building data pipelines, data cleaning, feature engineering, and feature stores. Experience with data platforms like Databricks and Snowflake, and AWS/Azure/GCP cloud and data services. Strong hands-on experience with statistical packages and ML libraries (e.g., R, Python scikit-learn, Spark MLlib, etc.). Experience in effective data exploration and visualization (e.g., Excel, Power BI, Tableau, Qlik, etc.). Extensive background in statistical analysis and modeling (distributions, hypothesis testing, probability theory, etc.). Hands-on experience with RDBMS, NoSQL, and big data stores such as Elastic, Cassandra, HBase, Hive, and HDFS. Work experience in Solution Architect/Software Architect/Technical Lead roles. Experience with open-source software. Excellent problem-solving skills and ability to break down complexity. Ability to see multiple solutions to problems and choose the right one for the situation. Excellent written and oral communication skills. Demonstrated technical expertise in architecting solutions around AI, ML, deep learning, and related technologies. Experience developing AI/ML models in real-world environments and integrating AI/ML, using cloud-native or hybrid technologies, into large-scale enterprise applications. In-depth experience with the AI/ML and data analytics services offered on Amazon Web Services and/or Microsoft Azure and their interdependencies. Specializes in at least one part of the AI/ML stack (frameworks and tools like MXNet and TensorFlow; ML platforms such as Amazon SageMaker for data scientists; API-driven AI services like Amazon Lex, Amazon Polly, Amazon Transcribe, Amazon Comprehend, and Amazon Rekognition to quickly add intelligence to applications with a simple API call). Demonstrated experience developing best practices and recommendations around tools/technologies for ML life-cycle capabilities such as data collection, data preparation, feature engineering, model management, MLOps, model deployment approaches, and model monitoring and tuning.
Back end: LLM APIs and hosting, both proprietary and open-source solutions; cloud providers; ML infrastructure. Orchestration: workflow management such as LangChain, LlamaIndex, Hugging Face, Ollama. Data management: LLM cache. Monitoring: LLM Ops tools. Tools & techniques: prompt engineering, embedding models, vector DBs, validation frameworks, annotation tools, transfer learning, and others. Pipelines: Gen AI pipelines and implementation on cloud platforms (preference: Azure Databricks, Docker containers, Nginx, Jenkins). A scikit-learn sketch of the classical ML side follows.
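On the classical ML side of this stack, a small scikit-learn pipeline illustrates one of the listed model families (logistic regression) behind a feature-scaling step. The data is synthetic and the hyperparameters are library defaults, not a recommendation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic data standing in for real features and labels
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),    # simple feature-engineering step
    ("clf", LogisticRegression()),  # one of the model families listed above
])
model.fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))
```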
Posted 1 month ago
6.0 - 11.0 years
10 - 14 Lacs
Hyderabad, Pune, Chennai
Work from Office
Job type: contract-to-hire. 10+ years of software development experience building large-scale distributed data processing systems/applications, data engineering, or large-scale internet systems. At least 4 years of experience developing/leading Big Data solutions at enterprise scale, with at least one end-to-end implementation. Strong experience in programming languages: Java/J2EE/Scala. Good experience with Spark/Hadoop/HDFS architecture, YARN, Confluent Kafka, HBase, Hive, Impala, and NoSQL databases. Experience with batch processing and AutoSys job scheduling and monitoring. Performance analysis, troubleshooting, and resolution (this includes familiarity with and investigation of Cloudera/Hadoop logs). Work with Cloudera on open issues that would result in cluster configuration changes, then implement them as needed. Strong experience with databases such as SQL, Hive, Elasticsearch, HBase, etc. Knowledge of Hadoop security, data management, and governance. Primary skills: Java/Scala, ETL, Spark, Hadoop, Hive, Impala, Sqoop, HBase, Confluent Kafka, Oracle, Linux, Git, Jenkins CI/CD. A hedged Kafka-to-HDFS streaming sketch follows.
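A hedged sketch of the Confluent Kafka + Spark integration named above: a Structured Streaming job that reads a topic and lands Parquet files on HDFS. The broker address, topic, and paths are hypothetical, and the job assumes the spark-sql-kafka connector is on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Hypothetical broker and topic; requires the spark-sql-kafka package
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "transactions")
    .load()
)

# Kafka delivers key/value as binary; cast the payload to a string
parsed = stream.select(F.col("value").cast("string").alias("payload"))

query = (
    parsed.writeStream.format("parquet")
    .option("path", "hdfs:///data/transactions/")
    .option("checkpointLocation", "hdfs:///checkpoints/transactions/")
    .start()
)
query.awaitTermination()
```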
Posted 1 month ago
4.0 - 9.0 years
4 - 7 Lacs
Bengaluru
Work from Office
Immediate job opening for Python + SQL (C2H, Pan India).
Skill: Python + SQL
Job description: Strong programming skills in Python and advanced SQL. Strong experience with NumPy, pandas, and DataFrames. Strong analytical and problem-solving skills. Excellent communication and collaboration abilities.
Posted 1 month ago
6.0 - 11.0 years
5 - 8 Lacs
Bengaluru
Work from Office
Experience with cloud platforms, e.g., AWS, GCP, Azure, etc. Experience with distributed technology tools, viz. SQL, Spark, Python, PySpark, Scala. Performance tuning: optimize SQL and PySpark for performance. Airflow workflow scheduling tool for creating data pipelines. GitHub source control tool, and experience with creating/configuring Jenkins pipelines. Experience with EMR/EC2, Databricks, etc. DWH tools, incl. SQL databases, Presto, and Snowflake. Streaming, serverless architecture.
Posted 1 month ago
5.0 - 10.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Big data (Hadoop and Spark) skills. Programming languages: Python, Scala. Job requirement: This position is for a mid-level data engineer with development experience who will focus on creating new capabilities in the Risk space while maturing our code base and development processes. Qualifications: 3 or more years of work experience with a bachelor's degree, or more than 2 years of work experience with an advanced degree (e.g., Master's, MBA, JD, MD). Experience in creating data-driven business solutions and solving data problems using a wide variety of technologies such as Hadoop, Hive, Spark, MongoDB, and NoSQL, as well as traditional data technologies like RDBMS and MySQL, a plus. Ability to program in one or more scripting languages such as Perl or Python, and one or more programming languages such as Java or Scala. Experience with data visualization and business intelligence tools like Tableau is a plus. Experience with or knowledge of Continuous Integration & Deployment and automation tools such as Jenkins, Artifactory, Git, etc. Experience with or knowledge of Agile and Test-Driven Development methodology. Strong analytical skills with excellent problem-solving ability.
Posted 1 month ago
5.0 - 8.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Must have a minimum of 5 years of experience developing data pipelines. Must have worked on data engineering using Python for at least 5 years. Must have experience working on big data technologies for at least 5 years. Must have experience exploring and proposing solutions and coming up with architecture and technical design. Must have experience working with Docker. Must have experience working on AWS. Must have working experience with DevOps pipelines. Must be an excellent technical leader who can take responsibility for a team and own it. Must have experience practicing Agile methods (Scrum and Kanban). Must have experience interacting with US customers on a day-to-day basis.
Posted 1 month ago
6.0 - 8.0 years
8 - 11 Lacs
Hyderabad
Work from Office
Immediate job openings for Big Data Engineer (Pan India, Contract).
Experience: 6+ years
Skill: Big Data Engineer
Location: Pan India
Notice period: Immediate
Employment type: Contract
PySpark, Azure Databricks; experience with workflows, Unity Catalog, and managed/external data with Delta tables (a hedged Delta table sketch follows).
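A minimal sketch of the Delta table work this opening mentions, assuming a Databricks (or otherwise Delta-enabled) Spark session and Unity Catalog-style three-level names; the catalog, schema, and table names are hypothetical.

```python
from pyspark.sql import SparkSession

# On Databricks a session already exists; elsewhere delta-spark must be configured
spark = SparkSession.builder.appName("delta-example").getOrCreate()

df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])

# Managed Delta table under a hypothetical catalog.schema.table name
df.write.format("delta").mode("overwrite").saveAsTable("main.sales.customers")

# Read it back through the same three-level identifier
spark.table("main.sales.customers").show()
```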
Posted 1 month ago
3.0 - 5.0 years
14 - 19 Lacs
Mumbai, Pune
Work from Office
Company: Marsh McLennan Agency

Description: Marsh McLennan is seeking candidates for the following position based in the Pune office: Senior Engineer/Principal Engineer.

What can you expect? We are seeking a skilled Data Engineer with 3 to 5 years of hands-on experience in building and optimizing data pipelines and architectures. The ideal candidate will have expertise in Spark, AWS Glue, AWS S3, Python, complex SQL, and AWS EMR.

What is in it for you? Holidays (as per the location). Medical & insurance benefits (as per the location). Shared transport (provided the address falls in the service zone). Hybrid way of working. Diversify your experience and learn new skills. Opportunity to work with stakeholders globally to learn and grow.

We will count on you to: Design and implement scalable data solutions that support our data-driven decision-making processes.

What you need to have: SQL and RDBMS knowledge (5/5); Postgres; extensive hands-on experience with database systems carrying tables, schemas, views, and materialized views. AWS knowledge: core and data engineering services, with Glue/Lambda/EMR/DMS/S3 as the services in focus (a hedged Glue sketch follows this posting). ETL knowledge: any ETL tool, preferably Informatica. Data warehousing. Big data: Hadoop concepts; Spark (3/5); Hive (5/5). Python/Java. Interpersonal skills: excellent communication skills and team lead capabilities; good understanding of data systems in large organizational setups; passion for deep diving into data and delivering value out of it.

What makes you stand out: Databricks knowledge. Any reporting tool experience, preferably MicroStrategy.

Marsh McLennan (NYSE: MMC) is the world's leading professional services firm in the areas of risk, strategy, and people. The Company's more than 85,000 colleagues advise clients in over 130 countries. With annual revenue of $23 billion, Marsh McLennan helps clients navigate an increasingly dynamic and complex environment through four market-leading businesses. Marsh provides data-driven risk advisory services and insurance solutions to commercial and consumer clients. Guy Carpenter develops advanced risk, reinsurance, and capital strategies that help clients grow profitably and pursue emerging opportunities. Mercer delivers advice and technology-driven solutions that help organizations redefine the world of work, reshape retirement and investment outcomes, and unlock health and well-being for a changing workforce. Oliver Wyman serves as a critical strategic, economic, and brand advisor to private sector and governmental clients. For more information, visit marshmclennan.com, or follow us on LinkedIn and X.

Marsh McLennan is committed to embracing a diverse, inclusive, and flexible work environment. We aim to attract and retain the best people regardless of their sex/gender, marital or parental status, ethnic origin, nationality, age, background, disability, sexual orientation, caste, gender identity, or any other characteristic protected by applicable law.

Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections, and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one anchor day per week on which their full team will be together in person.
Marsh McLennan (NYSE: MMC) is a global leader in risk, strategy, and people, advising clients in 130 countries across four businesses: Marsh, Guy Carpenter, Mercer, and Oliver Wyman. With annual revenue of $24 billion and more than 90,000 colleagues, Marsh McLennan helps build the confidence to thrive through the power of perspective. For more information, visit marshmclennan.com, or follow on LinkedIn and X.
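A hedged sketch of the AWS Glue work this role centers on: a minimal Glue job script that reads a Data Catalog table, filters it with the Spark DataFrame API, and writes Parquet to S3. It only runs inside the Glue runtime, and the database, table, and bucket names are hypothetical.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalog table as a DynamicFrame (hypothetical database/table names)
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="orders"
)

# Drop to the Spark DataFrame API for the transformation
df = dyf.toDF().filter("amount > 0")

df.write.mode("overwrite").parquet("s3://example-bucket/clean/orders/")
job.commit()
```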
Posted 1 month ago
6.0 - 8.0 years
25 - 30 Lacs
Bengaluru
Work from Office
6+ years of experience in information technology, with a minimum of 3-5 years of experience managing and administering Hadoop/Cloudera environments. Cloudera CDP (Cloudera Data Platform), Cloudera Manager, and related tools. Hadoop ecosystem components (HDFS, YARN, Hive, HBase, Spark, Impala, etc.). Linux system administration, with experience in scripting languages (Python, Bash, etc.) and configuration management tools (Ansible, Puppet, etc.). Security tools (Kerberos, Ranger, Sentry); Docker, Kubernetes, Jenkins. Cloudera Certified Administrator for Apache Hadoop (CCAH) or similar certification. Cluster management, optimization, best-practice implementation, collaboration, and support.
Posted 1 month ago