
32 Hadoop Ecosystem Jobs

Set up a Job Alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

2.0 - 5.0 years

8 - 9 Lacs

Chennai

Work from Office

Job Title: Hadoop Administrator
Location: Chennai, India
Experience: 5 years of experience in IT, with at least 2 years of cloud and system administration experience and at least 3 years of experience with, and a strong understanding of, big data technologies in the Hadoop ecosystem (Hive, HDFS, MapReduce, Flume, Pig, Cloudera, HBase, Sqoop, Spark, etc.).

Company: Smartavya Analytica Private Limited is a niche Data and AI company. Based in Pune, we are pioneers in data-driven innovation, transforming enterprise data into strategic insights. Established in 2017, our team has experience handling large datasets of up to 20 PB in a single implementation and has delivered many successful data and AI projects across major industries, including retail, finance, telecom, manufacturing, insurance, and capital markets. We are leaders in Big Data, Cloud, and Analytics projects with super-specialization in very large data platforms. Smartavya Analytica is a leader in Big Data, Data Warehouse and Data Lake solutions, data migration services, and Machine Learning/Data Science projects on-prem and in the cloud, including migrations in both directions across traditional DWH/DL platforms, Big Data solutions on Hadoop, public cloud, and private cloud. https://smart-analytica.com. Empowering your digital transformation with data modernization and AI.

Job Overview: Smartavya Analytica Private Limited is seeking an experienced Hadoop Administrator to manage and support our Hadoop ecosystem. The ideal candidate will have strong expertise in Hadoop cluster administration, excellent troubleshooting skills, and a proven track record of maintaining and optimizing Hadoop environments.

Key Responsibilities:
- Install, configure, and manage Hadoop clusters, including HDFS, YARN, Hive, HBase, and other ecosystem components.
- Monitor and manage Hadoop cluster performance, capacity, and security.
- Perform routine maintenance tasks such as upgrades, patching, and backups.
- Implement and maintain data ingestion processes using tools like Sqoop, Flume, and Kafka.
- Ensure high availability and disaster recovery of Hadoop clusters.
- Collaborate with development teams to understand requirements and provide appropriate Hadoop solutions.
- Troubleshoot and resolve issues related to the Hadoop ecosystem.
- Maintain documentation of Hadoop environment configurations, processes, and procedures.

Requirements:
- Experience installing, configuring, and tuning Hadoop distributions; hands-on experience with Cloudera.
- Understanding of Hadoop design principles and the factors that affect distributed system performance, including hardware and network considerations.
- Provide infrastructure recommendations, capacity planning, and workload management.
- Develop utilities to better monitor clusters (Ganglia, Nagios, etc.).
- Manage large clusters with huge volumes of data; perform cluster maintenance tasks, including adding and removing nodes, cluster monitoring, and troubleshooting.
- Manage and review Hadoop log files.
- Install and implement security for Hadoop clusters.
- Install Hadoop updates, patches, and version upgrades, and automate them through scripts.
- Serve as the point of contact for vendor escalation; work with Hortonworks to resolve issues.
- Conceptual/working knowledge of basic data management concepts such as ETL, reference/master data, data quality, and RDBMS.
- Working knowledge of a scripting language such as Shell, Python, or Perl.
- Experience with orchestration and deployment tools.
Academic Qualification: BE / B.Tech in Computer Science or equivalent along with hands-on experience in dealing with large data sets and distributed computing in data warehousing and business intelligence systems using Hadoop.
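For illustration only, here is a minimal Python sketch of the kind of routine capacity check a Hadoop administrator might script around the standard `hdfs dfsadmin -report` command; the 80% threshold and the parsing of the summary line are assumptions, and the exact report format can vary by distribution.

```python
import subprocess

def hdfs_capacity_report():
    """Run `hdfs dfsadmin -report` and return its raw text output."""
    result = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def check_dfs_used_percent(report_text, threshold=80.0):
    """Parse the cluster-summary 'DFS Used%' line and warn above the threshold."""
    for line in report_text.splitlines():
        if line.startswith("DFS Used%"):
            used = float(line.split(":")[1].strip().rstrip("%"))
            if used > threshold:
                print(f"WARNING: DFS usage at {used:.1f}% exceeds {threshold}%")
            return used
    return None  # summary line not found; format may differ by distribution

if __name__ == "__main__":
    check_dfs_used_percent(hdfs_capacity_report())
```

In practice a check like this would feed an alerting tool (Nagios, Ganglia, or similar) rather than printing to the console.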

Posted 5 days ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Role Overview: As an Applications Development Senior Programmer Analyst at our company, you will be responsible for participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your main objective will be to contribute to applications systems analysis and programming activities.

Key Responsibilities:
- Conduct tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, and model development, and establish and implement new or revised applications systems and programs to meet specific business needs or user areas
- Monitor and control all phases of the development process (analysis, design, construction, testing, and implementation) and provide user and operational support on applications to business users
- Utilize in-depth specialty knowledge of applications development to analyze complex problems/issues, evaluate business processes, system processes, and industry standards, and make evaluative judgements
- Recommend and develop security measures in post-implementation analysis of business usage to ensure successful system design and functionality
- Consult with users/clients and other technology groups on issues, recommend advanced programming solutions, and install and assist customer exposure systems
- Ensure essential procedures are followed and help define operating standards and processes
- Serve as an advisor or coach to new or lower-level analysts
- Operate with a limited level of direct supervision, exercising independence of judgement and autonomy
- Act as Subject Matter Expert (SME) to senior stakeholders and/or other team members
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients, and assets by driving compliance with applicable laws, rules, and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct, and business practices, and escalating, managing, and reporting control issues with transparency

Qualifications:
- 8+ years of development experience with expertise in:
  - Hadoop Ecosystem (HDFS, Impala, Hive, HBase, etc.)
  - Java server-side development for low-latency applications
  - Scala programming
  - Spark (micro-batching, EOD/real-time)
  - Data analysis, preferably using SQL
  - Financial background preferable
  - Python
  - Linux
- A history of delivering against agreed objectives
- Ability to multi-task and work under pressure
- Ability to pick up new concepts and apply knowledge
- Demonstrated problem-solving skills
- Enthusiastic and proactive approach with willingness to learn
- Excellent analytical and process-based skills, e.g., process flow diagrams, business modeling, and functional design

Education:
- Bachelor's degree/University degree or equivalent experience

(Note: This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.)
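For context on the Spark micro-batching expertise this posting lists, here is a minimal, hedged PySpark Structured Streaming sketch; the Kafka topic, broker address, and checkpoint path are hypothetical placeholders, and a team like this would typically write the production version in Scala or Java.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

spark = (SparkSession.builder
         .appName("trade-events-microbatch")  # hypothetical application name
         .getOrCreate())

# Read a stream from Kafka (topic and brokers are placeholders).
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "trade-events")
          .load())

# Count events in 1-minute windows; each trigger processes one micro-batch.
counts = (events
          .selectExpr("CAST(value AS STRING) AS value", "timestamp")
          .groupBy(window(col("timestamp"), "1 minute"))
          .count())

query = (counts.writeStream
         .outputMode("update")
         .format("console")
         .option("checkpointLocation", "/tmp/checkpoints/trade-events")
         .trigger(processingTime="30 seconds")  # micro-batch interval
         .start())

query.awaitTermination()
```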

Posted 5 days ago

Apply

1.0 - 6.0 years

0 Lacs

Karnataka

On-site

Role Overview: At Goldman Sachs, you will be part of the Engineering team that focuses on solving challenging engineering problems for clients and building massively scalable software and systems. You will have the opportunity to connect people and capital with ideas, guard against cyber threats, and leverage machine learning to turn data into action. As an Engineer at Goldman Sachs, you will play a crucial role in transforming finance and exploring various opportunities in a dynamic environment that requires innovative strategic thinking.

Key Responsibilities:
- Curate, design, and catalogue high-quality data models to ensure accessibility and reliability of data
- Build highly scalable data processing frameworks for a wide range of datasets and applications
- Provide data-driven insights critical to business processes by exposing data in a scalable manner
- Understand existing and potential data sets from both an engineering and business context
- Deploy modern data management tools to curate important data sets, models, and processes while identifying areas for automation and efficiency
- Evaluate, select, and acquire new internal and external data sets contributing to business decision-making
- Engineer streaming data processing pipelines and drive adoption of cloud technology for data processing and warehousing
- Engage with data consumers and producers to design appropriate models to meet all needs

Qualifications Required:
- 1-6 years of relevant work experience in a team-focused environment
- Bachelor's degree (Master's preferred) in a computational field like Computer Science, Applied Mathematics, Engineering, or a related quantitative discipline
- Working knowledge of multiple programming languages such as Python, Java, C++, C#, etc.
- Extensive experience applying domain-driven design to build complex business applications
- Deep understanding of data multidimensionality, curation, and quality, including security, performance, latency, and correctness
- Proficiency in relational and columnar SQL databases and database design
- General knowledge of business processes, data flows, and quantitative models
- Excellent communication skills and ability to work with subject matter experts
- Strong analytical and problem-solving skills with a sense of ownership and urgency
- Ability to collaborate across global teams and simplify complex ideas for communication

Company Details: Goldman Sachs is committed to fostering diversity and inclusion in the workplace and beyond. The firm offers various opportunities for professional and personal growth, including training, development, networks, benefits, wellness programs, and more. Goldman Sachs is dedicated to providing reasonable accommodations for candidates with special needs or disabilities during the recruiting process.

Posted 5 days ago

Apply

8.0 - 13.0 years

20 - 35 Lacs

Hyderabad

Remote

Core Requirements:
- 8+ years with Linux, Bash, Python, SQL.
- 4+ years with Spark, the Hadoop ecosystem, and team leadership.
- Strong AWS experience: EMR, Glue, Athena, Redshift.
- Proven expertise in designing data flows and integration APIs.
- Passion for solving complex problems using modern tech.

Preferred Skills:
- Degree in CS or a related field.
- Python, C++, or a similar programming language.
- Experience with petabyte-scale data, data catalogs (Hive, Glue), and pipeline tools (Airflow, dbt).
- Familiarity with AWS and/or GCP.
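As a flavor of the AWS work listed above, here is a minimal boto3 sketch that submits an Athena query and reads back the results; the database name, SQL, region, and S3 output location are all hypothetical placeholders.

```python
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")  # region is an assumption

# Submit a query; database and output bucket are placeholders.
execution = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) AS events FROM clickstream GROUP BY event_date",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes, then fetch the first page of results.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([field.get("VarCharValue") for field in row["Data"]])
```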

Posted 6 days ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Big Data Engineer at our company, you will be an integral part of our data engineering team. Your primary responsibility will be to design, develop, and optimize scalable data pipelines and big data solutions. To excel in this role, you should have hands-on experience with the Hadoop ecosystem, Apache Spark, and programming languages such as Python (PySpark), Scala, and Java. Your expertise in these technologies will enable you to support analytics and business intelligence initiatives effectively. By leveraging your skills, you will contribute to the success of our data-driven projects and help drive insights that can positively impact our business. If you are a highly skilled and motivated individual with a passion for working with big data, we encourage you to apply for this exciting opportunity to be part of our dynamic team.,
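Below is a minimal, hedged PySpark sketch of the kind of batch pipeline this posting describes; the input path, column names, and output location are hypothetical placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()  # hypothetical job name

# Extract: read raw CSV landed by an upstream process (path is a placeholder).
orders = (spark.read
          .option("header", True)
          .option("inferSchema", True)
          .csv("s3a://example-datalake/raw/orders/"))

# Transform: basic cleansing and a daily revenue aggregate (columns are assumed).
clean = (orders
         .dropDuplicates(["order_id"])
         .filter(F.col("amount") > 0)
         .withColumn("order_date", F.to_date("order_ts")))

daily_revenue = clean.groupBy("order_date").agg(F.sum("amount").alias("revenue"))

# Load: write partitioned Parquet for downstream analytics (path is a placeholder).
(daily_revenue.write
 .mode("overwrite")
 .partitionBy("order_date")
 .parquet("s3a://example-datalake/curated/daily_revenue/"))
```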

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

This position falls under the ICG TTS Operations Technology (OpsTech) Group, focusing on assisting in the implementation of a next-generation Digital Automation Platform and Imaging Workflow Technologies. The ideal candidate should have relevant experience in managing development teams within the distributed systems ecosystem and must exhibit strong teamwork skills. The candidate is expected to possess superior technical knowledge of current programming languages, technologies, and leading-edge development tools. The primary objective of this role is to contribute to applications, systems analysis, and programming activities.

As a Lead Spark Scala Engineer, the candidate should have hands-on knowledge of Spark, PySpark, Scala, Java, and RDBMS such as MS-SQL/Oracle. Familiarity with CI/CD tools such as LightSpeed and uDeploy is also required.

Key Responsibilities:
- Development & Optimization: Develop, test, and deploy production-grade Spark applications in Scala, ensuring optimal performance, scalability, and resource utilization.
- Technical Leadership: Provide guidance to a team of data engineers, promoting a culture of technical excellence and collaboration.
- Code Review & Best Practices: Conduct thorough code reviews, establish coding standards, and enforce best practices for Spark Scala development, data governance, and data quality.
- Performance Tuning: Identify and resolve performance bottlenecks in Spark applications through advanced tuning techniques.
- Deep Spark Expertise: Profound understanding of Spark's architecture, execution model, and optimization techniques.
- Scala Proficiency: Expert-level proficiency in Scala programming, including functional programming paradigms and object-oriented design.
- Big Data Ecosystem: Strong hands-on experience with the broader Hadoop ecosystem and related big data technologies.
- Database Knowledge: Solid understanding of relational and NoSQL databases.
- Communication: Excellent communication, interpersonal, and leadership skills to convey complex technical concepts effectively.
- Problem-Solving: Exceptional analytical and problem-solving abilities with meticulous attention to detail.

Education Requirement:
- Bachelor's degree/University degree or equivalent experience

This position is a full-time role under the Technology Job Family Group and the Applications Development Job Family. The most relevant skills are those listed in the requirements section; additional complementary skills can be found above or obtained from the recruiter.
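As an illustration of the performance-tuning theme above, here is a small PySpark sketch of a broadcast join, one common way to avoid shuffling a large fact table; the role itself is Scala-focused, and the table paths, join key, and shuffle-partition setting here are assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = (SparkSession.builder
         .appName("join-tuning-example")
         .config("spark.sql.shuffle.partitions", "200")  # tune to cluster and data size
         .getOrCreate())

# Large fact table and a small dimension table (paths are placeholders).
transactions = spark.read.parquet("s3a://example/warehouse/transactions/")
branches = spark.read.parquet("s3a://example/warehouse/branches/")

# Broadcasting the small side ships it to every executor instead of
# shuffling the large table across the cluster.
enriched = transactions.join(broadcast(branches), on="branch_id", how="left")

# Cache only if the result is reused by several downstream actions.
enriched.cache()
print(enriched.count())
```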

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As an expert in Google Cloud Platform (GCP) services, Docker, Kubernetes, and the Hadoop ecosystem, you will play a crucial role in the infrastructure development of _VOIS India. Your responsibilities will include leveraging GCP services such as Data Fusion, Dataflow, Pub/Sub, Kubernetes, and Cloud Storage to enhance the efficiency and quality of our operations. You will be expected to have hands-on experience with Google Cloud Build, Terraform, Ansible, and Infrastructure as Code (IaC) to ensure a secure and scalable multi-tenancy infrastructure. Your expertise in the Secure by Design concept and GCP IAM will be essential in maintaining a robust and secure architecture. In addition, your strong DevOps experience with Java, Scala, Python, Windows, and Node.js based applications will be instrumental in driving innovation and automation within the organization. Your familiarity with Unix-based systems and bash programming will further contribute to the seamless integration of various technologies. Working in an Agile environment, you will collaborate with cross-functional teams to deliver results efficiently and effectively. Your ability to adapt to changing requirements and prioritize tasks will be key to your success in this role. _VOIS India is committed to providing equal opportunities for all employees, fostering a diverse and inclusive workplace culture. Our dedication to employee satisfaction has been recognized through various accolades, including being certified as a Great Place to Work in India for four consecutive years. By joining _VOIS India, you become part of a supportive and diverse family that values individual contributions and celebrates a variety of cultures, backgrounds, and perspectives. If you are passionate about technology, innovation, and making a positive impact, we encourage you to apply and become part of our dynamic team. We look forward to welcoming you into our community and working together towards achieving our goals. Apply now and take the first step towards an exciting career with us!
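For context on the Pub/Sub service mentioned above, here is a minimal sketch using the google-cloud-pubsub client library; the project ID, topic ID, and message payload are hypothetical placeholders.

```python
from google.cloud import pubsub_v1

# Project and topic IDs are placeholders for illustration only.
project_id = "example-project"
topic_id = "infra-events"

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, topic_id)

# Publish a small JSON payload; publish() returns a future resolving to the message ID.
future = publisher.publish(
    topic_path,
    data=b'{"event": "deployment", "status": "ok"}',
    source="cloud-build",  # attributes are arbitrary key/value strings
)
print(f"Published message {future.result()}")
```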

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

Oracle Health Data Intelligence is seeking a Senior Software Engineer to join the HealtheCare Care Coordination team. In this role, you will write and configure code based on technical specifications, following the Agile Methodology for timely delivery. Your responsibilities will include enhancing existing software, participating in code reviews, collaborating on technical designs, debugging issues, and identifying process improvements. Additionally, you will provide software release support, mentor new associates, and contribute to training programs.

To excel in this role, you should have experience with distributed systems, building highly available services, service-oriented design patterns, and modern infrastructure components. Proficiency in production operations, cloud technologies, and communication of technical ideas is essential. A Bachelor's or Master's degree in Computer Science Engineering or a related field is required, along with programming experience in Java and Ruby. Familiarity with distributed processing frameworks, RESTful services, Big Data processing, and DevOps technologies is also necessary.

Key Responsibilities:
- Bachelor's or Master's degree in Computer Science Engineering or a related field.
- Programming experience with Java and Ruby.
- Experience with distributed processing frameworks like Hadoop, MapReduce, Spark, etc.
- Development of RESTful services.
- Knowledge of Big Data processing and relational databases.
- Familiarity with DevOps technologies such as Jenkins, Kubernetes, Spinnaker.
- Experience with cloud platforms like OCI, AWS, Azure.
- Proficiency in software engineering best practices and agile methodologies.
- Ability to effectively communicate and build rapport with team members and clients.
- Minimum 3-5 years of relevant experience.

Preferred Qualifications:
- Knowledge and experience with cloud platforms like OCI, AWS, Azure.
- Proficiency in writing documentation and automated tests using frameworks like Cucumber or Selenium.

About Oracle: Oracle is a global leader in cloud solutions, leveraging cutting-edge technology to address current challenges. The company values diverse perspectives and backgrounds, fostering innovation and inclusivity within its workforce. With a commitment to integrity and employee well-being, Oracle offers competitive benefits and encourages community involvement through volunteer programs. The company is dedicated to promoting inclusivity, including individuals with disabilities, throughout the employment process. If you need accessibility assistance or accommodation due to a disability, please contact us at +1 888 404 2494, option one.

Disclaimer: Oracle is an Affirmative Action Employer in the United States.

Posted 1 week ago

Apply

3.0 - 6.0 years

8 - 12 Lacs

Mumbai, Mumbai Suburban, Mumbai (All Areas)

Work from Office

We are looking for a Hadoop Developer with hands-on experience in managing and developing data solutions on Hadoop ecosystems. The candidate should have strong technical expertise in data lake design, data pipelines, and real-time/batch data processing.

Required Candidate Profile:
- 3+ years managing and supporting Cloudera Hadoop on-premise clusters.
- Work on data modelling, governance, migrations, and application development.
- Proficiency in Spark, Hive, Impala, Kafka, and related tools.

Perks and Benefits: To be disclosed post-interview.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As a Business Analytics Analyst (Officer) at Citi in the Internal Audit Analytics Team, you will be an integral part of the Digital Solutions and Innovation (DSI) team, focusing on leveraging analytics to enhance audit efficiency and effectiveness. Your role involves working closely with Internal Audit members to identify opportunities, develop analytics, and automate activities to improve audit performance and coverage.

Your key responsibilities will include actively participating in audit analytics initiatives, defining data needs, executing audit analytics in alignment with professional standards, and supporting the automation of audit testing processes. You will collaborate with audit teams to conduct audits in various banking areas such as Consumer Banking, Investment Banking, Risk, Finance, Compliance, and Technology. Additionally, you will contribute to the development and execution of innovative solutions and analytics, promoting continuous improvement in audit automation activities.

To excel in this role, you should possess at least 3 years of experience in business or audit analysis, with a strong grasp of analytics tools and techniques. Your technical proficiency in areas such as SQL, Python, the Hadoop ecosystem, and Alteryx will be essential. Excellent communication skills are crucial for effectively articulating analytics requirements and results, as well as for building professional relationships with audit and business teams.

Moreover, your role will involve collaborating with technology and business teams to enhance process understanding and data sourcing. You will be expected to have a detail-oriented approach, ensuring accuracy and completeness in your work, along with a proactive problem-solving mindset. Additionally, experience in data visualization tools like Tableau, MicroStrategy, or Cognos, and knowledge of areas like business intelligence, data science, and big data analysis, will be advantageous.

At Citi, you will have the opportunity to work in a highly innovative environment, leveraging the latest technologies while benefiting from a global professional development platform. Our inclusive corporate culture values diversity and equality, providing a supportive workplace for all professionals. With competitive benefits and a focus on continuous learning and development, this role offers a rewarding and challenging career path within Citi's products and services landscape.

Posted 2 weeks ago

Apply

6.0 - 8.0 years

25 - 30 Lacs

Bengaluru

Work from Office

- 6+ years of experience in information technology, with a minimum of 3-5 years managing and administering Hadoop/Cloudera environments.
- Cloudera CDP (Cloudera Data Platform), Cloudera Manager, and related tools.
- Hadoop ecosystem components (HDFS, YARN, Hive, HBase, Spark, Impala, etc.).
- Linux system administration, with experience in scripting languages (Python, Bash, etc.) and configuration management tools (Ansible, Puppet, etc.).
- Security tools (Kerberos, Ranger, Sentry), plus Docker, Kubernetes, and Jenkins.
- Cloudera Certified Administrator for Apache Hadoop (CCAH) or a similar certification.
- Cluster management, optimization, best-practice implementation, collaboration, and support.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Data Governance Architect at Tiger Analytics, you will play a crucial role in designing, architecting, deploying, and maintaining big data-based data governance solutions. Your responsibilities will include technical management throughout the project life cycle, collaboration with various teams, exploring new technologies, and leading a team of data governance engineers. You are expected to have a minimum of 10 years of technical experience, with at least 5 years in the Hadoop ecosystem and 3 years in Data Governance Solutions. Hands-on experience with Data Governance Solutions is essential, along with a good understanding of data catalog, business glossary, metadata, data quality, data profiling, and data lineage. Expertise in technologies such as the Hadoop ecosystem (HDFS, Hive, Sqoop, Kafka, ELK Stack), Spark, Scala, Python, core/advanced Java, and relevant AWS/GCP components is required. Familiarity with Databricks, Snowflake, designing/building cloud-computing infrastructure solutions, data lake design, full life cycle of a Hadoop solution, distributed computing, HDFS administration, and configuration management is a plus. At Tiger Analytics, we value diversity and inclusivity. We encourage individuals with varying skill sets and qualities to apply, even if they do not meet all the criteria for the role. We are an equal-opportunity employer, and our diverse culture and values promote growth and development tailored to individual aspirations. Your designation and compensation will be determined based on your expertise and experience. We offer competitive compensation packages and additional benefits such as health insurance, virtual wellness platforms, car lease programs, and opportunities to engage with knowledge communities. Join us at Tiger Analytics to be part of a dynamic team that is dedicated to pushing the boundaries of AI and analytics to create real outcomes for businesses.,
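As a small illustration of the data-profiling side of governance work like this, here is a hedged PySpark sketch that computes per-column null and distinct counts; the table name is a hypothetical placeholder, and a production profiler would compute these metrics in a single pass rather than one scan per column.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("profile-example").getOrCreate()

# Table name is a placeholder; any registered Hive/catalog table would work.
df = spark.table("governed_zone.customers")

total = df.count()
profile = []
for column in df.columns:
    nulls = df.filter(F.col(column).isNull()).count()
    distinct = df.select(column).distinct().count()
    profile.append((column, total, nulls, distinct))

# Collect the per-column metrics into a small summary DataFrame.
summary = spark.createDataFrame(
    profile, schema=["column", "row_count", "null_count", "distinct_count"]
)
summary.show(truncate=False)
```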

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

You will be working as a Data Architect at Niveus Solutions, a dynamic organization focused on utilizing data for business growth and decision-making. Your role will be crucial in designing, building, and maintaining robust data platforms on Azure and GCP.

As a Senior Data Architect, your responsibilities will include:
- Developing and implementing comprehensive data architectures such as data warehouses, data lakes, and data lakehouses on Azure and GCP.
- Designing data models aligned with business requirements to support efficient data analysis and reporting.
- Creating and optimizing ETL/ELT pipelines using tools like Databricks, Azure Data Factory, or GCP Data Fusion.
- Designing scalable data warehouses on Azure Synapse Analytics or GCP BigQuery for enterprise reporting and analytics.
- Implementing data lakehouses on Azure Databricks or GCP Dataproc for unified data management and analytics.
- Utilizing Hadoop components for distributed data processing and analysis.
- Establishing data governance policies to ensure data quality, security, and compliance.
- Writing scripts in Python, SQL, or Scala to automate data tasks and integrate with other systems.
- Demonstrating expertise in Azure and GCP cloud platforms and mentoring junior team members.
- Collaborating with stakeholders, data analysts, and developers to deliver effective data solutions.

Qualifications required for this role:
- Bachelor's degree in Computer Science, Data Science, or a related field.
- 5+ years of experience in data architecture, data warehousing, and data lakehouse implementation.
- Proficiency in Azure and GCP data services, ETL/ELT tools, and Hadoop components.
- Strong scripting skills in Python, SQL, and Scala.
- Experience in data governance and compliance frameworks, and excellent communication skills.

Bonus points for:
- Certifications such as Azure Data Engineer Associate or GCP Data Engineer.
- Experience in real-time data processing, data visualization tools, and cloud-native data platforms.
- Knowledge of machine learning and artificial intelligence concepts.

If you are a passionate data architect with a successful track record in delivering data solutions, we welcome you to apply and be part of our data-driven journey at Niveus Solutions.

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

You should have a Bachelor's degree in Computer Science, Computer Engineering, or a related technical field; a Master's degree or other advanced degree is preferred. With 4-6+ years of total experience, you should possess at least 2+ years of relevant experience on Big Data platforms. Your skill set should include strong analytical, problem-solving, and communication/articulation skills. Furthermore, you are expected to have 3+ years of experience with big data and the Hadoop ecosystem, including Spark, HDFS, Hive, Sqoop, Hudi, Parquet, Apache NiFi, and Kafka. Proficiency in Scala/Spark is required, and knowledge of Python is considered a plus. Hands-on experience with Oracle and MS-SQL databases is essential. In addition, you should have experience working with job schedulers like CA or AutoSys, as well as familiarity with source code control systems such as Git, Jenkins, and Artifactory. Experience with platforms like Tableau and AtScale will be an advantage in this role.

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

The Applications Development Intermediate Programmer Analyst position is a role at an intermediate level where you will be responsible for participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your main objective will be to contribute to applications systems analysis and programming activities. Your responsibilities will include utilizing your knowledge of applications development procedures and concepts, as well as basic knowledge of other technical areas to identify and define necessary system enhancements. This will involve using script tools, analyzing/interpreting code, consulting with users, clients, and other technology groups on issues, recommending programming solutions, installing, and supporting customer exposure systems. You will also need to apply fundamental knowledge of programming languages for design specifications, analyze applications to identify vulnerabilities and security issues, conduct testing and debugging, and serve as an advisor or coach to new or lower-level analysts. In this role, you will need to identify problems, analyze information, make evaluative judgments to recommend and implement solutions, resolve issues by identifying and selecting solutions through the application of acquired technical experience, operate with a limited level of direct supervision, and exercise independence of judgment and autonomy. Additionally, you will act as a Subject Matter Expert to senior stakeholders and/or other team members. You should have 4-6 years of proven experience in developing and managing Big data solutions using Apache Spark and Scala. It is essential to have a strong hold on Spark-core, Spark-SQL, and Spark Streaming, along with strong programming skills in Scala, Java, or Python. Hands-on experience with technologies like Apache Hive, Apache Kafka, HBase, Couchbase, Sqoop, Flume, etc., proficiency in SQL, experience with relational databases (Oracle/PL-SQL), and familiarity with data warehousing concepts and ETL processes are required. You should also have experience in performance tuning of large technical solutions, knowledge of data modeling, data architecture, data integration techniques, and best practices for data security, privacy, and compliance. Furthermore, experience with JAVA, Web services, Microservices, SOA, Apache Spark, Hive, SQL, and the Hadoop ecosystem is necessary. You should have experience with developing frameworks and utility services, delivering high-quality software following continuous delivery, and using code quality tools. Experience in creating large-scale, multi-tiered, distributed applications with Hadoop and Spark, as well as knowledge of implementing different data storage solutions, is also expected. The ideal candidate will have a Bachelor's degree or University degree or equivalent experience. Please note that this job description provides a high-level overview of the work performed, and other job-related duties may be assigned as required.,

Posted 1 month ago

Apply

7.0 - 11.0 years

0 Lacs

Telangana

On-site

You are a highly skilled and detail-oriented ETL QA - Technical Lead with a solid background in Big Data Testing, the Hadoop ecosystem, and SQL validation. Your primary responsibility will be leading end-to-end testing efforts for data/ETL pipelines across various big data platforms. You will be working closely with cross-functional teams in an Agile environment to ensure the quality and integrity of large-scale data solutions. Your key responsibilities include designing and implementing test strategies for validating large datasets, transformations, and integrations. You will be hands-on testing Hadoop-based data platforms such as HDFS, Hive, and Spark. Additionally, you will develop complex SQL queries for data validation and business rule testing. Collaborating with developers, product owners, and business analysts in Agile ceremonies will also be a crucial part of your role. As the ETL QA - Technical Lead, you will own test planning, test case design, defect tracking, and reporting for assigned modules. Identifying areas of automation and building reusable QA assets will be essential, along with driving QA best practices and mentoring junior QA team members. To excel in this role, you should have 7-11 years of experience in Software Testing, with a minimum of 3 years in Big Data/Hadoop testing. Strong hands-on experience in testing Hadoop components like HDFS, Hive, Spark, and Sqoop is required. Proficiency in SQL, especially in complex joins, aggregations, and data validation, is essential. Experience in ETL/Data Warehouse testing and familiarity with data ingestion, transformation, and validation techniques are also necessary.,
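As a flavor of the SQL-based validation this lead role describes, here is a minimal Spark SQL sketch that reconciles row counts and an amount checksum between a staging table and its warehouse target; both table names and the amount column are hypothetical placeholders.

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("etl-reconciliation-check")
         .enableHiveSupport()
         .getOrCreate())

# Source and target tables are placeholders for a staged-vs-loaded comparison.
src = spark.sql("SELECT COUNT(*) AS cnt, SUM(amount) AS total FROM staging.orders").first()
tgt = spark.sql("SELECT COUNT(*) AS cnt, SUM(amount) AS total FROM warehouse.orders").first()

# Fail fast with a clear message if either check does not reconcile.
assert src["cnt"] == tgt["cnt"], f"Row count mismatch: {src['cnt']} vs {tgt['cnt']}"
assert src["total"] == tgt["total"], f"Amount checksum mismatch: {src['total']} vs {tgt['total']}"
print("Reconciliation passed")
```

In a real test suite these checks would be parameterized per table and reported through the team's defect-tracking workflow rather than plain asserts.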

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

The Applications Development Intermediate Programmer Analyst position is an intermediate-level role where you will be responsible for contributing to the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your main objective will be to assist in applications systems analysis and programming activities.

You will utilize your knowledge of applications development procedures and concepts, along with basic knowledge of technical areas, to identify and define necessary system enhancements. This includes using script tools, analyzing code, and consulting with users, clients, and other technology groups to recommend programming solutions. Additionally, you will install and support customer exposure systems and apply fundamental knowledge of programming languages for design specifications. As an Intermediate Programmer Analyst, you will analyze applications to identify vulnerabilities and security issues, conduct testing and debugging, and serve as an advisor or coach to new or lower-level analysts. You will be responsible for identifying problems, analyzing information, and making evaluative judgments to recommend and implement solutions. Operating with a limited level of direct supervision, you will exercise independence of judgment and autonomy while acting as a subject matter expert to senior stakeholders and/or other team members.

In this role, it is crucial to appropriately assess risk when making business decisions, with a focus on safeguarding Citigroup, its clients, and assets. This includes driving compliance with applicable laws, rules, and regulations, adhering to policies, applying sound ethical judgment, and escalating, managing, and reporting control issues with transparency.

Qualifications:
- 4-6 years of proven experience in developing and managing Big Data solutions using Apache Spark and Scala
- Strong programming skills in Scala, Java, or Python
- Hands-on experience with technologies like Apache Hive, Apache Kafka, HBase, Couchbase, Sqoop, Flume, etc.
- Proficiency in SQL and experience with relational databases (Oracle/PL-SQL)
- Experience working on Kafka and JMS/MQ applications
- Familiarity with data warehousing concepts and ETL processes
- Knowledge of data modeling, data architecture, and data integration techniques
- Experience with Java, Web services, XML, JavaScript, Microservices, SOA, etc.
- Strong technical knowledge of Apache Spark, Hive, SQL, and the Hadoop ecosystem
- Experience with developing frameworks and utility services, logging/monitoring, and high-quality software delivery
- Experience creating large-scale, multi-tiered, distributed applications with Hadoop and Spark
- Profound knowledge of implementing different data storage solutions such as RDBMS, Hive, HBase, Impala, and NoSQL databases

Education:
- Bachelor's degree or equivalent experience

This job description provides a high-level overview of the responsibilities and qualifications for the Applications Development Intermediate Programmer Analyst position. Other job-related duties may be assigned as required.

Posted 1 month ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As an ETL Testing & Big Data professional, you will be responsible for designing and implementing ETL test strategies based on business requirements. Your role involves reviewing and analyzing ETL source code, as well as developing and executing test plans and test cases for ETL processes. Data validation and reconciliation using SQL queries will be a key aspect of your responsibilities. Monitoring ETL jobs, resolving issues affecting data accuracy, and performing performance testing on ETL processes with a focus on optimization are crucial tasks in this role. Ensuring data quality and integrity across various data sources, along with coordinating with development teams to troubleshoot issues and suggest improvements, are essential for success.

You will be expected to utilize automation tools to enhance the efficiency of testing processes and conduct regression testing after ETL releases or updates. Documenting test results, issues, and proposals for resolution, as well as providing support to business users regarding data-related queries, are integral parts of your responsibilities. Staying updated with the latest trends in ETL testing and big data technologies, working closely with data architects to ensure effective data modeling, and participating in technical discussions to contribute to knowledge sharing are key aspects of this role.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3+ years of experience in ETL testing and big data environments.
- Strong proficiency in SQL and data modeling techniques.
- Hands-on experience with the Hadoop ecosystem and related tools.
- Familiarity with ETL tools such as Informatica, Talend, or similar.
- Experience with data quality frameworks and methodologies.
- Knowledge of big data technologies like Spark, Hive, or Pig.
- Excellent analytical and problem-solving skills.
- Proficient communication skills for effective collaboration.
- Ability to manage multiple tasks and meet deadlines efficiently.
- Experience in Java or scripting languages is a plus.
- Strong attention to detail and a commitment to delivering quality work.
- Certifications in data management or testing are a plus.
- Ability to work independently and as part of a team.
- Willingness to adapt to evolving technologies and methodologies.

Skills required: scripting languages, data modeling, data quality frameworks, Hive, Talend, analytical skills, SQL, performance testing, automation tools, Pig, the Hadoop ecosystem, ETL testing, Informatica, Hadoop, data quality, big data, Java, regression testing, Spark.

Posted 1 month ago

Apply

8.0 - 12.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

You should hold a Bachelor's degree in Physics, Mathematics, Engineering, Metallurgy, or Computer Science, along with an MSc in a relevant field such as Physics, Mathematics, Engineering, Computer Science, Chemistry, or Metallurgy. Additionally, you should possess at least 8 years of experience in Data Science and Analytics delivery. Your expertise should include deep knowledge of machine learning, statistics, optimization, and related fields. Proficiency in programming languages like R and Python is essential, as well as experience with machine learning skills such as Natural Language Processing (NLP) and deep learning techniques. Furthermore, you should have hands-on experience with deep learning frameworks like TensorFlow, Keras, Theano, or PyTorch, and be familiar with working with large datasets, including knowledge of extracting data from cloud platforms and the Hadoop ecosystem. Experience in Data Visualization tools like MS Power BI or Tableau, as well as proficiency in SQL and working with RDBMS for data extraction and management, is required. An understanding of Data Warehouse fundamentals, experience in productionizing Machine Learning models in cloud platforms like Azure, GCP, or AWS, and domain experience in the manufacturing industry would be advantageous. Demonstrated leadership skills in nurturing technical talent, successfully completing complex data science projects, and excellent written and verbal communication are essential. As an AI Expert with a minimum of 10 years of experience, your key responsibilities will include serving as a technical expert, providing guidance in the development and implementation of AI solutions, and collaborating with cross-functional teams to integrate AI technologies into products and services. You will actively participate in Agile methodologies, contribute to PI planning, and support the technical planning of products. Additionally, you will analyze technical requirements, propose AI-based solutions, collaborate with stakeholders to design AI models that meet business objectives, and stay updated on the latest advancements in AI technologies. Your role will involve conducting code reviews, mentoring team members, and driving the adoption of AI technologies across the organization. Strong problem-solving skills, a proactive approach to problem resolution, and the ability to work under tight deadlines without compromising quality are crucial for this role. Overall, you will play a critical role in driving significant impact and value in building and growing the Data Science Centre of Excellence, providing machine learning methodology leadership, and designing various POCs using ML/DL/NLP solutions for enterprise problems. Your ability to learn new technologies and techniques, work in a fast-paced environment, and partner with the business to unlock value through data projects will be key to your success in this position.,
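For context on the deep-learning frameworks named above, here is a minimal, hedged Keras sketch of a binary classifier trained on synthetic placeholder data; nothing here is specific to this role.

```python
import numpy as np
from tensorflow import keras

# Synthetic placeholder data: 1,000 samples with 20 numeric features.
X = np.random.rand(1000, 20).astype("float32")
y = np.random.randint(0, 2, size=(1000,))

# A small feed-forward binary classifier.
model = keras.Sequential([
    keras.layers.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```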

Posted 1 month ago

Apply

5.0 - 10.0 years

0 Lacs

Karnataka

On-site

As an experienced professional with 5 to 10 years of experience in the field of information technology, you will be responsible for creating data models for corporate analytics in compliance with standards, ensuring usability and conformance across the enterprise. Your role will involve developing data strategies, ensuring vocabulary consistency, and managing data transformations through intricate analytical relationships and access paths, including data mappings at the data-field level. Collaborating with Product Management and Business stakeholders, you will identify and evaluate data sources necessary to achieve project and business objectives. Working closely with Tech Leads and Product Architects, you will gain insights into end-to-end data implications, data integration, and the functioning of business systems. Additionally, you will collaborate with DQ Leads to address data integrity improvements and quality resolutions at the source. This role requires domain knowledge in supply chain, retail, or inventory management. The critical skills needed for this position include a strong understanding of various software platforms and development technologies, proficiency in SQL, RDBMS, Data Lakes, and Warehouses, and knowledge of the Hadoop ecosystem, Azure, ADLS, Kafka, Apache Delta, and Databricks/Spark. Experience with data modeling tools like ERStudio or Erwin would be advantageous. Effective collaboration with Product Managers, Technology teams, and Business Partners, along with familiarity with Agile and DevOps techniques, is essential. Excellent communication skills, both written and verbal, are also key for success in this role. Preferred qualifications for this position include a bachelor's degree in business information technology, computer science, or a related discipline. This is a full-time position located in Bangalore, Bengaluru, Delhi, Kolkata, or Navi Mumbai. If you meet these requirements and are interested in this opportunity, please apply online. The digitalxnode evaluation team will review your resume, and if your profile is selected, they will reach out to you for further steps. We will retain your information in our database for future job openings.,

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

We are looking for a talented Python QA Automation Engineer with expertise in cloud technologies, specifically Google Cloud Platform (GCP). As a Python QA Automation Engineer, you will be responsible for designing, implementing, and maintaining automated testing frameworks to ensure the quality and reliability of software applications deployed on GCP. This role requires a strong background in Python programming, QA automation, and cloud-based environments. You will collaborate with internal teams to solve complex problems in quality and development, while gaining a deep understanding of networking and access technologies in the Cloud. Your responsibilities will include leading or contributing to engineering efforts, from planning to execution, to address engineering challenges effectively. To be successful in this role, you should have 4 to 8 years of experience in test development and automation tools development. You will design and build advanced automated testing frameworks, tools, and test suites. Proficiency in GoLang programming, experience with Google Cloud Platform, Kubernetes, Docker, Helm, Ansible, and building internal tools are essential. Additionally, you should have expertise in backend testing, creating test cases and test plans, and defining optimal test suites for various testing scenarios. Experience in CI/CD pipelines, Python programming, Linux environments, PaaS and/or SaaS platforms, and the Hadoop ecosystem is advantageous. A solid understanding of computer science fundamentals and data structures is required. Excellent communication and collaboration skills are necessary for effective teamwork. Benefits of joining our team include a competitive salary and benefits package, talent development opportunities, exposure to cutting-edge technologies, and various employee engagement initiatives. We are committed to fostering diversity and inclusion in the workplace, offering hybrid work options, flexible hours, and accessible facilities for employees with disabilities. If you are ready to accelerate your growth professionally and personally, impact the world with innovative technologies, and thrive in a diverse and inclusive environment, join us at Persistent. Unlock your full potential and embark on a rewarding career journey with us.,
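Below is a minimal, hedged pytest sketch of the kind of backend test automation this posting describes; the base URL, endpoints, and response fields are hypothetical stand-ins rather than any real internal API.

```python
import pytest
import requests

BASE_URL = "https://api.example.internal"  # hypothetical service under test

@pytest.fixture
def session():
    """Shared HTTP session so connection setup happens once per test."""
    with requests.Session() as s:
        s.headers.update({"Accept": "application/json"})
        yield s

def test_health_endpoint_returns_ok(session):
    response = session.get(f"{BASE_URL}/healthz", timeout=5)
    assert response.status_code == 200
    assert response.json().get("status") == "ok"

@pytest.mark.parametrize("cluster", ["dev", "staging"])
def test_cluster_listing_filters_by_environment(session, cluster):
    response = session.get(f"{BASE_URL}/clusters", params={"env": cluster}, timeout=5)
    assert response.status_code == 200
    assert all(item["env"] == cluster for item in response.json()["items"])
```

In a CI/CD pipeline, tests like these would run against a freshly deployed environment and gate promotion to the next stage.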

Posted 1 month ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

The Applications Development Senior Programmer Analyst position is a vital role where you will participate in establishing and implementing new or revised application systems and programs in collaboration with the Technology team. Your main goal will be to contribute to applications systems analysis and programming activities. Your responsibilities will include conducting tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, model development, and implementing new or revised applications systems and programs to meet specific business needs or user areas. You will be responsible for monitoring and controlling all phases of the development process including analysis, design, construction, testing, and implementation. Furthermore, providing user and operational support on applications to business users will also be part of your role. You will need to utilize your in-depth specialty knowledge of applications development to analyze complex problems/issues, evaluate business processes, system processes, and industry standards, and make evaluative judgments. It will be your responsibility to recommend and develop security measures in post-implementation analysis of business usage to ensure successful system design and functionality. Consulting with users/clients and other technology groups on issues, recommending advanced programming solutions, and assisting in the installation of customer exposure systems will also be part of your duties. Additionally, you will ensure that essential procedures are followed, help define operating standards and processes, and serve as an advisor or coach to new or lower-level analysts. You will be expected to operate with a limited level of direct supervision, exercise independence of judgment and autonomy, and act as a Subject Matter Expert (SME) to senior stakeholders and/or other team members. Your qualifications should include 8+ years of Development experience with expertise in Hadoop Ecosystem, Java Server-side development, Scala programming, Spark expertise, Data Analysis using SQL, a financial background, Python, Linux, proficiency in Reporting Tools like Tableau, Stakeholder Management, and a history of delivering against agreed objectives. You should also possess the ability to multitask, work under pressure, pick up new concepts and apply knowledge, demonstrate problem-solving skills, have an enthusiastic and proactive approach with a willingness to learn, and have excellent analytical and process-based skills. Ideally, you should hold a Bachelor's degree or equivalent experience. Please note that this job description provides a high-level review of the types of work performed, and other job-related duties may be assigned as required.,

Posted 1 month ago

Apply

3.0 - 8.0 years

0 Lacs

Pune, Maharashtra

On-site

You should have strong experience in PySpark, Python, Unix scripting, Spark SQL, and Hive. You must be proficient in writing SQL queries and creating views, and possess excellent oral and written communication skills. Prior experience in the Insurance domain would be beneficial. A good understanding of the Hadoop ecosystem, including HDFS, MapReduce, Pig, Hive, Oozie, and YARN, is required. Knowledge of AWS services such as Glue, S3, Lambda, Step Functions, and EC2 is essential. Experience in data migration from platforms like Hive/S3 to Databricks is a plus. You should be able to prioritize, plan, organize, and manage multiple tasks efficiently while delivering high-quality work. As a candidate, you should have 6-8 years of technical experience in PySpark and AWS (Glue, EMR, Lambda, Step Functions, S3), with at least 3 years of experience in Big Data/ETL using Python, Spark, and Hive, along with 3+ years of experience in AWS. Your primary key skills should include PySpark, AWS (Glue, EMR, Lambda, Step Functions, S3), and Big Data with Python, Spark, and Hive experience. Exposure to Big Data migration is also important. Secondary skills that would be beneficial for this role include Informatica BDM/PowerCenter, Databricks, and MongoDB.

Posted 1 month ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a skilled candidate for this position, you should possess a minimum of 8 to 10 years of experience in Java, REST API, and Spring Boot. Additionally, you must have hands-on experience with AngularJS, ReactJS, or VueJS. A Bachelor's degree or higher in computer science, data science, or a related field is required. Your role will involve working with data cleaning, visualization, and reporting, requiring practical experience in these areas. Previous exposure to an agile environment is essential for success in this position. Your excellent analytical and problem-solving skills will be key assets in meeting the job requirements. In addition to the mandatory qualifications, familiarity with the Hadoop ecosystem and experience with AWS (EMR) would be advantageous. Ideally, you should have a minimum of 2 years of experience with real-time data stream platforms like Kafka and Spark Streaming. Excellent communication and interpersonal skills will be necessary for effective collaboration within the team and with stakeholders.

Posted 1 month ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Java Developer, you will be responsible for utilizing your 8 to 10 years of experience in Java, REST API, and Spring Boot to develop efficient and scalable solutions. Your expertise in AngularJS, ReactJS, or VueJS will be essential for creating dynamic and interactive user interfaces. A Bachelor's degree or higher in computer science, data science, or a related field is required to ensure a strong foundation in software development. Your role will involve hands-on experience with data cleaning, visualization, and reporting, enabling you to contribute to data-driven decision-making processes. Working in an agile environment, you will apply your excellent analytical and problem-solving skills to address complex technical challenges effectively. Your communication and interpersonal skills will be crucial for collaborating with team members and stakeholders. Additionally, familiarity with the Hadoop ecosystem and experience with AWS (EMR) would be advantageous. Having at least 2 years of relevant experience with real-time data stream platforms like Kafka and Spark Streaming will further enhance your capabilities in building real-time data processing solutions. If you are a proactive and innovative Java Developer looking to work on cutting-edge technologies and contribute to impactful projects, this role offers an exciting opportunity for professional growth and development.

Posted 1 month ago

Apply