
548 HBase Jobs - Page 6

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

2.0 - 6.0 years

7 - 11 Lacs

Tiruvannamalai, Chennai, Vellore

Work from Office

We are looking for a highly skilled and experienced Relationship Manager to join our team at Equitas Small Finance Bank. The ideal candidate will have 2-7 years of experience in the BFSI industry, with expertise in Assets, Inclusive Banking, SBL, Mortgages, Standalone Merchant OD, and Relationship Management.

Roles and Responsibilities: Manage relationships with merchants and stakeholders to achieve business objectives. Develop and implement strategies to increase sales and revenue growth. Build and maintain strong relationships with existing clients to ensure customer satisfaction. Identify new business opportunities and expand the client base. Collaborate with internal teams to resolve customer complaints and issues. Analyze market trends and competitor activity to stay ahead in the market.

Job Requirements: Strong knowledge of Assets, Inclusive Banking, SBL, Mortgages, Standalone Merchant OD, and Relationship Management. Excellent communication and interpersonal skills. Ability to work in a fast-paced environment and meet sales targets. Strong analytical and problem-solving skills. Experience in managing merchant relationships and driving business growth. Familiarity with banking products and services is an added advantage.

Location: Chennai, Vellore, Tiruvannamalai, Tirupathur

Posted 2 weeks ago

Apply

1.0 - 2.0 years

3 - 4 Lacs

Thanjavur, Tamil Nadu

Work from Office

We are looking for a highly motivated and experienced Branch Receivable Officer to join our team at Equitas Small Finance Bank. The ideal candidate will have 1-2 years of experience in the BFSI industry, preferably with knowledge of Assets, Inclusive Banking, SBL, Mortgages, and Receivables.

Roles and Responsibilities: Manage and oversee branch receivables operations for timely and accurate payments. Develop and implement strategies to improve receivables management and reduce delinquencies. Collaborate with cross-functional teams to resolve customer complaints and issues. Analyze and report on receivables performance metrics to senior management. Ensure compliance with regulatory requirements and internal policies. Maintain accurate records and reports of receivables transactions.

Job Requirements: Strong understanding of BFSI industry trends and regulations. Excellent communication and interpersonal skills. Ability to work in a fast-paced environment and meet deadlines. Proficiency in MS Office and other relevant software applications. Strong analytical and problem-solving skills. Experience working with branch receivables operations is preferred.

Location: Inclusive Banking - SBL, South, Tamil Nadu, Kumbakonam, Tanjore, Tanjavur 1 (Area Office), Thanjavur, 1279, Tanjavur 1

Posted 2 weeks ago

Apply

10.0 - 15.0 years

32 - 37 Lacs

Hubli, Karnataka

Work from Office

We are looking for a skilled Branch Receivables Manager to join our team at Equitas Small Finance Bank. The ideal candidate will have 10 years of experience in the BFSI industry, with expertise in Assets, Inclusive Banking, SBL, Mortgages, and Receivables.

Roles and Responsibilities: Manage and oversee branch receivables operations for efficient cash flow. Develop and implement strategies to improve receivables management processes. Collaborate with cross-functional teams to resolve customer issues and enhance service quality. Analyze and report on receivables performance metrics to senior management. Ensure compliance with regulatory requirements and internal policies. Lead and motivate a team of receivables professionals to achieve business objectives.

Job Requirements: Strong knowledge of BFSI industry trends and regulations. Proven experience in managing branch receivables operations. Excellent leadership and communication skills. Ability to analyze complex data sets and make informed decisions. Strong problem-solving and conflict resolution skills. Experience working with financial software and systems.

Location: Inclusive Banking - SBL, South, Karnataka, Karnataka, Hubli, Hubli, Karnataka, 3053, Dharwad

Posted 2 weeks ago

Apply

2.0 - 7.0 years

4 - 9 Lacs

Thuraiyur, Tamil Nadu

Work from Office

We are looking for a highly skilled and experienced Branch Receivable Officer to join our team at Equitas Small Finance Bank. The ideal candidate will have 2 to 7 years of experience in the BFSI industry, with expertise in Assets, Inclusive Banking, SBL, Mortgages, and Receivables.

Roles and Responsibilities: Manage and oversee branch receivables operations for efficient cash flow. Develop and implement strategies to improve receivables management. Collaborate with cross-functional teams to resolve customer issues and enhance service quality. Analyze and report on receivables performance metrics to senior management. Ensure compliance with regulatory requirements and internal policies. Train and guide junior staff members to improve their skills and knowledge.

Job Requirements: Strong understanding of BFSI industry trends and regulations. Excellent communication and interpersonal skills. Ability to work in a fast-paced environment and meet deadlines. Proficiency in MS Office and other relevant software applications. Strong analytical and problem-solving skills. Experience in managing and leading a team of receivables professionals.

Location: Inclusive Banking - SBL, South, Tamil Nadu, Kumbakonam, Perambalur, Thuraiyur, Musiri, 1060, Musiri

Posted 2 weeks ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Hubli, Karnataka

Work from Office

We are looking for a highly skilled and experienced Branch Receivable Officer to join our team at Equitas Small Finance Bank. The ideal candidate will have 4 to 9 years of experience in the BFSI industry, with expertise in Assets, Inclusive Banking, SBL, Mortgages, and Receivables.

Roles and Responsibilities: Manage and oversee branch receivables operations for efficient cash flow. Develop and implement strategies to improve receivables management. Collaborate with cross-functional teams to resolve customer issues and enhance service quality. Analyze and report on receivables performance metrics to senior management. Ensure compliance with regulatory requirements and internal policies. Train and guide junior staff members to improve their skills and knowledge.

Job Requirements: Strong understanding of BFSI industry trends and regulations. Experience in managing assets, inclusive banking, SBL, mortgages, and receivables. Excellent communication and interpersonal skills. Ability to work in a fast-paced environment and meet deadlines. Strong analytical and problem-solving skills. Proficiency in Microsoft Office and other relevant software applications.

Location: Inclusive Banking - SBL, South, Karnataka, Karnataka, Hubli, Hubli, Karnataka, 3067, Belgaum

Posted 2 weeks ago

Apply

1.0 - 4.0 years

3 - 6 Lacs

Chitradurga, Karnataka

Work from Office

We are looking for a highly skilled and experienced Branch Receivable Officer to join our team at Equitas Small Finance Bank. The ideal candidate will have 1-4 years of experience in the BFSI industry, preferably with a background in Assets, Inclusive Banking, SBL, Mortgages, or Receivables.

Roles and Responsibilities: Manage and oversee branch receivables operations for efficient cash flow. Develop and implement strategies to improve receivables management. Collaborate with cross-functional teams to resolve customer issues and enhance service quality. Analyze and report on receivables performance metrics to senior management. Ensure compliance with regulatory requirements and internal policies. Train and guide junior staff members on receivables procedures and best practices.

Job Requirements: Strong knowledge of BFSI industry trends and regulations. Experience in managing branch receivables operations and improving cash flow. Excellent communication and interpersonal skills. Ability to work in a fast-paced environment and meet deadlines. Strong analytical and problem-solving skills. Proficiency in Microsoft Office and other relevant software applications.

Location: Inclusive Banking - SBL, South, Karnataka, Karnataka, Chitradurga, Bellary, Karnataka, 3071, Chitradurga

Posted 2 weeks ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Kolhapur, Pune, Mondha

Work from Office

We are looking for a skilled professional to join our team as an Area Receivables Manager at Equitas Small Finance Bank. The ideal candidate will have 3 to 8 years of experience in the BFSI industry, with expertise in Inclusive Banking, SBL, and Mortgages.

Roles and Responsibilities: Manage and oversee the receivables portfolio for the assigned area, ensuring timely recovery of outstanding amounts. Develop and implement strategies to improve collection efficiency and reduce delinquencies. Collaborate with cross-functional teams to resolve customer complaints and disputes. Analyze market trends and competitor activity to identify opportunities for growth and improvement. Monitor and report on key performance indicators, such as delinquency rates and collection efficiency. Provide guidance and support to junior team members to enhance their skills and knowledge.

Job Requirements: Strong understanding of Inclusive Banking principles and practices. Experience working with SBL and other regulatory bodies. Proven track record of managing and analyzing large datasets, particularly in mortgages and receivables. Excellent communication and interpersonal skills, with the ability to work effectively with customers and stakeholders. Strong problem-solving and analytical skills, with attention to detail and the ability to meet deadlines. Ability to work in a fast-paced environment and adapt to changing priorities and deadlines.

Posted 2 weeks ago

Apply

1.0 - 2.0 years

7 - 11 Lacs

Kumbakonam, Tiruvarur

Work from Office

We are looking for a highly skilled and experienced Relationship Manager to join our team at Equitas Small Finance Bank. The ideal candidate will have 1-2 years of experience in the BFSI industry, preferably with a background in Inclusive Banking, SBL, Mortgages, or Standalone Merchant OD.

Roles and Responsibilities: Manage relationships with merchants and other stakeholders to achieve business objectives. Develop and implement strategies to increase sales and revenue growth. Build and maintain strong relationships with existing clients to ensure customer satisfaction. Identify new business opportunities and expand the client base. Collaborate with internal teams to resolve customer complaints and issues. Analyze market trends and competitor activity to stay ahead in the market.

Job Requirements: Strong knowledge of Inclusive Banking, SBL, Mortgages, or Standalone Merchant OD products. Excellent communication and interpersonal skills to build strong relationships. Ability to work in a fast-paced environment and meet sales targets. Strong analytical and problem-solving skills to analyze market trends. Experience working with merchant OD or similar roles is preferred. Ability to work independently and as part of a team to achieve business objectives.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

14 - 17 Lacs

Pune

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines and workflows, and implementing solutions that address the client's needs.

Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our client's business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise: Must have 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Developed Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (like a rules engine). Developed Python code to gather data from HBase and designed the solution to implement it using PySpark. Apache Spark DataFrames/RDDs were used to apply business transformations, and Hive context objects were used to perform read/write operations.

Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
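A minimal, illustrative PySpark sketch of the Hive-backed DataFrame work this listing describes: the table and column names are hypothetical, and a Hive-enabled SparkSession stands in for the older HiveContext pattern the listing mentions.

```python
# Read a Hive table, apply a business transformation with DataFrames,
# and write the result back to a curated Hive table.
# Table and column names here are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("sales-aggregation")
    .enableHiveSupport()  # gives DataFrame access to Hive tables
    .getOrCreate()
)

orders = spark.table("staging.orders")

# Business transformation: daily revenue per region, excluding cancelled orders
daily_revenue = (
    orders
    .filter(F.col("status") != "CANCELLED")
    .groupBy("region", F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"))
)

daily_revenue.write.mode("overwrite").saveAsTable("curated.daily_revenue")
```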

Posted 2 weeks ago

Apply

4.0 - 6.0 years

3 - 7 Lacs

Bengaluru

Work from Office

We're looking for talented software engineers with a passion for learning and a systems-oriented view of software engineering to join our team, working full-time on the core of our proprietary database based on Apache Cassandra. In this position you will play an important role on a complex infrastructure project used by many major organizations across the world, collaborating with fellow engineers to improve the project. If you want to work on the most interesting problems of your career with the most collaborative and skilled peers you've ever worked with, this might be the role for you!

You'll take on a critical role on the core of our platform, working on enhancements and bug fixes for our multi-model distributed database. Engineers on this team collaborate extensively with internal teams across the company to coordinate releases, support existing customers through defect fixes and improvements, and review and advise on documentation for the project. We're looking for engineers who have a knack for untangling complex knots in codebases and concurrent systems, with expertise in a C-lineage language (Java, Scala, Kotlin, C#, C++, Rust, etc.). A genuine passion for quality, elegance, performance, and simplicity in solutions and code is critical in this role. If you're comfortable navigating multi-threaded, large distributed systems at scale, this will be a great fit.

What you will do: Author, debug, and improve code in the core of DataStax Enterprise Cassandra. Collaborate actively and independently with other engineers, the field team, and support members. Work on maintenance, bug fixes, new feature development, and improvements to the platform. Help prepare different teams for DSE releases (documentation, field, etc.).

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise: 4-6 years of relevant experience. Expertise in at least one C-lineage language that supports OOP and FP (Java, Scala, Kotlin, C#, C++, Rust, etc.). Ability to work autonomously, self-manage your time, and, to an extent, self-direct when given high-level strategic priorities. Ability to communicate clearly with peers and stakeholders verbally and via text (video calls, JIRA, Slack, email). Demonstrated ability to focus on analytical tasks such as finding issues in a huge, distributed system. A desire to learn and grow daily, both technically and interpersonally. An open-minded and collaborative attitude.

Preferred technical and professional experience: Expertise in Java and Scala programming on the JVM. Experience with concurrency, memory management, and I/O. Experience with Linux or other Unix-like systems. Experience with distributed databases, DataStax Enterprise or Apache Cassandra in particular. Experience with distributed computing platforms, Apache Spark in particular.

Posted 2 weeks ago

Apply

9.0 - 14.0 years

4 - 8 Lacs

Bengaluru

Work from Office

We're looking for talented software engineers with a passion for learning and a systems-oriented view of software engineering to join our team, working full-time on the core of our proprietary database based on Apache Cassandra. In this position you will play an important role on a complex infrastructure project used by many major organizations across the world, collaborating with fellow engineers to improve the project. If you want to work on the most interesting problems of your career with the most collaborative and skilled peers you've ever worked with, this might be the role for you!

You'll take on a critical role on the core of our platform, working on enhancements and bug fixes for our multi-model distributed database. Engineers on this team collaborate extensively with internal teams across the company to coordinate releases, support existing customers through defect fixes and improvements, and review and advise on documentation for the project. We're looking for engineers who have a knack for untangling complex knots in codebases and concurrent systems, with expertise in a C-lineage language (Java, Scala, Kotlin, C#, C++, Rust, etc.). A genuine passion for quality, elegance, performance, and simplicity in solutions and code is critical in this role. If you're comfortable navigating multi-threaded, large distributed systems at scale, this will be a great fit.

What you will do: Author, debug, and improve code in the core of DataStax Enterprise Cassandra. Collaborate actively and independently with other engineers, the field team, and support members. Work on maintenance, bug fixes, new feature development, and improvements to the platform. Help prepare different teams for DSE releases (documentation, field, etc.).

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise: 7-9+ years of relevant work experience. Expertise in at least one C-lineage language that supports OOP and FP (Java, Scala, Kotlin, C#, C++, Rust, etc.). Ability to work autonomously, self-manage your time, and, to an extent, self-direct when given high-level strategic priorities. Ability to communicate clearly with peers and stakeholders verbally and via text (video calls, JIRA, Slack, email). Demonstrated ability to focus on analytical tasks such as finding issues in a huge, distributed system. A desire to learn and grow daily, both technically and interpersonally. An open-minded and collaborative attitude.

Preferred technical and professional experience: Expertise in Java and Scala programming on the JVM. Experience with concurrency, memory management, and I/O. Experience with Linux or other Unix-like systems. Experience with distributed databases, DataStax Enterprise or Apache Cassandra in particular. Experience with distributed computing platforms, Apache Spark in particular.

Posted 2 weeks ago

Apply

5.0 - 7.0 years

14 - 18 Lacs

Mumbai

Work from Office

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities: Build data pipelines to ingest, process, and transform data from files, streams, and databases. Process the data with Spark, Python, PySpark, Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS. Develop efficient software code for multiple use cases built on the platform, leveraging the Spark framework with Python or Scala and big data technologies. Develop streaming pipelines. Work with Hadoop/Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise: 5-7+ years of total experience in data management (DW, DL, data platform, lakehouse) and data engineering skills. Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala. Minimum 3 years of experience on cloud data platforms on Azure. Experience with Databricks, Azure HDInsight, Azure Data Factory, Synapse, and SQL Server DB. Exposure to streaming solutions and message brokers like Kafka. Experience with Unix/Linux commands and basic work experience in shell scripting.

Preferred technical and professional experience: Certification in Azure and Databricks, or Cloudera Spark certified developers.
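The streaming-pipeline requirement above maps naturally to Spark Structured Streaming. Here is a brief, hypothetical sketch that consumes a Kafka topic and lands it on Azure storage as Parquet; the broker address, topic name, and storage paths are assumptions, not from the listing.

```python
# Consume events from Kafka with Spark Structured Streaming and write Parquet.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # assumption: broker address
    .option("subscribe", "orders")                      # assumption: topic name
    .load()
)

# Kafka delivers key/value as binary; cast the payload before processing
parsed = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "abfss://lake@account.dfs.core.windows.net/raw/orders")  # hypothetical ADLS path
    .option("checkpointLocation", "/tmp/checkpoints/orders")
    .start()
)
query.awaitTermination()
```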

Posted 2 weeks ago

Apply

3.0 - 7.0 years

10 - 14 Lacs

Pune

Work from Office

The Developer leads cloud application development and deployment. A developer's responsibility is to lead the execution of a project by working with a senior-level resource on assigned development/deployment activities, and to design, build, and maintain cloud environments focusing on uptime, access, control, and network security using automation and configuration management tools.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise: Strong proficiency in Java, Spring Framework, Spring Boot, and RESTful APIs, with an excellent understanding of OOP and design patterns. Strong knowledge of ORM tools like Hibernate or JPA and Java-based microservices frameworks, with hands-on experience in Spring Boot microservices. Primary skills: Core Java, Spring Boot, Java/J2EE, Microservices, the Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.), and Spark; Python is good to have. Strong knowledge of microservice logging, monitoring, debugging, and testing. In-depth knowledge of relational databases (e.g., MySQL). Experience with container platforms such as Docker and Kubernetes, and messaging platforms such as Kafka or IBM MQ. Good understanding of test-driven development. Familiarity with Ant, Maven, or another build automation framework. Good knowledge of basic UNIX commands.

Preferred technical and professional experience: Experience in concurrent design and multi-threading, in addition to the primary skills listed above.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

As an Associate Product Support Engineer focused on Hadoop distributed systems, you will be responsible for providing technical support to enterprise clients. Your main tasks will involve troubleshooting and resolving issues within Hadoop environments, ensuring the stability and reliability of our customers' infrastructures. Working closely with experienced engineers, you will collaborate with customers to understand their problems and deliver effective solutions with empathy and professionalism.

Your key responsibilities will include addressing customer issues related to Hadoop clusters and core components (HDFS, YARN, MapReduce, Hive, etc.), and performing basic administrative tasks such as installation, configuration, and upgrades. You will document troubleshooting steps and solutions for knowledge sharing.

To excel in this role, you should have a minimum of 3 years of hands-on experience as a Hadoop Administrator or in a similar support role. A strong understanding of Hadoop architecture and core components, along with proven experience in troubleshooting Hadoop-related issues, is essential. Proficiency in Linux operating systems, good communication skills, and excellent problem-solving abilities are also required. Experience with components like Spark, NiFi, and HBase, as well as exposure to data security and data engineering principles within Hadoop environments, will be advantageous.

Furthermore, prior experience in a customer-facing technical support role and familiarity with tools like Salesforce and Jira are considered beneficial. Knowledge of automation and scripting languages like Python and Bash is a plus; a small example follows this listing. This role offers an opportunity for candidates passionate about Hadoop administration and customer support to deepen their expertise in a focused, high-impact environment.
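As a taste of the Python/Bash automation mentioned above, here is a small, hypothetical script that wraps the stock `hdfs dfsadmin -report` command to flag dead DataNodes. The parsing is a sketch, not a supported tool, and assumes the `hdfs` CLI is on the PATH.

```python
#!/usr/bin/env python3
# Sketch: run the standard HDFS admin report and alert on dead DataNodes.
import subprocess
import sys

def hdfs_report() -> str:
    # `hdfs dfsadmin -report` is the stock Hadoop admin command
    result = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def main() -> None:
    report = hdfs_report()
    for line in report.splitlines():
        # The report prints a "Dead datanodes (N):" section header
        if line.lower().startswith("dead datanodes"):
            print(f"ALERT: {line.strip()}")
            sys.exit(1)
    print("HDFS report shows no dead datanodes.")

if __name__ == "__main__":
    main()
```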

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

The Applications Development Senior Programmer Analyst position is an intermediate-level role where you will be responsible for participating in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. Your main objective will be to contribute to applications systems analysis and programming activities.

Your responsibilities will include conducting tasks related to feasibility studies, time and cost estimates, IT planning, risk technology, applications development, model development, and establishing and implementing new or revised applications systems and programs to meet specific business needs or user areas. You will also monitor and control all phases of the development process, provide user and operational support on applications to business users, and recommend and develop security measures post-implementation to ensure successful system design and functionality.

Furthermore, you will utilize in-depth specialty knowledge of applications development to analyze complex problems and issues, evaluate business processes, system processes, and industry standards, and make evaluative judgments. You will consult with users, clients, and other technology groups on issues, recommend advanced programming solutions, and ensure essential procedures are followed while defining operating standards and processes.

As an Applications Development Senior Programmer Analyst, you will also serve as an advisor or coach to new or lower-level analysts, operate with a limited level of direct supervision, and exercise independence of judgment and autonomy. You will act as a subject matter expert to senior stakeholders and/or other team members and appropriately assess risk when making business decisions.

Qualifications:
Must Have: 8+ years of application/software development/maintenance. 5+ years of experience with big data technologies like Apache Spark, Hive, and Hadoop. Knowledge of the Python, Java, or Scala programming language. Experience with Java, web services, XML, JavaScript, microservices, etc. Strong technical knowledge of Apache Spark, Hive, SQL, and the Hadoop ecosystem. Experience with developing frameworks, utility services, and code quality tools. Ability to work independently, multi-task, and take ownership of various analyses. Strong analytical and communication skills. Banking domain experience is a must.
Good to Have: Work experience in Citi or regulatory reporting applications. Hands-on experience with cloud technologies, AI/ML integration, and the creation of data pipelines. Experience with vendor products like Tableau, Arcadia, Paxata, and KNIME. Experience with API development and data formats.

Education: Bachelor's degree/University degree or equivalent experience.

This job description provides a high-level overview of the work performed. Other job-related duties may be assigned as required.

Posted 2 weeks ago

Apply

3.0 - 5.0 years

10 - 20 Lacs

Bengaluru

Work from Office

Required Skills: 3-5 years of experience in data engineering or related roles. Experience with languages such as Python, Java, Ruby, C++, Perl, SQL, Hive, Scala, Spark, and Kafka. Experience in big data environments on Azure, AWS, or Google Cloud using big data technologies such as HDFS, Pig, HBase, and Hive. Experience with data warehousing and SQL and NoSQL database systems. Experience with data modeling and enterprise Extraction, Transformation and Load (ETL) tools. Experience with data APIs. Experience with analytical problem solving. Proficiency in complex SQL query optimization, performance tuning, and enterprise ETL tools (Apache Airflow, dbt, Apache Beam, Luigi), with the ability to design and implement scalable data transformation workflows; a minimal Airflow sketch follows this listing.

Education/Certifications: Bachelor's degree in Computer Science, Information Management, or a related field.

Preferred Skills: Work experience in the healthcare industry, including clinical, financial, and operational data.
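As referenced above, a minimal Airflow sketch of a scheduled ETL workflow: the DAG and task names are hypothetical, and it assumes Airflow 2.4+ for the `schedule` argument.

```python
# A linear extract -> transform -> load DAG scheduled daily.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    print("pull source rows")   # placeholder for a real extract step

def transform() -> None:
    print("clean and reshape")  # placeholder for a real transform step

def load() -> None:
    print("write to warehouse") # placeholder for a real load step

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # dependency chain: extract, then transform, then load
```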

Posted 2 weeks ago

Apply

6.0 - 11.0 years

19 - 27 Lacs

Haryana

Work from Office

About Company: Founded in 2011, ReNew is one of the largest renewable energy companies globally, with a leadership position in India. Listed on Nasdaq under the ticker RNW, ReNew develops, builds, owns, and operates utility-scale wind energy projects, utility-scale solar energy projects, utility-scale firm power projects, and distributed solar energy projects. In addition to being a major independent power producer in India, ReNew is evolving to become an end-to-end decarbonization partner providing solutions in a just and inclusive manner in the areas of clean energy, green hydrogen, value-added energy offerings through digitalisation, storage, and carbon markets that increasingly are integral to addressing climate change. With a total capacity of more than 13.4 GW (including projects in the pipeline), ReNew's solar and wind energy projects are spread across 150+ sites, with a presence spanning 18 states in India, contributing to 1.9% of India's power capacity. Consequently, this has helped to avoid 0.5% of India's total carbon emissions and 1.1% of India's total power sector emissions. In the over 10 years of its operation, ReNew has generated almost 1.3 lakh jobs, directly and indirectly. ReNew has achieved market leadership in the Indian renewable energy industry against the backdrop of the Government of India's policies to promote growth of this sector. ReNew's current group of stockholders contains several marquee investors including CPP Investments, Abu Dhabi Investment Authority, Goldman Sachs, GEF SACEF, and JERA. Its mission is to play a pivotal role in meeting India's growing energy needs in an efficient, sustainable, and socially responsible manner. ReNew stands committed to providing clean, safe, affordable, and sustainable energy for all and has been at the forefront of leading climate action in India.

Job Description - Key responsibilities:
1. Understand, implement, and automate ETL pipelines to industry standards
2. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, designing infrastructure for greater scalability, etc.
3. Develop, integrate, test, and maintain existing and new applications
4. Design and create data pipelines (data lake / data warehouses) for real-world energy analytics solutions
5. Expert-level proficiency in Python (preferred) for automating everyday tasks
6. Strong understanding of and experience in distributed computing frameworks, particularly Spark, Spark SQL, Kafka, Spark Streaming, Hive, Azure Databricks, etc.
7. Some experience with other leading cloud platforms, preferably Azure
8. Hands-on experience with Azure Data Factory, Logic Apps, Analysis Services, Azure Blob Storage, etc.
9. Ability to work in a team in an agile setting, familiarity with JIRA, and a clear understanding of how Git works
10. Must have 5-7 years of experience

Posted 2 weeks ago

Apply

9.0 - 12.0 years

15 - 20 Lacs

Chennai

Work from Office

Job Title: Data Engineer Lead / Architect (ADF)
Experience: 9-12 Years
Location: Remote / Hybrid

Role and Responsibilities: Talk to client stakeholders and understand the requirements for building their data warehouse / data lake / data lakehouse. Design, develop, and maintain data pipelines in Azure Data Factory (ADF) for ETL from on-premises and cloud-based sources. Design, develop, and maintain data warehouses and data lakes in Azure. Run large data platforms and other related programs to provide business intelligence support. Design and develop data models to support business intelligence solutions. Implement best practices in data modelling and data warehousing. Troubleshoot and resolve issues related to ETL and data connections.

Skills Required: Excellent written and verbal communication skills. Excellent knowledge of and experience in ADF. Well versed with ADLS Gen 2. Knowledge of SQL for data extraction and transformation. Ability to work with various data sources (Excel, SQL databases, APIs, etc.). Knowledge of SAS would be an added advantage. Knowledge of Power BI would be an added advantage.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

7 - 11 Lacs

Pune

Work from Office

Experience with ETL processes and data warehousing. Proficient in SQL and Python/Java/Scala. Team lead experience.

Posted 2 weeks ago

Apply

8.0 - 13.0 years

20 - 35 Lacs

Noida, Pune, Bengaluru

Work from Office

We are hiring a Big Data Lead for one of our clients based in Noida / Indore / Bangalore / Hyderabad / Pune for a full-time position, and we are willing to hire immediately. Please share your resume with rana@enormousenterprise.in / anjana@enormousenterprise.in.

Interview Mode: Video interview followed by a second video interview or an F2F interview.
Experience Level: 7 to 10 years, with a minimum of 2-5 years of lead experience.

Position Summary: We are looking for candidates with hands-on experience in big data or cloud technologies.

Must-have technical skills: 7 to 10 years of experience. Data ingestion, processing, and orchestration knowledge. Expertise and hands-on experience with Spark DataFrames and Hadoop ecosystem components - Must Have. Good hands-on experience with any of the clouds (AWS/Azure/GCP) - Must Have. Good knowledge of PySpark (Spark SQL) - Must Have. Good knowledge of shell scripting and Python - Good to Have. Good knowledge of SQL - Good to Have. Good knowledge of migration projects on Hadoop - Good to Have. Good knowledge of a workflow engine like Oozie or Autosys - Good to Have. Good knowledge of agile development - Good to Have. Passion for exploring new technologies - Good to Have. An automation-oriented approach - Good to Have. Good communication skills - Must Have.

Roles & Responsibilities: Lead the technical implementation of data warehouse modernization projects for Impetus. Design and develop applications on cloud technologies. Lead technical discussions with internal and external stakeholders. Resolve technical issues for the team. Ensure that the team completes all tasks and activities as planned. Code development.

Posted 2 weeks ago

Apply

4.0 - 5.0 years

6 - 7 Lacs

Bengaluru

Work from Office

Data Engineer

Skills required: Big data workflows (ETL/ELT), hands-on Python, hands-on SQL, any cloud (GCP and BigQuery preferred), Airflow (good knowledge of Airflow features, operators, scheduling, etc.)
Skills that would add an advantage: dbt, Kafka
Experience level: 4-5 years

NOTE: Candidates will take a coding test (Python and SQL) during the interview process. This is conducted through CoderPad; the panel sets it up at run time.
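For a sense of the GCP/BigQuery plus SQL work the listing names, here is a short, hypothetical sketch that runs a parameterized query with the google-cloud-bigquery client; the project, dataset, and table names are assumptions.

```python
# Run a parameterized BigQuery query from Python and print the results.
from google.cloud import bigquery

client = bigquery.Client()  # credentials via GOOGLE_APPLICATION_CREDENTIALS

query = """
    SELECT region, SUM(amount) AS revenue
    FROM `my_project.sales.orders`      -- hypothetical table
    WHERE order_date >= @start_date
    GROUP BY region
    ORDER BY revenue DESC
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("start_date", "DATE", "2024-01-01"),
    ]
)

for row in client.query(query, job_config=job_config).result():
    print(row.region, row.revenue)
```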

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Overall Responsibilities:
Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing the runtime of ETL processes.
Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline (see the sketch after this listing).
Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.
Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives.
Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.

Technical Skills:
PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
Scripting and Automation: Strong scripting skills in Linux.

Experience: 5-12 years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform. Proven track record of implementing data engineering best practices. Experience in data ingestion, transformation, and optimization on the Cloudera Data Platform.

Day-to-Day Activities: Design, develop, and maintain ETL pipelines using PySpark on CDP. Implement and manage data ingestion processes from various sources. Process, cleanse, and transform large datasets using PySpark. Conduct performance tuning and optimization of ETL processes. Implement data quality checks and validation routines. Automate data workflows using orchestration tools. Monitor pipeline performance and troubleshoot issues. Collaborate with team members to understand data requirements. Maintain documentation of data engineering processes and configurations.

Qualifications: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field. Relevant certifications in PySpark and Cloudera technologies are a plus.

Soft Skills: Strong analytical and problem-solving skills. Excellent verbal and written communication abilities. Ability to work independently and collaboratively in a team environment. Attention to detail and commitment to data quality.

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
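The data quality checks and validation routines called for above might look something like this minimal PySpark sketch; the input path, column names, and rules are hypothetical.

```python
# Count nulls and rule violations in a DataFrame before publishing it downstream.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

df = spark.read.parquet("/data/curated/transactions")  # hypothetical path

total = df.count()
checks = {
    "null_customer_id": df.filter(F.col("customer_id").isNull()).count(),
    "negative_amount": df.filter(F.col("amount") < 0).count(),
    "future_dated": df.filter(F.col("txn_date") > F.current_date()).count(),
}

failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    # Fail the run so the orchestrator (Oozie/Airflow) can alert on it
    raise ValueError(f"Data quality checks failed on {total} rows: {failed}")
print(f"All data quality checks passed on {total} rows.")
```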

Posted 2 weeks ago

Apply

5.0 - 10.0 years

5 - 9 Lacs

Pune

Work from Office

Where Data Does More. Join the Snowflake team.

The Technical Instructor for the Snowflake Customer Education and Training Team will be responsible for creating and delivering compelling education content and training sets that make complex concepts come alive in instructor-led classroom venues. The senior instructor will be seen as a subject matter expert and leader in transferring knowledge of Snowflake to customers, partners, and internal teams, and in accelerating their technical onboarding journey. This role will also be responsible for cross-training efforts and program management, and will help strategically ramp multiple resources within our external stakeholders. This role is a unique opportunity to contribute in a meaningful way to high-value, high-impact delivery at a very exciting time for the company. Snowflake is an innovative, high-growth, customer-focused company in a large and growing market. If you are an energetic, self-managed professional with experience teaching data courses to customers and possess excellent presentation and communication skills, we'd love to hear from you.

AS A TECHNICAL INSTRUCTOR AT SNOWFLAKE, YOU WILL: Teach a breadth of technical courses to onboard customers and partners to Snowflake, the data warehouse built for the cloud. Cross-train a breadth of technical courses to qualified individuals and resources. The scope of course concepts may include foundational and advanced courses in the discipline, including Snowflake data warehousing concepts, novel SQL capabilities, data consumption and connectivity interfaces, data integration and ingestion capabilities, database security features, database performance topics, cloud ecosystem topics, and more. Apply database and data warehousing industry/domain/technology expertise and experience during training sessions to help customers and partners ease their organizations into the Snowflake data warehouse from prior database environments. Deliver content and cross-train on delivery best practices using a variety of presentation formats, including engaging lectures, live demonstrations, and technical labs. Work with customers and partners that are investing in the train-the-trainer program to certify their selected trainers, ensuring they are well prepared and qualified to deliver the course at their organization. Bring a strong eye for design, making complex training concepts come alive in a blended educational delivery model. Work with the education content developers to help prioritize, create, integrate, and publish training materials and hands-on exercises for Snowflake end users, and drive continuous improvement of training performance. Work with additional Snowflake subject matter experts in creating new education materials and updates to keep pace with Snowflake product updates.

OUR IDEAL TECHNICAL INSTRUCTOR WILL HAVE: A strong data warehouse and data-serving platform background. Recent experience using SQL, potentially in complex workloads. 5-10 years of experience in technical content training development and delivery. A strong desire and ability to teach and train. Prior experience with other databases (e.g., Oracle, IBM Netezza, Teradata). Excellent written and verbal communication skills. An innovative and assertive mindset, with the ability to pick up new technologies. Presence: enthusiastic and high energy, but also poised, confident, and extremely professional. A track record of delivering results in a dynamic start-up environment. Experience working cross-functionally, ideally with solution architects, technical writers, and support. A strong sense of ownership and high attention to detail. A degree in a field such as Computer Science or Management Information Systems. Comfort with travel up to 75% of the time.

BONUS POINTS FOR EXPERIENCE WITH THE FOLLOWING: Creating and delivering training programs to mass audiences. Other databases (e.g., Teradata, Netezza, Oracle, Redshift). Non-relational platforms and tools for large-scale data processing (e.g., Hadoop, HBase). Common BI and data exploration tools (e.g., MicroStrategy, Business Objects, Tableau). Large-scale infrastructure-as-a-service platforms (e.g., Amazon AWS, Microsoft Azure). ETL pipeline tools. Using AWS and Microsoft services. Train-the-trainer programs. Proven success at enterprise software startups.

Snowflake is growing fast, and we're scaling our team to help enable and accelerate our growth. We are looking for people who share our values, challenge ordinary thinking, and push the pace of innovation while building a future for themselves and Snowflake. How do you want to make your impact? For jobs located in the United States, please visit the job posting on the Snowflake Careers Site for salary and benefits information: careers.snowflake.com
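A tiny, hypothetical sketch of the SQL-over-Snowflake material an instructor would demo, using the official snowflake-connector-python package; all connection parameters and the table name are placeholders.

```python
# Connect to Snowflake and run a classroom-style aggregation query.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # placeholder account identifier
    user="trainer",         # placeholder credentials
    password="***",
    warehouse="TRAINING_WH",
    database="DEMO_DB",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    cur.execute(
        "SELECT region, COUNT(*) AS orders "
        "FROM demo_orders GROUP BY region ORDER BY orders DESC"
    )
    for region, orders in cur:
        print(region, orders)
finally:
    conn.close()
```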

Posted 2 weeks ago

Apply

3.0 - 8.0 years

11 - 21 Lacs

Pune

Work from Office

Technical / Professional Skills: Hands-on development experience in Amdocs Turbo Charging and/or RLC (Rating Logic Configurator) or TC2X. Thorough understanding of charging functionalities like rating/rerating, guiding, error handling, in-memory databases, and persistence. Proficient in C++ and Java. Expertise in PL/SQL, Linux, J2EE technologies, shell scripting, and general Linux commands. Experience in developing and maintaining complex web services and microservices. Ability to quickly learn and adapt to new requirements based on project needs. Knowledge of at least one cloud service (AWS, GCP, Azure). Knowledge of PostgreSQL/Oracle SQL, HBase, Liquibase, Docker, Kafka, Kubernetes, and OpenShift.

Education and Qualifications: Bachelor's or Master's Degree in Computer Engineering, Telecommunications Engineering, or Computer Science.

Work Experience: 3-5 years of industry experience building applications.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

1 - 5 Lacs

Bengaluru

Work from Office

Job Title: AWS Data Engineer
Experience: 5-10 Years
Location: Bangalore

Technical Skills: 5+ years of experience as an AWS Data Engineer with AWS S3, Glue Catalog, Glue Crawler, Glue ETL, and Athena. Write Glue ETL jobs to convert data in AWS RDS for SQL Server and Oracle DB to Parquet format in S3 (a sketch follows this listing). Execute Glue crawlers to catalog S3 files, creating a catalog of S3 files for easier querying. Create SQL queries in Athena. Define data lifecycle management for S3 files. Strong experience in developing, debugging, and optimizing Glue ETL jobs using PySpark or Glue Studio. Ability to connect Glue ETL jobs with AWS RDS (SQL Server and Oracle) for data extraction and write transformed data into Parquet format in S3. Proficiency in setting up and managing Glue crawlers to catalog data in S3. Deep understanding of S3 architecture and best practices for storing large datasets. Experience in partitioning and organizing data for efficient querying in S3. Knowledge of the advantages of the Parquet file format for optimized storage and querying. Expertise in creating and managing the AWS Glue Data Catalog to enable structured and schema-aware querying of data in S3. Experience with Amazon Athena for writing complex SQL queries and optimizing query performance. Familiarity with creating views or transformations in Athena for business use cases. Knowledge of securing data in S3 using IAM policies, S3 bucket policies, and KMS encryption. Understanding of regulatory requirements (e.g., GDPR) and implementation of secure data handling practices.

Non-Technical Skills: Candidates need to be good team players with effective interpersonal, team-building, and communication skills, and the ability to communicate complex technology to a non-technical audience in a simple and precise manner.
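As referenced above, the RDS-to-Parquet conversion flow follows the standard Glue PySpark job skeleton. This sketch assumes the Glue Crawler has already cataloged the RDS table; the catalog database, table, and bucket names are hypothetical.

```python
# Glue ETL job: read a cataloged RDS table, write partitioned Parquet to S3.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Source: a table the Glue Crawler cataloged from RDS (SQL Server/Oracle)
source = glue_context.create_dynamic_frame.from_catalog(
    database="rds_catalog",   # hypothetical catalog database
    table_name="dbo_orders",  # hypothetical crawled table
)

# Sink: partitioned Parquet in S3, ready for Athena queries
glue_context.write_dynamic_frame.from_options(
    frame=source,
    connection_type="s3",
    connection_options={
        "path": "s3://my-datalake/orders/",   # hypothetical bucket
        "partitionKeys": ["order_date"],
    },
    format="parquet",
)

job.commit()
```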

Posted 2 weeks ago

Apply