Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
3 - 5 years
5 - 7 Lacs
Mysore
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.

Responsibilities:
Manage end-to-end feature development and resolve the challenges faced in implementing it.
Learn new technologies and apply them to feature development within the provided time frame.
Manage debugging, root-cause analysis, and fixing of issues reported on the Content Management back-end software system.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Overall more than 6 years of experience, with 4+ years of strong hands-on experience in Python and Spark.
Strong technical ability to understand, design, write, and debug applications in Python and PySpark.
Strong problem-solving skills.

Preferred technical and professional experience:
Good to have: hands-on experience with a cloud technology (AWS, GCP, or Azure).
Posted 2 months ago
2 - 3 years
4 - 8 Lacs
Bengaluru
Work from Office
Job ID/Reference Code: INFSYS-NAUKRI-210683
Work Experience: 2-3 years
Job Title: Spark Developer

Responsibilities: A day in the life of an Infoscion. As part of the Infosys consulting team, your primary role would be to actively aid the consulting team in different phases of the project, including problem definition, effort estimation, diagnosis, solution generation, design, and deployment. You will explore alternatives to the recommended solutions based on research that includes literature surveys, information available in public domains, vendor evaluation information, etc., and build POCs. You will create requirement specifications from the business needs, define the to-be processes, and produce detailed functional designs based on requirements. You will support configuring solution requirements on the products; understand any issues, diagnose their root cause, seek clarifications, and then identify and shortlist solution alternatives. You will also contribute to unit-level and organizational initiatives with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!
Technical and Professional Requirements:
Primary skills: Technology -> Big Data - Data Processing -> Spark
Preferred Skills: Technology -> Big Data - Data Processing -> Spark

Additional Responsibilities:
Ability to work with clients to identify business challenges and contribute to client deliverables by refining, analyzing, and structuring relevant data.
Awareness of the latest technologies and trends.
Logical thinking and problem-solving skills, along with an ability to collaborate.
Ability to assess current processes, identify improvement areas, and suggest technology solutions.
Knowledge of one or two industry domains.

Educational Requirements: MCA, MSc, Bachelor of Engineering, BBA, BCom
Service Line: Data & Analytics Unit
* Location of posting is subject to business requirements.
Posted 2 months ago
3 - 5 years
4 - 8 Lacs
Bengaluru
Work from Office
Job ID/Reference Code: INFSYS-NAUKRI-210690
Work Experience: 3-5 years
Job Title: Spark Developer

Responsibilities:
Spark expertise: expert proficiency in Spark.
Ability to design and implement efficient data processing workflows.
Experience with Spark SQL and DataFrames.
Good exposure to Big Data architectures and a good understanding of the Big Data ecosystem.
Some framework-building experience on Hadoop.
Good database knowledge, with SQL-tuning experience.
Good to have: experience with Python and APIs, and exposure to Kafka.

Technical and Professional Requirements:
Primary skills: Technology -> Big Data - Data Processing -> Spark
Preferred Skills: Technology -> Big Data - Data Processing -> Spark

Additional Responsibilities:
Ability to work with clients to identify business challenges and contribute to client deliverables by refining, analyzing, and structuring relevant data.
Awareness of the latest technologies and trends.
Logical thinking and problem-solving skills, along with an ability to collaborate.
Ability to assess current processes, identify improvement areas, and suggest technology solutions.
Knowledge of one or two industry domains.

Educational Requirements: MCA, MSc, Bachelor of Engineering, BBA, BCom, BSc
Service Line: Data & Analytics Unit
* Location of posting is subject to business requirements.
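The "database knowledge with SQL-tuning experience" requirement can be illustrated with a minimal, self-contained sketch using Python's built-in sqlite3 module (the `orders` table and its columns are invented for the example, not taken from the posting):

```python
import sqlite3

# In-memory database with a hypothetical orders table (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders (customer, amount) VALUES (?, ?)",
                 [("acme", 10.0), ("globex", 25.0), ("acme", 40.0)])

def query_plan(sql):
    """Return the plan steps SQLite would use for a query (detail column only)."""
    return [row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

sql = "SELECT SUM(amount) FROM orders WHERE customer = 'acme'"

before = query_plan(sql)  # no index on customer yet: a full table SCAN
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer)")
after = query_plan(sql)   # now a SEARCH using idx_orders_customer

print(before)  # e.g. ['SCAN orders']
print(after)   # e.g. ['SEARCH orders USING INDEX idx_orders_customer (customer=?)']
```

The same EXPLAIN-driven workflow applies to production databases, though the plan output format differs per engine.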
Posted 2 months ago
3 - 7 years
8 - 13 Lacs
Pune
Work from Office
About The Role:
Job Title: Java Developer, AS
Location: Pune, India

Role Description:
We are looking for a Java Developer to produce scalable software solutions on distributed systems like Hadoop using the Spark framework. You will be part of a cross-functional team that's responsible for the full software development life cycle, from conception to deployment. As a Developer, you should be comfortable with back-end coding, development frameworks, third-party libraries, and the Spark APIs required for application development on a distributed platform like Hadoop. The candidate should also be a team player with a knack for visual design and utility. Familiarity with Agile methodologies will be an added advantage. A large part of the workloads and applications will be cloud-based, so GCP knowledge and experience will be handy.

What we'll offer you:
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
Best-in-class leave policy.
Gender-neutral parental leave.
100% reimbursement under the childcare assistance benefit (gender neutral).
Sponsorship for industry-relevant certifications and education.
Employee Assistance Program for you and your family members.
Comprehensive hospitalization insurance for you and your dependents.
Accident and term life insurance.
Complimentary health screening for those 35 yrs. and above.

Your key responsibilities:
Work with development teams and product managers to ideate software solutions.
Design client-side and server-side architecture.
Build features and applications capable of running on distributed platforms and/or the cloud.
Develop and manage well-functioning applications that support a microservices architecture.
Test software to ensure responsiveness and efficiency. Troubleshoot, debug, and upgrade software. Create security and data protection settings. Write technical & design documentation. Write effective APIs (REST & SOAP).

Your skills and experience:
Proven experience as a Java Developer or in a similar role, as an individual contributor or development lead.
Familiarity with common stacks.
Strong knowledge and working experience of Core Java, Spring Boot, REST APIs, the Spark API, etc. is a must.
Knowledge of the React framework and UI experience will be handy.
Knowledge of JUnit, Mockito, or any other testing framework is a must.
Familiarity with GCP services, design/architecture, and security frameworks is an added advantage.
Experience with databases (e.g. Oracle, PostgreSQL, BigQuery).
Familiarity with developing on a distributed application platform like Hadoop with Spark.
Excellent communication and teamwork skills. Organizational skills. An analytical mind.
Degree in Computer Science, Statistics, or a relevant field.
Experience working in Agile.

Good to have:
Knowledge of JavaScript frameworks (e.g. Angular, React, and Node.js) and UI/UX design.
Knowledge of Python would be a big plus.
Knowledge of NoSQL databases like HBase and MongoDB.

Experience: 4-7 years of prior working experience in a global banking/insurance/financial organization.

How we'll support you:
Training and development to help you excel in your career. Coaching and support from experts in your team. A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs.

About us and our teams:
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative, and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group.
We welcome applications from all people and promote a positive, fair and inclusive work environment.
Posted 2 months ago
6 - 10 years
5 - 9 Lacs
Bengaluru
Work from Office
Experience: 6+ years. Location: initially Bengaluru, but must be willing to relocate to Poland after 4 months.

We need to fill a few Hadoop Admin positions. This particular position is for Poland; however, since we are not getting good profiles in the Poland location, the customer has agreed to recruit some good candidates in Bengaluru who will travel to Poland after working for 3-4 months from Bengaluru. Please find the detailed JD below:

Hadoop administration.
Automation (Ansible, shell scripting, or Python scripting).
DevOps skills (should be able to code in at least one language, preferably Python).

The role involves performing Big Data administration and engineering activities on multiple open-source platforms such as Hadoop, Kafka, HBase, and Spark. The successful candidate will possess strong troubleshooting and debugging skills. The role involves planning and performing capacity expansions and upgrades in a timely manner to avoid scaling issues and bugs. This includes automating repetitive tasks to reduce manual effort and prevent human errors. The successful candidate will tune alerting and set up observability to proactively identify issues and performance problems. They will also work closely with Level-3 teams in reviewing new use cases and cluster-hardening techniques to build robust and reliable platforms. The role involves creating standard operating procedure documents and guidelines on effectively managing and utilizing the platforms. The person will leverage DevOps tools, disciplines (incident, problem, and change management), and standards in day-to-day operations. The individual will ensure that the Hadoop platform can effectively meet performance and service-level agreement requirements. They will also perform security remediation, automation, and self-healing as required. The individual will concentrate on developing automations and reports to minimize manual effort.
This can be achieved through various automation tools such as shell scripting, Ansible, or Python scripting, or any other programming language.

Hadoop Administration
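The capacity-monitoring automation described above can be sketched in Python. The report text below is a hypothetical sample in the style of `hdfs dfsadmin -report` output; in production it would be captured via subprocess rather than hard-coded, and the threshold is an invented example value:

```python
import re

# Hypothetical sample of `hdfs dfsadmin -report` output (byte values invented).
REPORT = """\
Configured Capacity: 1000000000 (1 GB)
DFS Used: 850000000 (850 MB)
DFS Remaining: 150000000 (150 MB)
"""

def capacity_used_pct(report: str) -> float:
    """Parse configured and used bytes from the report and return percent used."""
    configured = int(re.search(r"Configured Capacity:\s+(\d+)", report).group(1))
    used = int(re.search(r"DFS Used:\s+(\d+)", report).group(1))
    return 100.0 * used / configured

def needs_expansion(report: str, threshold_pct: float = 80.0) -> bool:
    """Alert hook: flag the cluster for capacity expansion past the threshold."""
    return capacity_used_pct(report) >= threshold_pct

print(capacity_used_pct(REPORT))  # 85.0
print(needs_expansion(REPORT))    # True
```

A real version of this check would feed an alerting system rather than print, which is the "tune alerting and set up observability" part of the role.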
Posted 2 months ago
6 - 10 years
10 - 14 Lacs
Mysore
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.

Responsibilities:
Manage end-to-end feature development and resolve the challenges faced in implementing it.
Learn new technologies and apply them to feature development within the provided time frame.
Manage debugging, root-cause analysis, and fixing of issues reported on the Content Management back-end software system.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Overall more than 6 years of experience, with 4+ years of strong hands-on experience in Python and Spark.
Strong technical ability to understand, design, write, and debug applications in Python and PySpark.
Strong problem-solving skills.

Preferred technical and professional experience:
Good to have: hands-on experience with a cloud technology (AWS, GCP, or Azure).
Posted 2 months ago
8 - 12 years
27 - 32 Lacs
Kochi
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from source to target and implementing solutions that tackle the client's needs.

Your primary responsibilities include:
Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Developed PySpark code for AWS Glue jobs and EMR.
Worked on scalable distributed data systems using the Hadoop ecosystem in AWS EMR and the MapR distribution.
Developed Python and PySpark programs for data analysis.
Good working experience using Python to develop a custom framework for generating rules (much like a rules engine).
Developed Hadoop streaming jobs using Python for integrating Python-API-supported applications.
Developed Python code to gather data from HBase and designed the solution for implementation using PySpark.
Used Apache Spark DataFrames/RDDs to apply business transformations and utilized HiveContext objects to perform read/write operations.
Rewrote some Hive queries in Spark SQL to reduce the overall batch time.

Preferred technical and professional experience:
Understanding of DevOps.
Experience in building scalable end-to-end data ingestion and processing solutions.
Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
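The "custom framework for generating rules" mentioned above can be sketched as a minimal rules engine in plain Python. This is an illustrative assumption, not the posting's actual framework; the rule names and record fields are invented:

```python
# Minimal rules-engine sketch: rules register themselves via a decorator,
# and evaluate() reports which registered rules a record fails.
RULES = []

def rule(name):
    """Decorator that registers a predicate as a named validation rule."""
    def register(predicate):
        RULES.append((name, predicate))
        return predicate
    return register

@rule("amount_positive")
def amount_positive(record):
    return record.get("amount", 0) > 0

@rule("has_customer_id")
def has_customer_id(record):
    return bool(record.get("customer_id"))

def evaluate(record):
    """Run every registered rule; return the names of the rules that failed."""
    return [name for name, predicate in RULES if not predicate(record)]

good = {"customer_id": "C-1", "amount": 12.5}
bad = {"amount": -3}
print(evaluate(good))  # []
print(evaluate(bad))   # ['amount_positive', 'has_customer_id']
```

In a Spark setting the same predicate registry could be applied per row inside a DataFrame transformation, which is presumably how such a framework meets the PySpark requirement.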
Posted 2 months ago
2 - 5 years
14 - 17 Lacs
Mysore
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.

Responsibilities:
Manage end-to-end feature development and resolve the challenges faced in implementing it.
Learn new technologies and apply them to feature development within the provided time frame.
Manage debugging, root-cause analysis, and fixing of issues reported on the Content Management back-end software system.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Overall more than 6 years of experience, with 4+ years of strong hands-on experience in Python and Spark.
Strong technical ability to understand, design, write, and debug applications in Python and PySpark.
Strong problem-solving skills.

Preferred technical and professional experience:
Good to have: hands-on experience with a cloud technology (AWS, GCP, or Azure).
Posted 2 months ago
8 - 13 years
30 - 35 Lacs
Bengaluru
Work from Office
Qualitest India Private Limited is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey.

Liaising with coworkers and clients to elucidate the requirements for each task.
Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
Reformulating existing frameworks to optimize their functioning.
Testing such structures to ensure that they are fit for use.
Preparing raw data for manipulation by data scientists.
Detecting and correcting errors in your work.
Ensuring that your work remains backed up and readily accessible to relevant coworkers.
Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.
Posted 2 months ago
5 - 10 years
14 - 17 Lacs
Bengaluru
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from source to target and implementing solutions that tackle the client's needs.

Your primary responsibilities include:
Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Must have 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive.
Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
Developed Python and PySpark programs for data analysis.
Good working experience using Python to develop a custom framework for generating rules (much like a rules engine).
Developed Python code to gather data from HBase and designed the solution for implementation using PySpark.
Used Apache Spark DataFrames/RDDs to apply business transformations and utilized HiveContext objects to perform read/write operations.

Preferred technical and professional experience:
Understanding of DevOps.
Experience in building scalable end-to-end data ingestion and processing solutions.
Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
Posted 2 months ago
5 - 10 years
14 - 17 Lacs
Hyderabad
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from source to target and implementing solutions that tackle the client's needs.

Your primary responsibilities include:
Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Must have 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive.
Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
Developed Python and PySpark programs for data analysis.
Good working experience using Python to develop a custom framework for generating rules (much like a rules engine).
Developed Python code to gather data from HBase and designed the solution for implementation using PySpark.
Used Apache Spark DataFrames/RDDs to apply business transformations and utilized HiveContext objects to perform read/write operations.

Preferred technical and professional experience:
Understanding of DevOps.
Experience in building scalable end-to-end data ingestion and processing solutions.
Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
Posted 2 months ago
4 - 9 years
12 - 16 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS cloud data platform.

Responsibilities:
Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases.
Process the data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS.
Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala, and Big Data technologies for various use cases built on the platform.
Experience in developing streaming pipelines.
Experience working with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Total 5-7+ years of experience in data management (DW, DL, data platform, lakehouse) and data engineering skills.
Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala.
Minimum 3 years of experience on cloud data platforms on AWS; exposure to streaming solutions and message brokers such as Kafka.
Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB.
Good to excellent SQL skills.

Preferred technical and professional experience:
Certification in AWS and Databricks, or Cloudera-certified Spark developers.
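The streaming-pipeline experience called for above can be sketched as a pure-Python micro-batch pipeline; an in-memory generator stands in for a Kafka topic, and the event schema is invented for the example:

```python
from itertools import islice

def event_stream():
    """Stand-in for a Kafka topic: yields raw events (hypothetical schema)."""
    for i in range(10):
        yield {"id": i, "value": i * 10}

def micro_batches(stream, batch_size):
    """Group a stream into fixed-size micro-batches, like a streaming trigger."""
    it = iter(stream)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

def transform(batch):
    """Per-batch transformation: keep events with even ids, project value."""
    return [e["value"] for e in batch if e["id"] % 2 == 0]

sink = []  # stand-in for the target store
for batch in micro_batches(event_stream(), batch_size=4):
    sink.extend(transform(batch))

print(sink)  # [0, 20, 40, 60, 80]
```

The ingest/transform/sink shape mirrors what a Spark Structured Streaming job does with a Kafka source, just without the distributed runtime.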
Posted 2 months ago
4 - 9 years
12 - 16 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS cloud data platform.

Responsibilities:
Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases.
Process the data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS.
Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala, and Big Data technologies for various use cases built on the platform.
Experience in developing streaming pipelines.
Experience working with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala.
Minimum 3 years of experience on cloud data platforms on AWS.
Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB.
Good to excellent SQL skills.
Exposure to streaming solutions and message brokers such as Kafka.

Preferred technical and professional experience:
Certification in AWS and Databricks, or Cloudera-certified Spark developers.
Posted 2 months ago
4 - 9 years
12 - 16 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure cloud data platform.

Responsibilities:
Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases.
Process the data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure cloud data platform or HDFS.
Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala, and Big Data technologies for various use cases built on the platform.
Experience in developing streaming pipelines.
Experience working with Hadoop/Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala.
Minimum 3 years of experience on cloud data platforms on Azure.
Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB.
Good to excellent SQL skills.
Exposure to streaming solutions and message brokers such as Kafka.

Preferred technical and professional experience:
Certification in Azure and Databricks, or Cloudera-certified Spark developers.
Posted 2 months ago
5 - 10 years
14 - 17 Lacs
Hyderabad
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from source to target and implementing solutions that tackle the client's needs.

Your primary responsibilities include:
Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Must have 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive.
Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
Developed Python and PySpark programs for data analysis.
Good working experience using Python to develop a custom framework for generating rules (much like a rules engine).
Developed Python code to gather data from HBase and designed the solution for implementation using PySpark.
Used Apache Spark DataFrames/RDDs to apply business transformations and utilized HiveContext objects to perform read/write operations.

Preferred technical and professional experience:
Understanding of DevOps.
Experience in building scalable end-to-end data ingestion and processing solutions.
Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
Posted 2 months ago
4 - 9 years
12 - 16 Lacs
Pune
Work from Office
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure cloud data platform.

Responsibilities:
Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases.
Process the data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure cloud data platform or HDFS.
Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala, and Big Data technologies for various use cases built on the platform.
Experience in developing streaming pipelines.
Experience working with Hadoop/Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala.
Minimum 3 years of experience on cloud data platforms on Azure.
Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB.
Good to excellent SQL skills.
Exposure to streaming solutions and message brokers such as Kafka.

Preferred technical and professional experience:
Certification in Azure and Databricks, or Cloudera-certified Spark developers.
Posted 2 months ago
7 - 11 years
9 - 14 Lacs
Bengaluru
Work from Office
We are looking for Lead or Principal Software Engineers to join our Data Cloud team. Our Data Cloud team is responsible for the Zeta Identity Graph platform, which captures billions of behavioural, demographic, environmental, and transactional signals for people-based marketing. As part of this team, the data engineer will be designing and growing our existing data infrastructure to democratize data access, enable complex data analyses, and automate optimization workflows for business and marketing operations.

Essential Responsibilities: As a Lead or Principal Data Engineer, your responsibilities will include:
Building, refining, tuning, and maintaining our real-time and batch data infrastructure.
Daily use of technologies such as HDFS, Spark, Snowflake, Hive, HBase, Scylla, Django, FastAPI, etc.
Maintaining data quality and accuracy across production data systems.
Working with Data Engineers to optimize data models and workflows.
Working with Data Analysts to develop ETL processes for analysis and reporting.
Working with Product Managers to design and build data products.
Working with our DevOps team to scale and optimize our data infrastructure.
Participating in architecture discussions, influencing the road map, and taking ownership of and responsibility for new projects.
Participating in a 24/7 on-call rotation (be available by phone or email in case something goes wrong).

Desired Characteristics:
Minimum 7 years of software engineering experience.
Proven long-term experience with and enthusiasm for distributed data processing at scale; eagerness to learn new things.
Expertise in designing and architecting distributed, low-latency, and scalable solutions in either cloud or on-premises environments.
Exposure to the whole software development lifecycle, from inception to production and monitoring.
Fluency in Python, or solid experience in Scala or Java.
Proficiency with relational databases and advanced SQL.
Expert in the use of services like Spark, HDFS, Hive, and HBase.
Experience using a scheduler such as Apache Airflow, Luigi, or Chronos.
Experience using cloud services (AWS) at scale.
Experience with agile software development processes.
Excellent interpersonal and communication skills.

Nice to have:
Experience with large-scale/multi-tenant distributed systems.
Experience with columnar/NoSQL databases (Vertica, Snowflake, HBase, Scylla, Couchbase).
Experience with real-time streaming frameworks (Flink, Storm).
Experience with web frameworks such as Flask and Django.
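The scheduler experience mentioned above (Airflow, Luigi, Chronos) boils down to running tasks in dependency order. A minimal sketch using Python's standard-library graphlib; the task names are hypothetical, and this is not the Airflow API:

```python
from graphlib import TopologicalSorter

# Hypothetical ETL task graph: each task maps to the set of tasks it depends
# on, in the spirit of an Airflow DAG definition.
deps = {
    "extract": set(),
    "clean": {"extract"},
    "aggregate": {"clean"},
    "report": {"aggregate", "clean"},
}

# static_order() yields tasks so that every dependency precedes its dependents.
order = list(TopologicalSorter(deps).static_order())
print(order)  # e.g. ['extract', 'clean', 'aggregate', 'report']
```

Real schedulers add retries, backfills, and parallel execution of independent tasks on top of exactly this ordering guarantee.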
Posted 2 months ago
4 - 7 years
5 - 9 Lacs
Pune
Work from Office
Job Description
Job Title: Java Developer, AS
Location: Pune, India

Role Description:
We are looking for a Java Developer to produce scalable software solutions on distributed systems like Hadoop using the Spark framework. You will be part of a cross-functional team that's responsible for the full software development life cycle, from conception to deployment. As a Developer, you should be comfortable with back-end coding, development frameworks, third-party libraries, and the Spark APIs required for application development on a distributed platform like Hadoop. The candidate should also be a team player with a knack for visual design and utility. Familiarity with Agile methodologies will be an added advantage. A large part of the workloads and applications will be cloud-based, so GCP knowledge and experience will be handy.

What we'll offer you:
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
Best-in-class leave policy.
Gender-neutral parental leave.
100% reimbursement under the childcare assistance benefit (gender neutral).
Sponsorship for industry-relevant certifications and education.
Employee Assistance Program for you and your family members.
Comprehensive hospitalization insurance for you and your dependents.
Accident and term life insurance.
Complimentary health screening for those 35 yrs. and above.

Your key responsibilities:
Work with development teams and product managers to ideate software solutions.
Design client-side and server-side architecture.
Build features and applications capable of running on distributed platforms and/or the cloud.
Develop and manage well-functioning applications that support a microservices architecture.
Test software to ensure responsiveness and efficiency Troubleshoot, debug and upgrade software Create security and data protection settings Write technical & design documentation Write effective APIs (REST & SOAP) Your skills and experience Proven experience as a Java Developer or similar role - as an individual contributor or development lead Familiarity with common stacks Strong Knowledge and working experience of Core Java, Spring Boot, Rest APIs, Spark API etc. is a must Knowledge of React framework and UI experience will be handy. Knowledge of Junit, Mockito, or any other framework(s) is a must. Familiarity with GCP services, design / architecture and security frameworks is an added advantage. Experiences with databases (e. g. Oracle, PostgreSQL, BigQuery) Familiar with developing on distributed application platform like Hadoop with Spark Excellent communication and teamwork skills Organizational skills An analytical mind Degree in Computer Science, Statistics or relevant field Experience working in Agile Good to have Knowledge of JavaScript frameworks (e. g. Angular, React, and Node. js) and UI/UX design Knowledge on Python would be a big plus. Knowledge on NoSQL databases like HBASE, MONGO. Experience 4-7 years of prior working experience in a global banking / insurance/financial organization. How we ll support you Training and development to help you excel in your career. Coaching and support from experts in your team. A culture of continuous learning to aid progression. A range of flexible benefits that you can tailor to suit your needs. About us and our teams Please visit our company website for further information https//www. db. com/company/company. htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively.
Posted 2 months ago
8 - 13 years
30 - 35 Lacs
Pune
Work from Office
Join us as a Data Engineer at Barclays, where you will spearhead the evolution of our infrastructure and deployment pipelines, driving innovation and operational excellence. You will harness cutting-edge technology to build and manage robust, scalable, and secure infrastructure, ensuring seamless delivery of our digital solutions.
To be successful as a Data Engineer you should have experience with:
Knowledge of distributed computing architecture, core Hadoop components (HDFS, Spark, YARN, MapReduce, HBase, Hive, Impala), and Scala/Python
AWS or other cloud-related experience
Experience/exposure to SQL, with advanced SQL skills
Technical design and development of ETL/Hadoop and analytics services/components
You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. The role is based out of Pune.
Purpose of the role
To design, develop, and improve software, utilising various engineering methodologies, that provides business, platform, and technology capabilities for our customers and colleagues.
Accountabilities
Development and delivery of high-quality software solutions using industry-aligned programming languages, frameworks, and tools, ensuring that code is scalable, maintainable, and optimised for performance
Cross-functional collaboration with product managers, designers, and other engineers to define software requirements, devise solution strategies, and ensure seamless integration and alignment with business objectives
Collaboration with peers, participation in code reviews, and promotion of a culture of code quality and knowledge sharing
Staying informed of industry technology trends and innovations, and actively contributing to the organisation's technology communities to foster a culture of technical excellence and growth
Adherence to secure coding practices to mitigate vulnerabilities, protect sensitive data, and ensure secure software solutions
Implementation of effective unit testing practices to ensure proper code design, readability, and reliability
Analyst Expectations
To perform prescribed activities in a timely manner and to a consistently high standard, driving continuous improvement. The role requires in-depth technical knowledge and experience in the assigned area of expertise, and a thorough understanding of the underlying principles and concepts within that area. They lead and supervise a team, guiding and supporting professional development, allocating work requirements, and coordinating team resources. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L - Listen and be authentic, E - Energise and inspire, A - Align across the enterprise, D - Develop others. For an individual contributor, they develop technical expertise in their work area, acting as an advisor where appropriate. They will have an impact on the work of related teams within the area, partner with other functions and business areas, and take responsibility for the end results of a team's operational processing and activities. Escalate breaches of policies/procedures appropriately. Take responsibility for embedding new policies/procedures adopted due to risk mitigation. Advise and influence decision making within your own area of expertise. Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to. Deliver your work and areas of responsibility in line with relevant rules, regulations, and codes of conduct.
Maintain and continually build an understanding of how your own sub-function integrates with the function, alongside knowledge of the organisation's products, services, and processes within the function. Demonstrate an understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation's sub-function. Make evaluative judgements based on the analysis of factual information, paying attention to detail. Resolve problems by identifying and selecting solutions through the application of acquired technical experience, guided by precedents. Guide and persuade team members and communicate complex/sensitive information. Act as a contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organisation.
Posted 2 months ago
10 - 15 years
35 - 40 Lacs
Bengaluru
Work from Office
Role Overview: The Big Data Architect will be responsible for the design, implementation, and management of the organization's big data infrastructure. The ideal candidate will have a strong technical background in big data technologies, excellent problem-solving skills, and the ability to work in a fast-paced environment. The role requires a deep understanding of data architecture, data modeling, and data integration techniques.
About the Role:
Design and implement scalable and efficient big data architecture solutions to meet business requirements
Develop and maintain data pipelines, ensuring the availability and quality of data
Collaborate with data scientists, data engineers, and other stakeholders to understand data needs and provide technical solutions
Lead the evaluation and selection of big data tools and technologies
Ensure data security and privacy compliance
Optimize and tune big data systems for performance and cost-efficiency
Document data architecture, data flows, and processes
Stay up to date with the latest industry trends and best practices in big data technologies
About You:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
Overall 10+ years of experience, with 5+ years in big data architecture and engineering
Proficiency in big data technologies such as Hadoop MapReduce, Spark (batch and streaming), Kafka, HBase, Scala, Elasticsearch, and others
Experience with the AWS cloud platform
Strong knowledge of data modeling, ETL processes, and data warehousing
Proficiency in programming languages such as Java, Scala, and Spark
Familiarity with data visualization tools and techniques
Excellent communication and collaboration skills
Strong problem-solving abilities and attention to detail
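The posting above asks for Hadoop MapReduce and Spark batch experience. As a rough illustration of the MapReduce model those tools implement (map, shuffle, reduce), here is a minimal pure-Python word-count sketch — real jobs would run distributed across a Hadoop or Spark cluster, not in a single process:

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.lower().split():
            yield word, 1

def shuffle_phase(pairs):
    """Shuffle: group all values by key, as the framework does between phases."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: sum the counts for each word."""
    return {word: sum(values) for word, values in grouped.items()}

docs = ["spark and hbase", "hbase on hadoop", "spark streaming"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
print(counts["spark"])  # → 2
print(counts["hbase"])  # → 2
```

The same three-phase shape underlies Spark's `map`/`groupByKey`/`reduceByKey` operators, just with partitioning and fault tolerance handled by the engine.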
Posted 2 months ago
6 - 10 years
8 - 12 Lacs
Mysore
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.
Responsibilities:
Manage end-to-end feature development and resolve challenges faced in implementing it
Learn new technologies and apply them in feature development within the time frame provided
Manage debugging, root cause analysis, and fixing of the issues reported on the Content Management back-end software system
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
Overall more than 6 years of experience, with 4+ years of strong hands-on experience in Python and Spark
Strong technical ability to understand, design, write, and debug applications in Python and PySpark
Strong problem-solving skills
Preferred technical and professional experience:
Good to have: hands-on experience with a cloud technology (AWS/GCP/Azure)
Posted 2 months ago
4 - 9 years
10 - 20 Lacs
Pune
Work from Office
Design, develop, and maintain efficient data processing pipelines using PySpark. Implement best practices for ETL processes, ensuring high-quality and secure data. Monitor, troubleshoot, and resolve issues related to data pipelines and infrastructure.
Required Candidate profile:
Experience in PySpark and Python. Experience with big data frameworks like Hadoop, Spark, or Kafka. Experience working with cloud platforms such as AWS, Azure, or GCP. Experience with data modeling and working with databases.
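The role above centres on extract-transform-load pipelines. Here is a minimal sketch of the ETL pattern in plain Python — the field names and data-quality rule are made up for illustration, and a real implementation would use PySpark DataFrames on a cluster rather than generators in one process:

```python
def extract(raw_rows):
    """Extract: parse raw CSV-style lines into records."""
    for line in raw_rows:
        user_id, amount = line.split(",")
        yield {"user_id": user_id.strip(), "amount": float(amount)}

def transform(records):
    """Transform: drop invalid rows and normalise values."""
    for rec in records:
        if rec["amount"] >= 0:  # basic data-quality rule: no negative amounts
            rec["amount"] = round(rec["amount"], 2)
            yield rec

def load(records, sink):
    """Load: append cleaned records to a target store (a plain list here)."""
    for rec in records:
        sink.append(rec)

raw = ["u1, 10.50", "u2, -3.0", "u3, 7.5"]
warehouse = []
load(transform(extract(raw)), warehouse)
print(warehouse)  # → [{'user_id': 'u1', 'amount': 10.5}, {'user_id': 'u3', 'amount': 7.5}]
```

Chaining generators keeps each stage independent and testable, which is the same separation of concerns PySpark pipelines aim for.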
Posted 2 months ago
5 - 10 years
10 - 20 Lacs
Ahmedabad, Bengaluru, Hyderabad
Work from Office
Skills: Big Data, Spark, HBase
Experience: 5-16 Yrs
Education: BE/BTech/ME/MTech/MCA
Location: PAN INDIA
Posted 2 months ago
5 - 10 years
0 - 1 Lacs
Pune
Work from Office
Position Overview: Cloud Data Engineer with expertise in the Google Cloud Platform (GCP) data stack, including Event Hub, MS SQL DB, Azure Redis, and GCP Bigtable storage. The ideal candidate should have strong experience in Big Data architecture, data migration, and large-scale data processing using tools like Hadoop, Hive, HDFS, Impala, Spark, MapReduce, MS SQL, Kafka, and Redis. Familiarity with Cloudera, HBase, MongoDB, MariaDB, Python scripts, and Unix shell scripting is a plus.
Key Responsibilities:
Design, develop, and optimize Big Data solutions on GCP and cloud-based architectures
Lead and execute data migration projects from on-premise systems to GCP or hybrid cloud environments
Build and maintain ETL pipelines using Hadoop, Hive, Spark, Kafka, and SQL databases
Work with GCP Bigtable storage, Azure Redis, and Event Hub for data processing and storage
Implement real-time streaming solutions using Kafka and Event Hub
Optimize performance and security for Hadoop clusters, HDFS, and cloud storage solutions
Develop and automate Python and Unix shell scripts for data processing and workflow orchestration
Collaborate with data analysts, data scientists, and DevOps teams to improve data infrastructure
Required Skills & Experience:
5-10 years of experience as a Cloud Data Engineer
Strong hands-on experience with the stack above (Bigtable storage, Event Hub, Azure Redis, MS SQL DB)
Proficiency in Big Data technologies (Hadoop, Hive, HDFS, Impala, Spark, MapReduce)
Experience with Kafka, Redis, and real-time data processing
Hands-on knowledge of SQL and NoSQL databases (MS SQL, HBase, MongoDB, MariaDB)
Experience in data migration projects across cloud and on-premise environments
Strong scripting skills in Python and Unix shell scripting
Understanding of Big Data security, performance tuning, and scalability best practices
Location: Koregaon Park, Pune, Maharashtra (India)
Shift Timings: USA Time Zone (06:30 PM IST to 03:30 AM IST)
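The real-time streaming responsibility above (Kafka/Event Hub) boils down to consuming timestamped events and aggregating them over windows. Here is a minimal pure-Python sketch of a tumbling-window count, with made-up click events — a real pipeline would consume from a Kafka topic via a client library and handle late arrivals:

```python
from collections import Counter

def tumbling_window_counts(events, window_seconds):
    """Assign each (timestamp, key) event to a fixed-size window and count per (window, key)."""
    counts = Counter()
    for ts, key in events:
        # Floor the timestamp to the start of its window, e.g. 105 -> 60 for 60s windows.
        window_start = (ts // window_seconds) * window_seconds
        counts[(window_start, key)] += 1
    return counts

# Hypothetical click events: (epoch seconds, page)
events = [(100, "home"), (105, "home"), (119, "cart"), (121, "home"), (130, "cart")]
counts = tumbling_window_counts(events, window_seconds=60)
print(counts[(60, "home")])  # → 2 ("home" events at 100 and 105 fall in window [60, 120))
```

Streaming engines such as Spark Structured Streaming or Flink provide the same windowed grouping as a built-in operator, plus watermarking for out-of-order data.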
Posted 2 months ago
5 - 10 years
0 - 1 Lacs
Pune
Work from Office
Position Overview: Cloud Architect with expertise in Hadoop and the Google Cloud Platform (GCP) data stack, along with experience in Big Data architecture and migration. The ideal candidate should have strong proficiency in GCP Big Data tools, including Hadoop, Hive, HDFS, Impala, Spark, MapReduce, MS SQL, Kafka, and Redis. Familiarity with Cloudera, HBase, MongoDB, MariaDB, and Event Hub is a plus.
Key Responsibilities:
Design, implement, and optimize Big Data architecture on GCP and Hadoop ecosystems
Lead data migration projects from on-premise to cloud platforms (GCP)
Develop and maintain ETL pipelines using tools like Spark, Hive, and Kafka
Manage Hadoop clusters, HDFS, and related components
Work with data streaming technologies like Kafka and Event Hub for real-time data processing
Optimize SQL and NoSQL databases (MS SQL, Redis, MongoDB, MariaDB, HBase) for high availability and scalability
Collaborate with data scientists, analysts, and DevOps teams to integrate Big Data solutions
Ensure data security, governance, and compliance in cloud and on-premise environments
Required Skills & Experience:
5-10 years of experience as a Cloud Architect
Strong expertise in Hadoop (HDFS, Hive, Impala, Spark, MapReduce)
Hands-on experience with GCP Big Data services
Proficiency in MS SQL, Kafka, and Redis for data processing and analytics
Experience with Cloudera, HBase, MongoDB, and MariaDB
Knowledge of real-time data streaming and event-driven architectures
Understanding of Big Data security and performance optimization
Ability to design and execute data migration strategies
Location: Koregaon Park, Pune, Maharashtra (India)
Shift Timings: USA Time Zone (06:30 PM IST to 03:30 AM IST)
Posted 2 months ago
HBase is a distributed, scalable NoSQL database commonly used in big data applications. As demand for big data solutions continues to grow, so does the demand for professionals with HBase skills in India. Job seekers looking to explore opportunities in this field can find a variety of roles across different industries and sectors.
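Concretely, HBase stores data as a sparse, sorted, versioned map: row key → column family:qualifier → timestamped values. Here is a toy pure-Python model of that layout — real access goes through the HBase client API or a library such as happybase, and the table and keys below are made up for illustration:

```python
from collections import defaultdict

class MiniHBaseTable:
    """Toy model of HBase's layout: row key -> 'family:qualifier' -> {timestamp: value}."""

    def __init__(self):
        self.rows = defaultdict(lambda: defaultdict(dict))

    def put(self, row_key, column, value, timestamp):
        # Each cell keeps multiple timestamped versions, as HBase does.
        self.rows[row_key][column][timestamp] = value

    def get(self, row_key, column):
        """Return the latest version of a cell, as HBase does by default."""
        versions = self.rows[row_key][column]
        return versions[max(versions)] if versions else None

table = MiniHBaseTable()
# Row keys are sorted lexicographically in HBase; here we just use a user id.
table.put("user123", "info:city", "Pune", timestamp=1)
table.put("user123", "info:city", "Mysore", timestamp=2)
latest = table.get("user123", "info:city")
print(latest)  # → Mysore (the newest version wins)
```

This row-key-centric model is why row-key design (avoiding hotspots, keeping related rows adjacent) is a core HBase skill interviewers probe.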
These cities are known for their strong presence in the IT industry and are actively hiring professionals with HBase skills.
The salary range for HBase professionals in India can vary based on experience and location. Entry-level positions may start at around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 15-20 lakhs per annum.
In the HBase domain, a typical career progression may look like:
- Junior HBase Developer
- HBase Developer
- Senior HBase Developer
- HBase Architect
- HBase Administrator
- HBase Consultant
- HBase Team Lead
In addition to HBase expertise, professionals in this field are often expected to have knowledge of:
- Apache Hadoop
- Apache Spark
- Data modeling
- Java programming
- Database design
- Linux/Unix
As you prepare for HBase job opportunities in India, make sure to brush up on your technical skills, practice coding exercises, and be ready to showcase your expertise in interviews. With the right preparation and confidence, you can land a rewarding career in the exciting field of HBase. Good luck!