
552 HBase Jobs - Page 10

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

5.0 - 8.0 years

7 - 10 Lacs

Pune

Work from Office

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will build data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform, and build pipelines that ingest, process, and transform data from files, streams, and databases. You will process data with Spark, Python, PySpark, and Hive, with HBase or other NoSQL databases, on the Azure Cloud Data Platform or HDFS. You should be experienced in developing efficient software for the various use cases built on the platform using the Spark framework with Python or Scala, in developing streaming pipelines, and in working with Hadoop/Azure ecosystem components (Apache Spark, Kafka, cloud computing, etc.) to implement scalable solutions for ever-increasing data volumes.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise: 5-8 years of total experience in data management (DW, DL, data platform, lakehouse) and data engineering; minimum 4 years in Big Data technologies with extensive data engineering experience in Spark with Python or Scala; minimum 3 years on Azure cloud data platforms; experience with Databricks, Azure HDInsight, Azure Data Factory, Synapse, and SQL Server; good to excellent SQL skills.

Preferred technical and professional experience: certification in Azure and Databricks, or Cloudera Certified Spark Developer; experience with Databricks, Azure HDInsight, Azure Data Factory, Synapse, and SQL Server; knowledge or experience of Snowflake is an added advantage.
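The streaming pipelines this listing describes typically follow a micro-batch pattern. A dependency-free sketch in plain Python (all names and batch sizes invented for illustration; a real job would use Spark Structured Streaming):

```python
# Dependency-free sketch of micro-batch stream processing, the pattern behind
# Spark streaming pipelines. Batch size and events are illustrative only.
from itertools import islice

def micro_batches(stream, size):
    """Group an event stream into fixed-size batches, like Spark micro-batching."""
    it = iter(stream)
    while batch := list(islice(it, size)):
        yield batch

events = [{"id": i, "value": i * 10} for i in range(5)]
# Aggregate each micro-batch independently, as a streaming job would per trigger.
totals = [sum(e["value"] for e in batch) for batch in micro_batches(events, 2)]
print(totals)  # → [10, 50, 40]
```

In a real deployment, each batch would be a DataFrame written incrementally to Hive, HBase, or cloud storage.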

Posted 4 weeks ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Mumbai

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that address the client's needs.

Your primary responsibilities include: designing, building, optimizing, and supporting new and existing data models and ETL processes based on our clients' business requirements; building, deploying, and managing data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization; and coordinating data access and security so that data scientists and analysts can easily access data whenever they need to.

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise: 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis; using Python to build a custom rules-generation framework (akin to a rules engine); gathering data from HBase and implementing solutions with PySpark; and using Spark DataFrames/RDDs to apply business transformations and HiveContext objects to perform read/write operations.

Preferred technical and professional experience: understanding of DevOps; experience building scalable end-to-end data ingestion and processing solutions; experience with object-oriented and/or functional programming languages such as Python, Java, and Scala.
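The "custom rules framework" mentioned above usually means data-driven predicates applied to each record. A minimal sketch in plain Python (the `Rule` class, rule names, and sample records are invented for illustration, not from any actual codebase):

```python
# Minimal rules-engine sketch: each rule is a predicate plus an action label.
# All names (Rule, evaluate, the sample rules) are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    predicate: Callable[[dict], bool]
    action: str

def evaluate(record: dict, rules: list) -> list:
    """Return the actions of every rule whose predicate matches the record."""
    return [r.action for r in rules if r.predicate(record)]

rules = [
    Rule("null_check", lambda r: r.get("amount") is None, "reject"),
    Rule("high_value", lambda r: (r.get("amount") or 0) > 10_000, "flag_review"),
]

print(evaluate({"amount": 25_000}, rules))  # → ['flag_review']
print(evaluate({"amount": None}, rules))    # → ['reject']
```

At scale, the same predicate list would be applied inside a PySpark `filter`/`withColumn` over a DataFrame rather than row by row in Python.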

Posted 4 weeks ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Mumbai

Work from Office

Role Overview: Lead the architectural design and implementation of a secure, scalable Cloudera-based Data Lakehouse for one of India’s top public sector banks.

Key Responsibilities:
* Design end-to-end Lakehouse architecture on Cloudera
* Define data ingestion, processing, storage, and consumption layers
* Guide data modeling, governance, lineage, and security best practices
* Define the migration roadmap from the existing DWH to CDP
* Lead reviews with client stakeholders and engineering teams

Required education: Bachelor's degree. Preferred education: Master's degree.

Required technical and professional expertise:
* Proven experience with Cloudera CDP, Spark, Hive, HDFS, Iceberg
* Deep understanding of Lakehouse patterns and data mesh principles
* Familiarity with data governance tools (e.g., Apache Atlas, Collibra)
* Banking/FSI domain knowledge highly desirable

Posted 4 weeks ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Bengaluru

Work from Office

YOUR IMPACT
Are you passionate about developing mission-critical, high-quality software solutions using cutting-edge technology in a dynamic environment? We are Compliance Engineering, a global team of more than 300 engineers and scientists who work on the most complex, mission-critical problems. We build and operate a suite of platforms and applications that prevent, detect, and mitigate regulatory and reputational risk across the firm; have access to the latest technology and to massive amounts of structured and unstructured data; and leverage modern frameworks to build responsive, intuitive UX/UI and Big Data applications. Compliance Engineering is looking to fill several big data software engineering roles. Your first deliverable and success criterion will be the deployment, in 2025, of new complex data pipelines and surveillance models to detect inappropriate trading activity.

HOW YOU WILL FULFILL YOUR POTENTIAL
As a member of our team, you will: partner globally with sponsors, users, and engineering colleagues across multiple divisions to create end-to-end solutions; learn from experts; leverage various technologies including Java, Spark, Hadoop, Flink, MapReduce, HBase, JSON, Protobuf, Presto, Elasticsearch, Kafka, and Kubernetes; innovate and incubate new ideas; work on a broad range of problems, including negotiating data contracts, capturing data quality metrics, processing large-scale data, and building surveillance detection models; and be involved in the full life cycle: defining, designing, implementing, testing, deploying, and maintaining software systems across our products.

QUALIFICATIONS
A successful candidate will possess the following attributes: a Bachelor's or Master's degree in Computer Science, Computer Engineering, or a similar field of study; expertise in Java, as well as proficiency with databases and data manipulation; experience in end-to-end solutions, automated testing, and SDLC concepts; and the ability (and tenacity) to clearly express ideas and arguments in meetings and on paper. Experience in some of the following is desired and can set you apart from other candidates: developing in large-scale systems, such as MapReduce on Hadoop/HBase; data analysis using tools such as SQL, Spark SQL, and Zeppelin/Jupyter; API design, such as creating interconnected services; knowledge of the financial industry and compliance or risk functions; and the ability to influence stakeholders.
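Surveillance models of the kind described often start as aggregate SQL queries. A toy sketch using Python's built-in SQLite (the schema, data, and threshold are invented for illustration; production queries would run on Spark SQL or Presto over far larger data):

```python
# Toy surveillance-style aggregation in stdlib SQLite, standing in for the
# Spark SQL / Presto queries such teams use. Schema and threshold are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (trader TEXT, symbol TEXT, qty INTEGER)")
conn.executemany(
    "INSERT INTO trades VALUES (?, ?, ?)",
    [("a", "XYZ", 500), ("a", "XYZ", 700), ("b", "XYZ", 100)],
)

# Flag traders whose total quantity in one symbol exceeds a review threshold.
flagged = conn.execute(
    """SELECT trader, symbol, SUM(qty) AS total
       FROM trades GROUP BY trader, symbol
       HAVING total > 1000"""
).fetchall()
print(flagged)  # → [('a', 'XYZ', 1200)]
```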

Posted 1 month ago

Apply

5.0 - 10.0 years

25 - 37 Lacs

Pune

Work from Office

Mandatory Skills: PySpark, Big Data technologies.

Role Overview: Synechron is hiring a skilled PySpark Developer for its advanced data engineering team in Pune. The ideal candidate will have strong experience building scalable data pipelines and solutions using PySpark, with a solid understanding of Big Data ecosystems.

Key Responsibilities: Design, build, and maintain high-performance batch and streaming data pipelines using PySpark. Work with large-scale data processing frameworks and big data tools. Optimize and troubleshoot PySpark jobs for efficient performance. Collaborate with data scientists, analysts, and architects to translate business needs into technical solutions. Ensure best practices in code quality, version control, and documentation.

Preferred Qualifications: Hands-on experience with Big Data tools like Hive, HDFS, or HBase. Exposure to cloud-based data services (AWS, Azure, or GCP). Familiarity with workflow orchestration tools like Airflow or Oozie. Strong analytical, problem-solving, and communication skills.

Educational Qualification: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
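The batch pipelines described above follow an ingest → transform → load shape. A dependency-free sketch in plain Python (field names and sample rows are invented; in the actual role each stage would be a PySpark DataFrame operation):

```python
# Dependency-free sketch of an ingest → transform → load pipeline; in PySpark the
# same stages would be spark.read, DataFrame transformations, and df.write.
def ingest(rows):
    # Source stage: yield raw records lazily, like a streaming read.
    yield from rows

def transform(records):
    # Cleanse and derive: drop incomplete rows, normalise the city field.
    for r in records:
        if r.get("city"):
            yield {**r, "city": r["city"].strip().title()}

def load(records):
    # Sink stage: collect results (a real job would write to Hive/HBase/Parquet).
    return list(records)

raw = [{"id": 1, "city": " pune "}, {"id": 2, "city": None}, {"id": 3, "city": "MUMBAI"}]
out = load(transform(ingest(raw)))
print(out)  # → [{'id': 1, 'city': 'Pune'}, {'id': 3, 'city': 'Mumbai'}]
```

Composing the stages as generators keeps each step lazily evaluated, loosely analogous to Spark's lazy transformation graph.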

Posted 1 month ago

Apply

10.0 - 15.0 years

50 - 80 Lacs

Bengaluru

Work from Office

About Team: This role is focused on Marketplace Risk & Fraud Engineering.

What you'll do: Understand business problems and suggest technology solutions. Architect, design, build, and deploy technology solutions at scale. Raise the bar on sustainable engineering by improving best practices and producing best-in-class code, documentation, testing, and monitoring. Estimate effort, identify risks, and plan execution. Mentor/coach other engineers on the team to facilitate their development and provide technical leadership. Rise above details as and when needed to spot broader issues/trends and their implications for the product/team as a whole.

What you'll bring: 10+ years of experience in the design and development of highly scalable applications in product-based companies or R&D divisions. Strong computer systems fundamentals, DS/algorithms, and problem-solving skills. 5+ years of experience building microservices using Java. Strong experience with SQL/NoSQL and database technologies (MySQL, MongoDB, HBase, Cassandra, Oracle, PostgreSQL). Experience in systems design and distributed systems. Large-scale distributed services experience, including scalability and fault tolerance. Excellent organization, communication, and interpersonal skills.

About Walmart Global Tech: Flexible, hybrid work. Benefits. Belonging. Equal Opportunity Employer: Walmart, Inc., is an Equal Opportunities Employer - By Choice. We believe we are best equipped to help our associates, customers, and the communities we serve live better when we really know them. That means understanding, respecting, and valuing unique styles, experiences, identities, ideas, and opinions - while being inclusive of all people.

Minimum Qualifications: Option 1: Bachelor's degree in computer science, computer engineering, computer information systems, software engineering, or a related area, and 4 years of experience in software engineering or a related area. Option 2: 6 years of experience in software engineering or a related area.

Preferred Qualifications: Master's degree in Computer Science, Computer Engineering, Computer Information Systems, Software Engineering, or a related area, and 2 years of experience in software engineering or a related area.

Primary Location: BLOCK-1, PRESTIGE TECH PACIFIC PARK, SY NO. 38/1, OUTER RING ROAD, KADUBEESANAHALLI, India

Posted 1 month ago

Apply

8.0 - 13.0 years

9 - 13 Lacs

Bengaluru

Work from Office

As a Technical Specialist, you will develop and enhance Optical Network Management applications, leveraging experience in optical networks. You will work with fault supervision and performance monitoring. Collaborating in an agile environment, you will drive innovation, optimize efficiency, and explore UI technologies like React. Your role will focus on designing, coding, testing, and improving network management applications to enhance functionality and customer satisfaction.

You have: a Bachelor's degree and 8 years of experience (or equivalent) in optical networks; hands-on working experience with core Java, Spring, Kafka, ZooKeeper, Hibernate, and Python; working knowledge of RDBMS, PL/SQL, Linux, Docker, and database concepts; and exposure to UI technologies like React.

It would be nice if you also had: domain knowledge in OTN and photonic network management; strong communication skills and the ability to manage complex relationships.

You will: develop software for network management of Optics Division products, including Photonic/WDM, optical transport, SDH, and SONET; enable user control over network configuration through Optics Network Management applications; utilize core Java, Spring, Kafka, Python, and RDBMS to build high-performing solutions for network configuration; interface Optics Network Management applications with various network elements, providing a user-friendly graphical interface and implementing algorithms to simplify network management and reduce OPEX; deploy Optics Network Management applications globally, supporting hundreds of customer installations; and contribute to new development and application maintenance as part of the development team, focusing on enhancing functionality and customer satisfaction.

Posted 1 month ago

Apply

5.0 - 8.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Educational qualification: Bachelor of Engineering. Service line: Data & Analytics Unit.

Responsibilities: A day in the life of an Infoscion - as part of the Infosys consulting team, your primary role is to get to the heart of customer issues, diagnose problem areas, design innovative solutions, and facilitate deployment resulting in client delight. You will develop proposals by owning parts of the proposal document and giving inputs on solution design in your areas of expertise. You will plan configuration activities, configure the product per the design, conduct conference-room pilots, and assist in resolving queries related to requirements and solution design. You will conduct solution/product demonstrations and POC/Proof of Technology workshops, and prepare effort estimates that suit the customer's budgetary requirements and are in line with the organization's financial guidelines. You will actively lead small projects and contribute to unit-level and organizational initiatives with the objective of providing high-quality, value-adding solutions to customers. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional responsibilities: ability to develop value-creating strategies and models that enable clients to innovate, drive growth, and increase business profitability; good knowledge of software configuration management systems; awareness of the latest technologies and industry trends; logical thinking and problem-solving skills along with an ability to collaborate; understanding of financial processes and pricing models for various types of projects; ability to assess current processes, identify improvement areas, and suggest technology solutions; knowledge of one or two industry domains; client-interfacing skills; project and team management.

Technical and professional skills: Primary: Technology - Big Data - Data Processing - Spark. Preferred: Technology - Big Data - Data Processing - Spark.

Posted 1 month ago

Apply

3.0 - 5.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Educational qualification: Bachelor of Engineering. Service line: Data & Analytics Unit.

Responsibilities: A day in the life of an Infoscion - as part of the Infosys delivery team, your primary role is to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit-test plan reviews. You will lead and guide your teams toward developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You will be a key contributor to building efficient programs/systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Technical and professional skills: Primary: Technology - Big Data - Data Processing - MapReduce. Preferred: Technology - Big Data - Data Processing - MapReduce.
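The MapReduce model this listing names splits a job into a map phase, a shuffle that groups by key, and a reduce phase. A minimal plain-Python sketch of the classic word-count example (the phase functions and sample lines are illustrative, not any framework's API):

```python
# Minimal map → shuffle → reduce word count in plain Python, mirroring the
# Hadoop MapReduce model. Function names and inputs are illustrative.
from collections import defaultdict

def map_phase(lines):
    # Mapper: emit (word, 1) for every word in every input line.
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: combine each key's values into a final count.
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["Big data big pipelines", "data"])))
print(counts)  # → {'big': 2, 'data': 2, 'pipelines': 1}
```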

Posted 1 month ago

Apply

5.0 - 8.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Educational qualification: Bachelor of Engineering, BCA, BTech, MTech, MSc, MCA. Service line: Strategic Technology Group.

Responsibilities: A day in the life of an Infoscion - as part of the Infosys delivery team, your primary role is to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit-test plan reviews. You will lead and guide your teams toward developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You will be a key contributor to building efficient programs/systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional responsibilities: knowledge of more than one technology; basics of architecture and design fundamentals; knowledge of testing tools; knowledge of agile methodologies; understanding of project life cycle activities on development and maintenance projects; understanding of one or more estimation methodologies; knowledge of quality processes; basics of the business domain to understand business requirements; analytical abilities, strong technical skills, and good communication skills; good understanding of the technology and domain; ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles, and modeling methods; awareness of the latest technologies and trends; excellent problem-solving, analytical, and debugging skills.

Technical and professional skills: Primary: Technology - Functional Programming - Scala (Big Data). Preferred: Technology - Functional Programming - Scala.

Posted 1 month ago

Apply

2.0 - 7.0 years

5 - 9 Lacs

Pune

Work from Office

Educational qualification: Bachelor of Engineering. Service line: Data & Analytics Unit.

Responsibilities: A day in the life of an Infoscion - as part of the Infosys delivery team, your primary role is to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit-test plan reviews. You will lead and guide your teams toward developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You will be a key contributor to building efficient programs/systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Additional responsibilities: knowledge of more than one technology; basics of architecture and design fundamentals; knowledge of testing tools; knowledge of agile methodologies; understanding of project life cycle activities on development and maintenance projects; understanding of one or more estimation methodologies; knowledge of quality processes; basics of the business domain to understand business requirements; analytical abilities, strong technical skills, and good communication skills; good understanding of the technology and domain; ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles, and modeling methods; awareness of the latest technologies and trends; excellent problem-solving, analytical, and debugging skills.

Technical and professional skills: Primary: Hadoop, Hive, HDFS. Preferred: Technology - Big Data - Hadoop.

Posted 1 month ago

Apply

5.0 - 9.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Educational qualification: Bachelor of Engineering, BCA, BSc, MCA, MTech, MSc. Service line: Data & Analytics Unit.

Responsibilities:
1. 5-8 years of experience in Azure (hands-on experience with Azure Databricks and Azure Data Factory)
2. Good knowledge of SQL and PySpark
3. Knowledge of the medallion architecture pattern
4. Knowledge of Integration Runtime
5. Knowledge of the different ways of scheduling jobs via ADF (event/schedule, etc.)
6. Knowledge of AAS and cubes
7. Ability to create, manage, and optimize cube processing
8. Good communication skills
9. Experience leading a team

Additional responsibilities: good knowledge of software configuration management systems; strong business acumen, strategy, and cross-industry thought leadership; awareness of the latest technologies and industry trends; logical thinking and problem-solving skills along with an ability to collaborate; knowledge of two or three industry domains; understanding of financial processes and pricing models for various types of projects; client-interfacing skills; knowledge of SDLC and agile methodologies; project and team management.

Preferred skills: Technology - Big Data - Data Processing - Spark.
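The medallion architecture named above layers data as bronze (raw), silver (cleansed/typed), and gold (business aggregates). A toy plain-Python sketch of the flow (record shapes and field names are invented; in Databricks/ADF these would be Delta tables):

```python
# Toy medallion-pattern flow (bronze → silver → gold) in plain Python; in
# Databricks these layers would be Delta tables. Record shapes are illustrative.
bronze = [  # raw landing data, kept exactly as received
    {"order_id": "1", "amount": "100.0", "region": "west"},
    {"order_id": "2", "amount": "bad", "region": "west"},
    {"order_id": "3", "amount": "50.5", "region": "east"},
]

def to_silver(rows):
    # Cleansed layer: typed columns, rows with unparseable values dropped.
    out = []
    for r in rows:
        try:
            out.append({"order_id": int(r["order_id"]),
                        "amount": float(r["amount"]),
                        "region": r["region"]})
        except ValueError:
            continue
    return out

def to_gold(rows):
    # Curated layer: business-level aggregate (total amount per region).
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # → {'west': 100.0, 'east': 50.5}
```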

Posted 1 month ago

Apply

5.0 - 7.0 years

5 - 5 Lacs

Kochi, Hyderabad, Thiruvananthapuram

Work from Office

Key Responsibilities
Develop & Deliver: Build applications/features/components per design specifications, ensuring high-quality code that adheres to coding standards and project timelines.
Testing & Debugging: Write, review, and execute unit test cases; debug code; validate results with users; and support defect analysis and mitigation.
Technical Decision Making: Select optimal technical solutions, including reuse or creation of components, to enhance efficiency, cost-effectiveness, and quality.
Documentation & Configuration: Create and review design documents, templates, checklists, and configuration management plans; ensure team compliance.
Domain Expertise: Understand the customer's business domain deeply to advise developers and identify opportunities for value addition; obtain relevant certifications.
Project & Release Management: Manage delivery of modules/user stories, estimate efforts, coordinate releases, and ensure adherence to engineering processes and timelines.
Team Leadership: Set goals (FAST), provide feedback, mentor team members, maintain motivation, and manage people-related issues effectively.
Customer Interaction: Clarify requirements, present design options, conduct demos, and build customer confidence through timely, quality deliverables.
Technology Stack: Expertise in Big Data technologies (PySpark, Scala), plus preferred skills in AWS services (EMR, S3, Glue, Airflow, RDS, DynamoDB), CI/CD tools (Jenkins), relational and NoSQL databases, microservices, and containerization (Docker, Kubernetes).
Soft Skills & Collaboration: Communicate clearly, work under pressure, handle dependencies and risks, collaborate with cross-functional teams, and proactively seek/offer help.

Required skills: Big Data, PySpark, Scala.
Must-have skills: Big Data (PySpark + Java/Scala).
Preferred skills: AWS (EMR, S3, Glue, Airflow, RDS, DynamoDB, or similar); CI/CD (Jenkins or another); relational database experience (any); NoSQL database experience (any); microservices, domain services, API gateways, or similar; containers (Docker, K8s, or similar).

Posted 1 month ago

Apply

4.0 - 8.0 years

11 - 16 Lacs

Mumbai, Chennai, Bengaluru

Work from Office

Your Role: You should have worked extensively on Metadata, Rules, and Member Lists in HFM. VBScript knowledge is mandatory. You understand and communicate the consequences of changes made. You should have worked on monthly/quarterly/yearly validations; on ICP accounts, journals, and intercompany reports; and on data forms and data grids. You should be able to work on FDMEE mappings and be fluent with FDMEE. You should have worked on Financial Reporting Studio.

Your Profile: Performing UAT with the business on CRs. Able to resolve business queries about HFM (if any). Agile process knowledge will be an added advantage.

What you'll love about working here: You can shape your career with us. We offer a range of career paths and internal opportunities within the Capgemini group, and you will get personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage, and new-parent support via flexible work. You will have the opportunity to learn on one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications. We're committed to ensuring that people of all backgrounds feel encouraged and have a sense of belonging at Capgemini. You are valued for who you are, and you can bring your original self to work. Every Monday, kick off the week with a musical performance by our in-house band, The Rubber Band, and get to participate in internal sports events, yoga challenges, or marathons. At Capgemini, you can work on cutting-edge projects in tech and engineering with industry leaders or create solutions to overcome societal and environmental challenges.

Location: Bengaluru, Chennai, Mumbai, Pune

Posted 1 month ago

Apply

6.0 - 9.0 years

15 - 30 Lacs

Bengaluru

Work from Office

About the Team: As the Supplier Growth & Marketplace team, we own the technology platform that enables suppliers to onboard, create listings for their products, and start selling on Meesho's marketplace. As a Software Development Engineer IV, you'll help us in our mission to give Meesho the simplest seller experience across all e-commerce platforms. To enable this, we own 10+ microservices that interact with over 30 other services across Meesho's technology stack. They support 150K+ TPS, 5K+ messages per second in our Kafka queues, 300M+ records in our data cluster, and 80M+ indexed entries in our Elasticsearch engine, maintained with an uptime SLA of 99.995% and a low average API latency. Our focus now is to rearchitect some of our core services to support our explosive expansion. Our services include cutting-edge technologies such as Apache Spark, HBase, and clustered Redis. We continuously innovate on our platform by building and evangelizing new in-house frameworks, such as the micro-frontend architecture, within the Meesho tech community. We place special emphasis on the continuous growth of each team member, with regular 1:1s and open communication. We also know how to party as hard as we work! When we aren't building unparalleled tech solutions, you can find us debating the plot points of our favourite books and games, or gossiping over chai. So, if a day filled with building impactful solutions with a fun team sounds appealing to you, come join us!

About the Role: We are looking for an experienced Software Development Engineer IV (Backend) who will create prototypes and proofs of concept for iterative development in Java. Additionally, in this role you will be responsible for fluently converting design into code. The cherry on top? You'll be part of a team that will help you upskill and grow in your career. Safe to say, an exciting and rewarding journey awaits you in this role.

What you will do: Focus on scalability, performance, service robustness, and cost trade-offs. Continuously explore, improve, enhance, automate, and optimize systems and tools to best meet evolving business and market needs. Pay attention to detail and think abstractly. Collaborate with teams to develop and support the smooth 24x7 operation of our service. Create prototypes and proofs of concept for iterative development. Take complete ownership of projects and their development cycle.

What you will need: BTech, preferably from a premier institution. 6-9 years of relevant experience as a Software Development Engineer. Strong knowledge of databases such as MySQL, NoSQL stores, SQL Server, Oracle, or PostgreSQL. Experience in Java and web technologies. Experience in scripting languages like Python, PHP, etc. Hands-on experience with systems that are asynchronous, RESTful, and demand concurrency. Knowledge of best practices for all stages of software development, including coding standards, code reviews, testing, and deployment.
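The asynchronous, concurrency-heavy services this listing describes typically fan out to several downstream services at once and gather the results. A small Python `asyncio` sketch (service names and latencies are invented for illustration; the role itself is Java, where the analogue would be `CompletableFuture` or reactive clients):

```python
# Sketch of concurrent fan-out to downstream services, the pattern behind
# asynchronous RESTful backends. Service names and delays are illustrative.
import asyncio

async def fetch(service: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stands in for a non-blocking HTTP call
    return f"{service}:ok"

async def main() -> list:
    # Fan out to several downstream services concurrently; total time is roughly
    # the slowest call, not the sum of all calls.
    return await asyncio.gather(
        fetch("listings", 0.02),
        fetch("pricing", 0.01),
        fetch("inventory", 0.03),
    )

results = asyncio.run(main())
print(results)  # → ['listings:ok', 'pricing:ok', 'inventory:ok']
```

`asyncio.gather` preserves the order of its arguments regardless of which call finishes first.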

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Bengaluru

Work from Office

About the Team: When 5% of Indian households shop with us, it's important to build resilient systems to manage millions of orders every day. We've done this with zero downtime! Sounds impossible? Well, that's the kind of engineering muscle that has helped Meesho become the e-commerce giant it is today. We value speed over perfection and see failures as opportunities to become better. We've taken steps to inculcate a strong founder's mindset across our engineering teams, helping us grow and move fast. We place special emphasis on the continuous growth of each team member, with regular 1:1s and open communication. As a Database Engineer II, you will be part of a team of self-starters who thrive on teamwork and constructive feedback. We know how to party as hard as we work! If we aren't building unparalleled tech solutions, you can find us debating the plot points of our favorite books and games or gossiping over chai. So, if a day filled with building impactful solutions with a fun team sounds appealing to you, join us.

About the Role: As a Database Engineer II, you'll proactively establish and implement best practices for NoSQL database engineering. You'll have opportunities to work on different NoSQL technologies at large scale, and you'll work closely with other engineering teams to establish seamless collaboration within the organization. Proficiency in emerging technologies and the ability to work successfully with a team are key to success in this role.

What you will do: Manage, maintain, and monitor a multitude of relational/NoSQL database clusters, meeting SLA obligations. Manage both in-house and SaaS solutions in the public cloud (or third-party). Diagnose, mitigate, and communicate database-related issues to relevant stakeholders. Design and implement best practices for planning, provisioning, tuning, upgrading, and decommissioning database clusters. Understand the cost aspects of such tools/software and implement cost-control mechanisms with continuous improvement. Advise and support product, engineering, and operations teams. Maintain general backup/recovery/DR of data solutions. Work with the engineering and operations teams to automate new approaches for scalability, reliability, and performance. Perform R&D on new features and innovative solutions. Participate in on-call rotations.

What you will need: 5+ years of experience provisioning and managing relational/NoSQL databases. Proficiency in two or more of: MySQL, PostgreSQL, Bigtable, Elasticsearch, MongoDB, Redis, ScyllaDB. Proficiency in Python. Experience with deployment orchestration, automation, and security configuration management (Jenkins, Terraform, Ansible). Hands-on experience with Amazon Web Services (AWS) or Google Cloud Platform (GCP). Comfort working in Linux/Unix environments. Knowledge of the TCP/IP stack, load balancers, and networking. Proven ability to drive projects to completion. A degree in computer science, software engineering, information technology, or a related field will be an advantage.

Posted 1 month ago

Apply

5.0 - 10.0 years

22 - 27 Lacs

Pune, Bengaluru

Work from Office

Build ETL jobs using Fivetran and dbt for our internal projects and for customers that use various platforms such as Azure, Salesforce and AWS technologies. Build out data-lineage artifacts to ensure all current and future systems are properly documented.

Required Candidate profile
Experience with strong proficiency in SQL query/development skills. Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks. Experience in the healthcare industry with PHI/PII.
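The "quality checks" in ETL work like this are commonly expressed as dbt's built-in generic tests such as `not_null` and `unique`. The same idea, sketched in plain Python for illustration (the column names are hypothetical):

```python
def check_not_null(rows, column):
    """Rows where the column is missing or None (the idea behind dbt's not_null test)."""
    return [r for r in rows if r.get(column) is None]

def check_unique(rows, column):
    """Column values that occur more than once (the idea behind dbt's unique test)."""
    seen, dupes = set(), set()
    for r in rows:
        v = r.get(column)
        if v in seen:
            dupes.add(v)
        seen.add(v)
    return sorted(dupes)

rows = [
    {"patient_id": "p1", "email": "a@x.com"},
    {"patient_id": "p2", "email": None},       # would fail not_null
    {"patient_id": "p1", "email": "b@x.com"},  # would fail unique
]
print(len(check_not_null(rows, "email")))  # → 1
print(check_unique(rows, "patient_id"))    # → ['p1']
```

In dbt itself these checks live declaratively in a model's YAML schema file rather than in code, which is what makes them cheap to apply to every pipeline.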

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad, Bengaluru

Work from Office

Job Summary
Synechron is seeking an experienced Big Data Developer with strong expertise in Spark, Scala, and Python to lead and contribute to large-scale data projects. The role involves designing, developing, and implementing robust data solutions that leverage emerging technologies to enhance business insights and operational efficiency. The successful candidate will play a key role in driving innovation, mentoring team members, and ensuring the delivery of high-quality data products aligned with organizational objectives.

Software Requirements
Required: Apache Spark (latest stable version); Scala (version 2.12 or higher); Python (version 3.6 or higher); Big Data tools and frameworks supporting Spark and Scala.
Preferred: cloud platforms such as AWS, Azure, or GCP for data deployment; data processing or orchestration tools like Kafka, Hadoop, or Airflow; data visualization tools for data insights.

Overall Responsibilities
Lead the development and implementation of data pipelines and solutions using Spark, Scala, and Python. Collaborate with business and technology teams to understand data requirements and translate them into scalable solutions. Mentor and guide junior team members on best practices in big data development. Evaluate and recommend new technologies and tools to improve data processing and quality. Stay informed about industry trends and emerging technologies relevant to big data and analytics. Ensure timely delivery of data projects with high standards of quality, performance, and security. Lead technical reviews and code reviews, and provide input to improve overall development standards and practices. Contribute to architecture design discussions and assist in establishing data governance standards.

Technical Skills (by category)
Programming languages (essential): Spark (Scala), Python; (preferred): knowledge of Java or other JVM languages.
Data management and databases: experience with distributed data storage solutions (HDFS, S3, etc.); familiarity with NoSQL databases (e.g., Cassandra, HBase) and relational databases for data integration.
Cloud technologies (preferred): cloud platforms (AWS, Azure, GCP) for data processing, storage, and deployment.
Frameworks and libraries: Spark MLlib, Spark SQL, Spark Streaming; data processing libraries in Python (pandas, PySpark).
Development tools and methodologies: version control (Git, Bitbucket); Agile methodologies (Scrum, Kanban); data pipeline orchestration tools (Apache Airflow, NiFi).
Security and compliance: understanding of data security best practices and data privacy regulations.

Experience Requirements
5 to 10 years of hands-on experience in big data development and architecture. Proven experience in designing and developing large-scale data pipelines using Spark, Scala, and Python. Demonstrated ability to lead technical projects and mentor team members. Experience working with cross-functional teams including data analysts, data scientists, and business stakeholders. Track record of delivering scalable, efficient, and secure data solutions in complex environments.

Day-to-Day Activities
Develop, test, and optimize scalable data pipelines using Spark, Scala, and Python. Collaborate with data engineers, analysts, and stakeholders to gather requirements and translate them into technical solutions. Lead code reviews, mentor junior team members, and enforce coding standards. Participate in architecture design and recommend best practices in big data development. Monitor data workflow performance and troubleshoot issues to ensure data quality and reliability. Stay updated with industry trends and evaluate new tools and frameworks for potential implementation. Document technical designs, data flows, and implementation procedures. Contribute to continuous improvement initiatives to optimize data processing workflows.

Qualifications
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Relevant certifications in cloud platforms, big data, or programming languages are advantageous. Continuous learning on innovative data technologies and frameworks.

Professional Competencies
Strong analytical and problem-solving skills with a focus on scalable data solutions. Leadership qualities with the ability to guide and mentor team members. Excellent communication skills to articulate technical concepts to diverse audiences. Ability to work collaboratively in cross-functional teams and fast-paced environments. Adaptability to evolving technologies and industry trends. Strong organizational skills for managing multiple projects and priorities.
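The core of the Spark pipelines this role describes is typically a per-record parse-and-transform step followed by a filter and an aggregation. That logic is sketched below in plain Python for clarity; in PySpark the same function would run inside `rdd.map`/`filter` or be expressed as DataFrame operations. The field names and record layout are hypothetical:

```python
def parse_record(line):
    """Parse a CSV-style log line into a typed record; return None if malformed."""
    parts = line.strip().split(",")
    if len(parts) != 3:
        return None
    user_id, event, amount = parts
    try:
        return {"user_id": user_id, "event": event, "amount": float(amount)}
    except ValueError:
        return None  # non-numeric amount: drop the record

raw = ["u1,purchase,19.99", "u2,refund,-5.00", "garbage line"]
# Transform, then filter out malformed records — the map/filter shape a Spark job uses.
records = [r for r in (parse_record(line) for line in raw) if r is not None]
total = sum(r["amount"] for r in records)
print(len(records), round(total, 2))  # → 2 14.99
```

The value of Spark is that the identical transform runs unchanged whether `raw` holds three lines or three billion, because the engine handles partitioning and distribution.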

Posted 1 month ago

Apply

5.0 - 10.0 years

16 - 20 Lacs

Pune

Work from Office

Job Title: Senior / Lead Data Engineer
Company: Synechron Technologies
Locations: Pune or Chennai
Experience: 5 to 12 years

Synechron Technologies is seeking an accomplished Senior or Lead Data Engineer with expertise in Java and Big Data technologies. The ideal candidate will have a strong background in Java Spark, with extensive experience working with big data frameworks such as Spark, Hadoop, HBase, Couchbase, and Phoenix. You will lead the design and development of scalable data solutions, ensuring efficient data processing and deployment in a modern technology environment.

Key Responsibilities: Lead the development and optimization of large-scale data pipelines using Java and Spark. Design, implement, and maintain data infrastructure leveraging Spark, Hadoop, HBase, Couchbase, and Phoenix. Collaborate with cross-functional teams to gather requirements and develop robust data solutions. Lead deployment automation and management using CI/CD tools including Jenkins, Bitbucket, Git, Docker, and OpenShift. Ensure the performance, security, and reliability of data processing systems. Provide technical guidance to team members and participate in code reviews. Stay updated on emerging technologies and leverage best practices in data engineering.

Qualifications & Skills: 5 to 14 years of experience as a Data Engineer or in a similar role. Strong expertise in Java programming and Apache Spark. Proven experience with Big Data technologies: Spark, Hadoop, HBase, Couchbase, and Phoenix. Hands-on experience with CI/CD tools: Jenkins, Bitbucket, Git, Docker, OpenShift. Solid understanding of data modeling, ETL workflows, and data architecture. Excellent problem-solving, communication, and leadership skills.

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative "Same Difference" is committed to fostering an inclusive culture promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.

Candidate Application Notice

Posted 1 month ago

Apply

2.0 - 6.0 years

6 - 10 Lacs

Nagpur

Work from Office

Primine Software Private Limited is looking for a BigData Engineer to join our dynamic team and embark on a rewarding career journey. Develop and maintain big data solutions. Collaborate with data teams and stakeholders. Conduct data analysis and processing. Ensure compliance with big data standards and best practices. Prepare and maintain big data documentation. Stay updated with big data trends and technologies.

Posted 1 month ago

Apply

3.0 - 6.0 years

8 - 13 Lacs

Hyderabad

Work from Office

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.

In this role, you will:
Design and Develop ETL Processes: Lead the design and implementation of ETL processes using all kinds of batch/streaming tools to extract, transform, and load data from various sources into GCP. Collaborate with stakeholders to gather requirements and ensure that ETL solutions meet business needs.
Data Pipeline Optimization: Optimize data pipelines for performance, scalability, and reliability, ensuring efficient data processing workflows. Monitor and troubleshoot ETL processes, proactively addressing issues and bottlenecks.
Data Integration and Management: Integrate data from diverse sources, including databases, APIs, and flat files, ensuring data quality and consistency. Manage and maintain data storage solutions in GCP (e.g., BigQuery, Cloud Storage) to support analytics and reporting.
GCP Dataflow Development: Write Apache Beam-based Dataflow jobs for data extraction, transformation, and analysis, ensuring optimal performance and accuracy. Collaborate with data analysts and data scientists to prepare data for analysis and reporting.
Automation and Monitoring: Implement automation for ETL workflows using tools like Apache Airflow or Cloud Composer, enhancing efficiency and reducing manual intervention. Set up monitoring and alerting mechanisms to ensure the health of data pipelines and compliance with SLAs.
Data Governance and Security: Apply best practices for data governance, ensuring compliance with industry regulations (e.g., GDPR, HIPAA) and internal policies. Collaborate with security teams to implement data protection measures and address vulnerabilities.
Documentation and Knowledge Sharing: Document ETL processes, data models, and architecture to facilitate knowledge sharing and onboarding of new team members. Conduct training sessions and workshops to share expertise and promote best practices within the team.

Requirements
To be successful in this role, you should meet the following requirements:
Experience: Minimum of 5 years of industry experience in data engineering or ETL development, with a strong focus on DataStage and GCP. Proven experience in designing and managing ETL solutions, including data modeling, data warehousing, and SQL development.
Technical Skills: Strong knowledge of GCP services (e.g., BigQuery, Dataflow, Cloud Storage, Pub/Sub) and their application in data engineering. Experience with cloud-based solutions, especially in GCP; a cloud-certified candidate is preferred. Experience and knowledge of big data processing in batch and streaming modes; proficiency in big data ecosystems, e.g. Hadoop, HBase, Hive, MapReduce, Kafka, Flink, Spark, etc. Familiarity with Java and Python for data manipulation on cloud/big data platforms.
Analytical Skills: Strong problem-solving skills with keen attention to detail. Ability to analyze complex data sets and derive meaningful insights.
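An Apache Beam Dataflow job of the kind described above chains element-wise steps into a pipeline. The element logic below is plain Python for illustration; in Beam it would sit inside a `beam.DoFn` or a `beam.FlatMap`, where a processing function yields zero or more outputs per input. The event schema (a Pub/Sub-style JSON message with `user_id` and `ts` fields) is an assumption:

```python
import json

def parse_event(raw):
    """Parse one JSON message; return a zero-or-one-element list,
    mirroring how a Beam DoFn yields nothing for invalid input."""
    try:
        event = json.loads(raw)
    except json.JSONDecodeError:
        return []                     # malformed payload: emit nothing
    if "user_id" not in event or "ts" not in event:
        return []                     # missing required fields: emit nothing
    return [event]

messages = ['{"user_id": "u1", "ts": 1700000000}', "not json", '{"ts": 1}']
# Flattening the per-element outputs is what beam.FlatMap(parse_event) would do.
events = [e for m in messages for e in parse_event(m)]
print(len(events))  # → 1
```

Keeping the parse/validate logic in a small pure function like this also makes it unit-testable outside the pipeline, which is standard practice for Dataflow jobs.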

Posted 1 month ago

Apply

6.0 - 9.0 years

5 - 9 Lacs

Hyderabad

Work from Office

We are looking for a highly skilled Data Engineer with 6 to 9 years of experience to join our team at BlackBaud, located in [location to be specified]. The ideal candidate will have a strong background in data engineering and excellent problem-solving skills.

Roles and Responsibilities
Design, develop, and implement data pipelines and architectures to support business intelligence and analytics. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain large-scale data systems, ensuring scalability, reliability, and performance. Troubleshoot and resolve complex technical issues related to data engineering projects. Participate in code reviews and contribute to the improvement of overall code quality. Stay up-to-date with industry trends and emerging technologies in data engineering.

Job Requirements
Strong understanding of data modeling, database design, and data warehousing concepts. Experience with big data technologies such as Hadoop, Spark, and NoSQL databases. Excellent programming skills in languages like Java, Python, or Scala. Strong analytical and problem-solving skills, with attention to detail and the ability to work under pressure. Good communication and collaboration skills, with the ability to work effectively in a team environment. Ability to adapt to changing priorities and deadlines in a fast-paced IT Services & Consulting environment.

Posted 1 month ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Bengaluru

Work from Office

What this job involves: JLL, an international real estate management company, is seeking a Data Engineer to join our JLL Technologies team. We are seeking candidates who are self-starters, able to work in a diverse and fast-paced environment, to join our Enterprise Data team. We are looking for a candidate responsible for designing and developing data solutions that are strategic for the business, using the latest technologies: Azure Databricks, Python, PySpark, Spark SQL, Azure Functions, Delta Lake, and Azure DevOps CI/CD.

Responsibilities
Design, architect, and develop solutions leveraging cloud big data technology to ingest, process and analyze large, disparate data sets to exceed business requirements. Design and develop data management and data persistence solutions for application use cases leveraging relational and non-relational databases, enhancing our data processing capabilities. Develop POCs to influence platform architects, product managers and software engineers to validate solution proposals and migrate. Develop data lake solutions to store structured and unstructured data from internal and external sources, and provide technical guidance to help migrate colleagues to a modern technology platform. Contribute and adhere to CI/CD processes and development best practices, and strengthen the discipline in the Data Engineering org. Develop systems that ingest, cleanse and normalize diverse datasets, develop data pipelines from various internal and external sources, and build structure for previously unstructured data. Using PySpark and Spark SQL, extract, manipulate, and transform data from various sources, such as databases, data lakes, APIs, and files, to prepare it for analysis and modeling. Build and optimize ETL workflows using Azure Databricks and PySpark, including developing efficient data processing pipelines, data validation, error handling, and performance tuning. Perform unit testing, system integration testing and regression testing, and assist with user acceptance testing. Articulate business requirements in a technical solution that can be designed and engineered. Consult with the business to develop documentation and communication materials to ensure accurate usage and interpretation of JLL data. Implement data security best practices, including data encryption, access controls, and compliance with data protection regulations. Ensure data privacy, confidentiality, and integrity throughout the data engineering processes. Perform the data analysis required to troubleshoot data-related issues and assist in their resolution.

Experience & Education
Minimum of 4 years of experience as a data developer using Python, PySpark, Spark SQL, ETL knowledge, SQL Server, and ETL concepts. Bachelor's degree in Information Science, Computer Science, Mathematics, Statistics or a quantitative discipline in science, business, or social science. Experience with the Azure cloud platform, Databricks, and Azure Storage. Effective written and verbal communication skills, including technical writing. Excellent technical, analytical and organizational skills.

Technical Skills & Competencies
Experience handling unstructured and semi-structured data, working in a data lake environment, leveraging data streaming and developing data pipelines driven by events/queues. Hands-on experience and knowledge of real-time/near-real-time processing, and ready to code. Hands-on experience in PySpark, Databricks, and Spark SQL. Knowledge of JSON, Parquet and other file formats, and the ability to work effectively with them. NoSQL database knowledge, e.g. HBase, MongoDB, Cosmos DB, etc. Preferred: cloud experience on Azure or AWS; Python/Spark, Spark Streaming, Azure SQL Server, Cosmos DB/MongoDB, Azure Event Hubs, Azure Data Lake Storage, Azure Search, etc.
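The event/queue-driven streaming work mentioned above usually reduces to windowed aggregation over timestamped events. A minimal sketch of a tumbling-window count in plain Python; in Spark Structured Streaming the equivalent would be a `groupBy(window(...))`, and the event tuple shape here is an assumption:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_s=60):
    """Count events per fixed-size (tumbling) window keyed by window start time."""
    counts = defaultdict(int)
    for ts, _payload in events:
        window_start = ts - ts % window_s   # align timestamp to window boundary
        counts[window_start] += 1
    return dict(counts)

events = [(0, "a"), (59, "b"), (60, "c"), (125, "d")]
print(tumbling_window_counts(events))  # → {0: 2, 60: 1, 120: 1}
```

A real streaming engine adds what this sketch omits: late-event handling via watermarks, state checkpointing, and incremental emission of results as windows close.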
Team player: a reliable, self-motivated, and self-disciplined individual capable of executing multiple projects simultaneously within a fast-paced environment, working with cross-functional teams. You'll join an entrepreneurial, inclusive culture. One where we succeed together, across the desk and around the globe. Where like-minded people work naturally together to achieve great things. Our Total Rewards program reflects our commitment to helping you achieve your ambitions in career, recognition, well-being, benefits and pay. Join us to develop your strengths and enjoy a fulfilling career full of varied experiences. Keep those ambitions in sight and imagine where JLL can take you.

Posted 1 month ago

Apply

5.0 - 10.0 years

22 - 27 Lacs

Chennai, Mumbai (All Areas)

Work from Office

Build ETL jobs using Fivetran and dbt for our internal projects and for customers that use various platforms such as Azure, Salesforce and AWS technologies. Build out data-lineage artifacts to ensure all current and future systems are properly documented.

Required Candidate profile
Experience with strong proficiency in SQL query/development skills. Develop ETL routines that manipulate and transfer large volumes of data and perform quality checks. Experience in the healthcare industry with PHI/PII.

Posted 1 month ago

Apply

5.0 - 10.0 years

16 - 20 Lacs

Bengaluru

Work from Office

Authorize.net makes it simple to accept electronic and credit card payments in person, online or over the phone. We've been working with merchants and small businesses since 1996. As a leading payment gateway, Authorize.net is trusted by more than 445,000 merchants, handling more than 1 billion transactions and USD 149 billion in payments every year. As a Senior Staff Software Engineer at Authorize.net (a Visa solution), you will be a hands-on technical leader, guiding the development of major new features by translating complex business problems into technical solutions that resonate with our merchants and partners. You will also drive cross-team projects that standardize our approach to API development and data schemas, ensuring consistent implementation of best practices across the organization. Beyond features, you will also work on modernization, working across multiple teams to modernize our systems and deliver innovative online payment solutions. You will be instrumental in containerizing applications, splitting monolithic codebases into microservices, and migrating on-premises workloads to the cloud. In addition, you will enable process improvements through robust DevOps practices, incorporating comprehensive release-management strategies and optimized CI/CD pipelines. Collaborating with product managers, tech leads, and engineering teams, you will define technology roadmaps, communicate architectural decisions, and mentor engineers in advanced technical approaches. This position requires a solid track record of delivering large-scale, reliable, and secure software solutions. While we prefer C# expertise, knowledge of other modern programming languages is also welcome.

Basic Qualifications
15+ years of relevant work experience with a Bachelor's degree or with an advanced degree. Advanced-level coding skills in C#, .NET Core, ASP.NET. Java experience is a plus. Solid experience

Posted 1 month ago

Apply