
548 HBase Jobs - Page 2

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

5.0 - 8.0 years

5 - 8 Lacs

Bengaluru

Work from Office

Skills desired: strong SQL (complex multi-level joins); Python (FastAPI or Flask framework); PySpark; commitment to working overlapping hours; GCP knowledge (BigQuery, Dataproc, and Dataflow); Amex experience preferred (not mandatory); Power BI preferred (not mandatory). Keywords: Flask, PySpark, Python, SQL.
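A role like this typically pairs SQL joins with a thin Python API layer on GCP. Purely as an illustrative sketch (not part of the listing; the project, dataset, table, and column names are hypothetical), a minimal FastAPI endpoint serving a multi-table BigQuery join might look like this:

```python
# Illustrative sketch only: a FastAPI endpoint returning the result of a
# multi-table BigQuery join. Project/dataset/column names are hypothetical.
from fastapi import FastAPI
from google.cloud import bigquery

app = FastAPI()
bq = bigquery.Client()  # assumes GCP credentials are already configured

@app.get("/customer-orders/{customer_id}")
def customer_orders(customer_id: str):
    sql = """
        SELECT c.customer_id, c.name, o.order_id, o.total
        FROM `my_project.sales.customers` AS c
        JOIN `my_project.sales.orders`    AS o ON o.customer_id = c.customer_id
        WHERE c.customer_id = @cid
    """
    job = bq.query(
        sql,
        job_config=bigquery.QueryJobConfig(
            query_parameters=[
                bigquery.ScalarQueryParameter("cid", "STRING", customer_id)
            ]
        ),
    )
    # Each BigQuery Row converts cleanly to a dict for the JSON response.
    return [dict(row) for row in job.result()]
```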

Posted 1 week ago

Apply

2.0 - 7.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Mechatronics Big Data Scientist/Developer. Department: Research & Development. Reporting to: General Manager.

Key responsibilities: Selecting features and building and optimizing classifiers using machine learning techniques. Data mining using state-of-the-art methods. Extending the company's data with third-party sources of information when needed. Enhancing data collection procedures to include information relevant for building analytic systems. Processing, cleansing, and verifying the integrity of data used for analysis. Performing ad-hoc analysis and presenting results clearly. Creating automated anomaly detection systems and continuously tracking their performance.

Behavioural competencies: Data-oriented person.

Skills: Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, decision forests, etc. Experience with common data science toolkits such as R, Weka, NumPy, or MATLAB, depending on specific project requirements; excellence in at least one of NumPy or R is highly desirable. Experience with data visualisation tools such as D3.js, ggplot, etc. Proficiency in query languages such as SQL, Hive, Pig, and NiFi. Experience with NoSQL databases such as InfluxDB, MongoDB, Cassandra, and HBase. Good applied statistics skills (distributions, statistical testing, regression, etc.). Good scripting/programming skills: PHP, Slim, SQL, Laravel; Hadoop, HDFS, NiFi. Other professional training: any certification related to Big Data.

Essential qualification: M.Tech/MS in Mechatronics, Computer Science, or equivalent. Experience: 2+ years of proficient experience working on and developing SDKs for any platform. Location: Bengaluru, Karnataka. Bharat Fritz Werner Ltd. (BFW) is a pioneering name in machine tools, manufacturing solutions, and technological innovation.

Posted 1 week ago

Apply

4.0 - 9.0 years

10 - 20 Lacs

Coimbatore

Work from Office

Position Name: Data Engineer. Location: Coimbatore (hybrid, 3 days per week). Work shift timing: 1.30 pm to 10.30 pm (IST). Mandatory skills: Scala, Spark, Python, Databricks. Good to have: Java and Hadoop.

The Role: Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights. Constructing infrastructure for efficient ETL processes from various sources and storage systems. Leading the implementation of algorithms and prototypes to transform raw data into useful information. Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations. Creating innovative data validation methods and data analysis tools. Ensuring compliance with data governance and security policies. Interpreting data trends and patterns to establish operational alerts. Developing analytical tools, programs, and reporting mechanisms. Conducting complex data analysis and presenting results effectively. Preparing data for prescriptive and predictive modeling. Continuously exploring opportunities to enhance data quality and reliability. Applying strong programming and problem-solving skills to develop scalable solutions.

Requirements: Experience with Big Data technologies (Hadoop, Spark, NiFi, Impala). Hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, distributed data pipelines. High proficiency in Scala/Java and Spark for applied large-scale data processing. Expertise with big data technologies, including Spark, Data Lake, and Hive. Solid understanding of batch and streaming data processing techniques. Proficient knowledge of the data lifecycle management process, including data collection, access, use, storage, transfer, and deletion. Expert-level ability to write complex, optimized SQL queries across extensive data volumes. Experience with HDFS, NiFi, and Kafka. Experience with Apache Ozone, Delta tables, Databricks, Axon (Kafka), Spring Batch, and Oracle DB. Familiarity with Agile methodologies. Obsession for service observability, instrumentation, monitoring, and alerting. Knowledge or experience in architectural best practices for building data lakes.

Interested candidates, share your resume at Neesha1@damcogroup.com along with the following details: total experience; relevant experience in Scala & Spark; current CTC; expected CTC; notice period; current location.
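For a flavour of the pipeline work this listing describes, here is a minimal PySpark batch-ETL sketch (the bucket paths and column names are hypothetical placeholders, not from the listing):

```python
# Minimal batch ETL sketch: read raw CSV, apply basic validation, write
# partitioned Parquet. Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = (spark.read
       .option("header", True)
       .option("inferSchema", True)
       .csv("s3://raw-bucket/orders/"))

clean = (raw
         .dropDuplicates(["order_id"])                      # remove repeats
         .filter(F.col("amount") > 0)                       # simple validation rule
         .withColumn("order_date", F.to_date("order_ts")))  # normalize timestamp

(clean.write
 .mode("overwrite")
 .partitionBy("order_date")      # partition for efficient downstream reads
 .parquet("s3://curated-bucket/orders/"))
```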

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

hyderabad, telangana

On-site

As a Big Data Architect working on a contract basis for a renowned client, you will be responsible for utilizing your expertise in technologies such as Hadoop, NoSQL, Spark, PySpark, Spark Streaming, Elastic Search, Kafka, Scala/Java, and ETL platforms, along with HBase, Cassandra, and MongoDB. Your primary role will involve ensuring the completion of surveys and addressing any queries promptly. You will play a crucial part in conceptualizing action plans by engaging with clients, Delivery Managers, vertical delivery heads, and service delivery heads. Your responsibilities will also include driving account-wise tracking of action plans aimed at enhancing Customer Satisfaction (CSAT) across various projects. You will conduct quarterly pulse surveys for selected accounts or projects to ensure periodic check-ins and feedback collection. Furthermore, you will support the Account Leadership teams in tracking and managing client escalations effectively to ensure timely closure. With over 10 years of experience and a solid educational background (any graduation), you will contribute to the success of projects in a hybrid work mode. Immediate availability to join is essential for this role based in Hyderabad.

Posted 1 week ago

Apply

0.0 - 3.0 years

20 - 25 Lacs

Bengaluru

Work from Office

YOUR IMPACT: Are you passionate about developing mission-critical, high-quality software solutions, using cutting-edge technology, in a dynamic environment? We are Compliance Engineering, a global team of more than 300 engineers and scientists who work on the most complex, mission-critical problems. We: build and operate a suite of platforms and applications that prevent, detect, and mitigate regulatory and reputational risk across the firm; have access to the latest technology and to massive amounts of structured and unstructured data; leverage modern frameworks to build responsive and intuitive UX/UI and Big Data applications. Compliance Engineering is looking to fill several big data software engineering roles. Your first deliverable and success criteria will be the deployment, in 2025, of new complex data pipelines and surveillance models to detect inappropriate trading activity.

HOW YOU WILL FULFILL YOUR POTENTIAL: As a member of our team, you will: partner globally with sponsors, users, and engineering colleagues across multiple divisions to create end-to-end solutions; learn from experts; leverage various technologies including Java, Spark, Hadoop, Flink, MapReduce, HBase, JSON, Protobuf, Presto, Elastic Search, Kafka, and Kubernetes; innovate and incubate new ideas; work on a broad range of problems, including negotiating data contracts, capturing data quality metrics, processing large-scale data, and building surveillance detection models; be involved in the full life cycle: defining, designing, implementing, testing, deploying, and maintaining software systems across our products.

QUALIFICATIONS: A successful candidate will possess the following attributes: A Bachelor's or Master's degree in Computer Science, Computer Engineering, or a similar field of study. Expertise in Java, as well as proficiency with databases and data manipulation. Experience in end-to-end solutions, automated testing, and SDLC concepts. The ability (and tenacity) to clearly express ideas and arguments in meetings and on paper. Experience in some of the following is desired and can set you apart from other candidates: developing in large-scale systems, such as MapReduce on Hadoop/HBase; data analysis using tools such as SQL, Spark SQL, and Zeppelin/Jupyter; API design, such as to create interconnected services; knowledge of the financial industry and compliance or risk functions; ability to influence stakeholders.
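As a hedged illustration of the "capturing data quality metrics" work this listing mentions (a sketch only; the table and column names are invented, not the firm's code), a Spark SQL pass might compute null and duplicate counts like this:

```python
# Illustrative data-quality metrics in Spark SQL; names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dq-metrics").getOrCreate()
spark.read.parquet("/data/trades").createOrReplaceTempView("trades")

metrics = spark.sql("""
    SELECT COUNT(*)                                          AS row_count,
           SUM(CASE WHEN trader_id IS NULL THEN 1 ELSE 0 END) AS null_trader_ids,
           COUNT(*) - COUNT(DISTINCT trade_id)               AS duplicate_trade_ids
    FROM trades
""")
metrics.show()  # a real pipeline would persist these metrics for alerting
```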

Posted 1 week ago

Apply

0.0 - 3.0 years

20 - 25 Lacs

Hyderabad

Work from Office

YOUR IMPACT: Are you passionate about developing mission-critical, high-quality software solutions, using cutting-edge technology, in a dynamic environment? We are Compliance Engineering, a global team of more than 300 engineers and scientists who work on the most complex, mission-critical problems. We: build and operate a suite of platforms and applications that prevent, detect, and mitigate regulatory and reputational risk across the firm; have access to the latest technology and to massive amounts of structured and unstructured data; leverage modern frameworks to build responsive and intuitive UX/UI and Big Data applications. Compliance Engineering is looking to fill several big data software engineering roles. Your first deliverable and success criteria will be the deployment, in 2025, of new complex data pipelines and surveillance models to detect inappropriate trading activity.

HOW YOU WILL FULFILL YOUR POTENTIAL: As a member of our team, you will: partner globally with sponsors, users, and engineering colleagues across multiple divisions to create end-to-end solutions; learn from experts; leverage various technologies including Java, Spark, Hadoop, Flink, MapReduce, HBase, JSON, Protobuf, Presto, Elastic Search, Kafka, and Kubernetes; innovate and incubate new ideas; work on a broad range of problems, including negotiating data contracts, capturing data quality metrics, processing large-scale data, and building surveillance detection models; be involved in the full life cycle: defining, designing, implementing, testing, deploying, and maintaining software systems across our products.

QUALIFICATIONS: A successful candidate will possess the following attributes: A Bachelor's or Master's degree in Computer Science, Computer Engineering, or a similar field of study. Expertise in Java, as well as proficiency with databases and data manipulation. Experience in end-to-end solutions, automated testing, and SDLC concepts. The ability (and tenacity) to clearly express ideas and arguments in meetings and on paper. Experience in some of the following is desired and can set you apart from other candidates: developing in large-scale systems, such as MapReduce on Hadoop/HBase; data analysis using tools such as SQL, Spark SQL, and Zeppelin/Jupyter; API design, such as to create interconnected services; knowledge of the financial industry and compliance or risk functions; ability to influence stakeholders.

Posted 1 week ago

Apply

5.0 - 8.0 years

27 - 42 Lacs

Bengaluru

Work from Office

Job Summary: As a Software Engineer at NetApp India’s R&D division, you will be responsible for the design, development, and validation of software for Big Data Engineering across both cloud and on-premises environments. You will be part of a highly skilled technical team named NetApp Active IQ. The Active IQ DataHub platform processes over 10 trillion data points per month that feed a multi-petabyte data lake. The platform is built using Kafka, a serverless platform running on Kubernetes, Spark, and various NoSQL databases. This platform enables the use of advanced AI and ML techniques to uncover opportunities to proactively protect and optimize NetApp storage, and then provides the insights and actions to make it happen. We call this “actionable intelligence”.

Job Requirements: Design and build our Big Data Platform, and understand scale, performance, and fault-tolerance. • Interact with Active IQ engineering teams across geographies to leverage expertise and contribute to the tech community. • Identify the right tools to deliver product features by performing research, POCs, and interacting with various open-source forums. • Work on technologies related to NoSQL, SQL, and in-memory databases. • Conduct code reviews to ensure code quality, consistency, and adherence to best practices.

Technical Skills: • Big Data hands-on development experience is required. • Demonstrated up-to-date expertise in data engineering and complex data pipeline development. • Design, develop, implement, and tune distributed data processing pipelines that process large volumes of data, focusing on scalability, low-latency, and fault-tolerance in every system built. • Awareness of Data Governance (data quality, metadata management, security, etc.). • Experience with one or more of Python/Java/Scala. • Knowledge and experience with Kafka, Storm, Druid, Cassandra, or Presto is an added advantage.

Education: • A minimum of 5 years of experience is required; 5-8 years is preferred. • A Bachelor of Science degree in Electrical Engineering or Computer Science, a Master's degree, or equivalent experience is required.
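To give a flavour of the Kafka-to-Spark work described above (a sketch only; the topic name, brokers, schema, and paths are assumptions, not NetApp's), a Structured Streaming job aggregating telemetry might look like:

```python
# Sketch: Spark Structured Streaming job consuming telemetry from Kafka and
# writing windowed aggregates. Topic, brokers, schema, paths are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

schema = StructType([
    StructField("system_id", StringType()),
    StructField("metric", StringType()),
    StructField("value", DoubleType()),
    StructField("ts", TimestampType()),
])

spark = SparkSession.builder.appName("telemetry-stream").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "telemetry")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

agg = (events
       .withWatermark("ts", "10 minutes")           # bound late data
       .groupBy(F.window("ts", "5 minutes"), "system_id")
       .agg(F.avg("value").alias("avg_value")))

query = (agg.writeStream
         .outputMode("append")
         .format("parquet")
         .option("path", "/data/telemetry_agg")
         .option("checkpointLocation", "/chk/telemetry_agg")
         .start())
query.awaitTermination()
```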

Posted 1 week ago

Apply

7.0 - 12.0 years

7 - 11 Lacs

Pune

Work from Office

Capco, a Wipro company, is a global technology and management consulting firm. It was awarded Consultancy of the Year at the British Bank Awards and has been ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial, and energy sectors. We are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO: You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry - projects that will transform the financial services industry.

MAKE AN IMPACT: Innovative thinking, delivery excellence, and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK: Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT: With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION: We believe that diversity of people and perspective gives us a competitive advantage.

JOB SUMMARY: Position: Sr Consultant. Location: Pune / Bangalore. Band: M3/M4 (7 to 14 years).

Role Description - Must-Have Skills: Experience in PySpark and Scala + Spark for 4+ years (minimum). Proficient debugging and data analysis skills. Spark experience of 4+ years. Understanding of the SDLC and the Big Data application life cycle. Experience with GitHub and Git commands. Good to have: experience with CI/CD tools such as Jenkins and Ansible. Fast problem solver and self-starter. Experience using Control-M and ServiceNow (for incident management). Positive attitude and good communication skills (both written and verbal), free of mother-tongue interference.

We offer: A work culture focused on innovation and creating lasting value for our clients and employees. Ongoing learning opportunities to help you acquire new skills or deepen existing expertise. A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients. A diverse, inclusive, meritocratic culture. #LI-Hybrid

Posted 1 week ago

Apply

3.0 - 7.0 years

10 - 14 Lacs

Chennai

Work from Office

The Developer leads cloud application development and deployment. A developer's responsibility is to lead the execution of a project by working with a senior-level resource on assigned development/deployment activities, and to design, build, and maintain cloud environments focusing on uptime, access, control, and network security, using automation and configuration management tools.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Strong proficiency in Java, Spring Framework, Spring Boot, and RESTful APIs; excellent understanding of OOP and design patterns. Strong knowledge of ORM tools like Hibernate or JPA and Java-based microservices frameworks; hands-on experience with Spring Boot microservices. Primary skills: Core Java, Spring Boot, Java2/EE, Microservices; Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark; good to have Python. Strong knowledge of microservice logging, monitoring, debugging, and testing. In-depth knowledge of relational databases (e.g., MySQL). Experience with container platforms such as Docker and Kubernetes, and with messaging platforms such as Kafka or IBM MQ. Good understanding of test-driven development. Familiarity with Ant, Maven, or other build automation frameworks. Good knowledge of basic UNIX commands.

Preferred technical and professional experience: Experience in concurrent design and multi-threading.

Posted 1 week ago

Apply

5.0 - 10.0 years

22 - 27 Lacs

Kochi

Work from Office

Create solution outlines and macro designs to describe end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles for the data platform. Contribute to pre-sales and sales support through RfP responses, solution architecture, planning, and estimation. Contribute to reusable component/asset/accelerator development to support capability development. Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud, and related technologies. Participate in customer PoCs to deliver the outcomes. Participate in delivery reviews/product reviews and quality assurance, and work as a design authority.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Experience in designing data products providing descriptive, prescriptive, and predictive analytics to end users or other systems. Experience in data engineering and architecting data platforms. Experience in architecting and implementing data platforms on the Azure Cloud Platform. Experience on Azure cloud is mandatory (ADLS Gen1/Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, and Airflow. Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks.

Preferred technical and professional experience: Experience in architecting complex data platforms on the Azure Cloud Platform and on-prem. Experience and exposure to implementation of Data Fabric and Data Mesh concepts and solutions such as Microsoft Fabric, Starburst, Denodo, IBM Data Virtualisation, Talend, or Tibco Data Fabric. Exposure to data cataloging and governance solutions like Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake data glossary, etc.

Posted 1 week ago

Apply

2.0 - 6.0 years

12 - 16 Lacs

Kochi

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating source-to-target pipelines/workflows and implementing solutions that tackle the client's needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our client's business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Developed PySpark code for AWS Glue jobs and for EMR. Worked on scalable distributed data systems using the Hadoop ecosystem in AWS EMR and the MapR distribution. Developed Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (much like a rules engine). Developed Hadoop streaming jobs using Python for integrating Python-API-supported applications. Developed Python code to gather data from HBase and designed the solution to implement it using PySpark. Used Apache Spark DataFrames/RDDs to apply business transformations and utilized Hive context objects to perform read/write operations. Rewrote some Hive queries in Spark SQL to reduce the overall batch time.

Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala.
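The "rewrote Hive queries in Spark SQL" point above is a common batch-time optimization; as a hedged sketch (the warehouse and table names are hypothetical), the same query moves from the Hive CLI into a Hive-enabled SparkSession:

```python
# Sketch: running a former Hive query through Spark SQL to cut batch time.
# Requires a Hive metastore; database/table/column names are hypothetical.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-to-sparksql")
         .enableHiveSupport()   # modern replacement for the old HiveContext
         .getOrCreate())

daily = spark.sql("""
    SELECT event_date, COUNT(*) AS events
    FROM warehouse.clickstream
    WHERE event_date >= '2024-01-01'
    GROUP BY event_date
""")

# Persist the result back to the metastore for downstream jobs.
daily.write.mode("overwrite").saveAsTable("warehouse.daily_event_counts")
```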

Posted 1 week ago

Apply

2.0 - 6.0 years

12 - 16 Lacs

Bengaluru

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating source-to-target pipelines/workflows and implementing solutions that tackle the client's needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our client's business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Developed PySpark code for AWS Glue jobs and for EMR. Worked on scalable distributed data systems using the Hadoop ecosystem in AWS EMR and the MapR distribution. Developed Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (much like a rules engine). Developed Hadoop streaming jobs using Python for integrating Python-API-supported applications. Developed Python code to gather data from HBase and designed the solution to implement it using PySpark. Used Apache Spark DataFrames/RDDs to apply business transformations and utilized Hive context objects to perform read/write operations. Rewrote some Hive queries in Spark SQL to reduce the overall batch time.

Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala.

Posted 1 week ago

Apply

5.0 - 10.0 years

22 - 27 Lacs

Bengaluru

Work from Office

Create solution outlines and macro designs to describe end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles for the data platform. Contribute to pre-sales and sales support through RfP responses, solution architecture, planning, and estimation. Contribute to reusable component/asset/accelerator development to support capability development. Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud, and related technologies. Participate in customer PoCs to deliver the outcomes.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Candidates must have experience in designing data products providing descriptive, prescriptive, and predictive analytics to end users or other systems. 10-15 years of experience in data engineering and architecting data platforms. 5-8 years' experience in architecting and implementing data platforms on the Azure Cloud Platform. 5-8 years' experience in architecting and implementing data platforms on-prem (Hadoop or DW appliance). Experience on Azure cloud is mandatory (ADLS Gen1/Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, and Airflow. Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks.

Preferred technical and professional experience: Exposure to data cataloging and governance solutions like Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake data glossary, etc. Candidates should have experience in delivering both business decision support systems (reporting, analytics) and data science domains/use cases.

Posted 1 week ago

Apply

2.0 - 5.0 years

14 - 17 Lacs

Mysuru

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating source-to-target pipelines/workflows and implementing solutions that tackle the client's needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our client's business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Must have 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Developed Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (much like a rules engine). Developed Python code to gather data from HBase and designed the solution to implement it using PySpark. Used Apache Spark DataFrames/RDDs to apply business transformations and utilized Hive context objects to perform read/write operations.

Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within Asset & Wealth Management, you serve as a seasoned member of an agile team to design and deliver trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives. Executes software solutions, design, development, and technical troubleshooting with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems. Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems. Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development. Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems. Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture. Contributes to software engineering communities of practice and events that explore new and emerging technologies. Adds to a team culture of diversity, opportunity, inclusion, and respect. Required qualifications, capabilities, and skills: Formal training or certification in software engineering concepts and 3+ years of applied experience. Strong skills in object-oriented analysis and design (OOAD), data structures, algorithms, and design patterns. Strong knowledge and hands-on experience in key technologies: Java (Spring Boot, Dropwizard, or an equivalent framework), containerization (Docker and Kubernetes), and Oracle DB. Hands-on experience in microservices, RESTful web services development, and WebSockets. Experience with messaging and integration frameworks like JMS, RabbitMQ, AMQP, MQ, and Kafka. Experience developing with testing frameworks such as JUnit, Mockito, Karma, Protractor, Jasmine, Mocha, Selenium, and Cucumber. Experience with JDBC/JPA frameworks such as Hibernate or MyBatis. Thorough understanding of the system development life cycle and development methodologies, including Agile. Experience with SQL databases such as Sybase and Oracle. Command of architecture, design, and business processes. Ability to manage relationships with business stakeholders. Organize and prioritize within complex delivery programs. Proficient in a front-end technology, whether React/ReactJS, Redux, Angular/AngularJS, ExtJS, jQuery, NodeJS, or other web frameworks. Working experience in any public cloud such as AWS, Azure, or GCP, and private cloud (Cloud Foundry). Preferred qualifications, capabilities, and skills: Experience working in a financial services environment. Technology coach who helps teams solve technology problems. SRE concepts such as monitoring and log tracing. Good to have: experience with NoSQL databases such as HBase and Cassandra, and tools such as Apache Spark.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

The Engineer Intmd Analyst is an intermediate-level position responsible for a variety of engineering activities including the design, acquisition, and development of hardware, software, and network infrastructure in coordination with the Technology team. The overall objective of this role is to ensure quality standards are being met within existing and planned frameworks. Responsibilities: Provide assistance with product or product component development within the technology domain. Conduct product evaluations with vendors and recommend product customization for integration with systems. Assist with training activities, mentor junior team members, and ensure the team's adherence to all control and compliance initiatives. Assist with application prototyping and recommend solutions around implementation. Provide third-line support to identify the root cause of issues and react to systems and application outages or networking issues. Support projects and provide project status updates to the project manager or senior engineer. Partner with development teams to identify engineering requirements and assist with defining application/system requirements and processes. Create installation documentation and training materials, and deliver technical training to support the organization. Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients, and assets, by driving compliance with applicable laws, rules, and regulations, adhering to policy, applying sound ethical judgment regarding personal behavior, conduct, and business practices, and escalating, managing, and reporting control issues with transparency. Qualifications: 5-8 years of relevant experience in an engineering role. Experience working in financial services or a large, complex, and/or global environment. Involvement in DevOps activities (SRE/LSE, auto-deployment, self-healing) and application support. Tech stack (basic): Java/Python, Unix, Oracle. Essential skills: IT experience working in one of HBase, HDFS, Kafka, Neo4j, Akka, Spark, Storm, and GemFire. IT support experience working in Unix, cloud, and Windows environments. Experience supporting RDBMS/databases like MongoDB, Oracle, Sybase, MS SQL, and DB2. Supported applications deployed in WebSphere, WebLogic, IIS, and Tomcat. Familiar with Autosys and its setup. Understanding of client-server architecture (clustered and non-clustered). Basic networking knowledge (load balancers, network protocols). Working knowledge of Lightweight Directory Access Protocol (LDAP) and single sign-on concepts. ServiceNow expertise. Experience working in a multiple-application support model is preferred. Consistently demonstrates clear and concise written and verbal communication. Comprehensive knowledge of design metrics, analytics tools, benchmarking activities, and related reporting to identify best practices. Demonstrated analytic/diagnostic skills. Ability to work in a matrix environment and partner with virtual teams. Ability to work independently, prioritize, and take ownership of various parts of a project or initiative. Ability to work under pressure and manage tight deadlines or unexpected changes in expectations or requirements. Proven track record of operational process change and improvement. Education: Bachelor's degree/university degree or equivalent experience.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

pune, maharashtra

On-site

At Improzo, we are dedicated to improving life by empowering our customers through quality-led commercial analytical solutions. Our team of experts in commercial data, technology, and operations collaborates to shape the future and work with leading life sciences clients. We prioritize customer success and outcomes, embrace agility and innovation, foster respect and collaboration, and are laser-focused on quality-led execution. As a Data and Reporting Developer (Improzo Level - Associate) at Improzo, you will play a crucial role in designing, developing, and maintaining large-scale data processing systems using big data technologies. You will collaborate with data architects and stakeholders to implement data storage solutions, develop ETL pipelines, integrate various data sources, design and build reports, optimize performance, and ensure seamless data flow. Key Responsibilities: Design, develop, and maintain scalable data pipelines and big data applications using distributed processing frameworks. Collaborate on data architecture, storage solutions, ETL pipelines, data lakes, and data warehousing. Integrate data sources into the big data ecosystem while maintaining data quality. Design and build reports using tools like Power BI, Tableau, and MicroStrategy. Optimize workflows and queries for high performance and scalability. Collaborate with cross-functional teams to deliver data solutions that meet business requirements. Perform testing, quality assurance, and documentation of data pipelines. Participate in agile development processes and stay up-to-date with big data technologies. Qualifications: Bachelor's or Master's degree in a quantitative field. 1.5+ years of experience in data management or reporting projects with big data technologies. Hands-on experience or thorough training in AWS, Azure, GCP, Databricks, and Spark. Experience in a pharma commercial setting or pharma data management is advantageous. Proficiency in Python, SQL, MDM, Tableau, Power BI, and other tools. Excellent communication, presentation, and interpersonal skills. Attention to detail, quality, and client centricity. Ability to work independently and as part of a cross-functional team. Benefits: Competitive salary and benefits package. Opportunity to work on cutting-edge tech projects in the life sciences industry. Collaborative and supportive work environment. Opportunities for professional development and growth.

Posted 1 week ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Mumbai

Work from Office

Project Role: Application Developer. Project Role Description: Design, build, and configure applications to meet business process and application requirements. Must-have skills: Data Engineering. Good-to-have skills: NA. Minimum 2 year(s) of experience is required. Educational Qualification: 15 years full-time education.

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing application features, and ensuring that the applications function seamlessly within the business environment. You will also engage in testing and troubleshooting to enhance application performance and user experience, while continuously seeking ways to improve processes and deliver high-quality solutions.

Roles & Responsibilities: Design, build, and maintain scalable ETL/ELT pipelines for structured and unstructured data. Monitor and analyze key performance metrics (e.g., CTR, CPC, ROAS) to support business objectives. Implement real-time data workflows with anomaly detection and performance reporting. Develop and maintain data infrastructure using tools such as Spark, Hadoop, Kafka, and Airflow. Collaborate with DevOps teams to deploy data solutions in containerized environments (Docker, Kubernetes). Partner with data scientists to prepare, cleanse, and transform data for modeling. Support the development of predictive models using tools like BigQuery ML and Scikit-learn. Work closely with stakeholders across product, design, and executive teams to understand data needs. Ensure compliance with data governance, privacy, and security standards.

Professional & Technical Skills: 1-2 years of experience in data engineering or a similar role. Familiarity with cloud platforms (AWS, GCP, or Azure) and big data tools (Hive, HBase, Spark). Familiarity with DevOps practices and CI/CD pipelines.

Additional Information: This position is based at our Mumbai office. A Master's degree in Computer Science, Engineering, or a related field is preferred. Qualification: 15 years full-time education.
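The metrics named above have standard definitions: CTR = clicks / impressions, CPC = spend / clicks, ROAS = revenue / spend. As a hedged sketch of computing them in a pipeline (input path and column names are assumptions, not from the listing):

```python
# Sketch: computing standard ad-performance metrics in PySpark.
# CTR = clicks/impressions, CPC = spend/clicks, ROAS = revenue/spend.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ad-metrics").getOrCreate()
ads = spark.read.parquet("/data/ad_events")   # hypothetical input path

daily = (ads.groupBy("campaign_id", "event_date")
         .agg(F.sum("impressions").alias("impressions"),
              F.sum("clicks").alias("clicks"),
              F.sum("spend").alias("spend"),
              F.sum("revenue").alias("revenue"))
         .withColumn("ctr",  F.col("clicks")  / F.col("impressions"))
         .withColumn("cpc",  F.col("spend")   / F.col("clicks"))
         .withColumn("roas", F.col("revenue") / F.col("spend")))

daily.show()  # a real job would write these out for reporting/alerting
```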

Posted 1 week ago

Apply

2.0 - 5.0 years

30 - 35 Lacs

Hyderabad

Work from Office

Build and operate a suite of platforms and applications that prevent, detect, and mitigate regulatory and reputational risk across the firm. Have access to the latest technology and to massive amounts of structured and unstructured data. Leverage modern frameworks to build responsive and intuitive UX/UI and Big Data applications. Compliance Engineering is looking to fill several big data software engineering roles. Your first deliverable and success criteria will be the deployment, in 2025, of new complex data pipelines and surveillance models to detect inappropriate trading activity.

HOW YOU WILL FULFILL YOUR POTENTIAL: Partner globally with sponsors, users, and engineering colleagues across multiple divisions to create end-to-end solutions. Learn from experts. Leverage various technologies including Java, Spark, Hadoop, Flink, MapReduce, HBase, JSON, Protobuf, Presto, Elastic Search, Kafka, and Kubernetes. Innovate and incubate new ideas. Work on a broad range of problems, including negotiating data contracts, capturing data quality metrics, processing large-scale data, and building surveillance detection models. Be involved in the full life cycle: defining, designing, implementing, testing, deploying, and maintaining software systems across our products.

QUALIFICATIONS: A Bachelor's or Master's degree in Computer Science, Computer Engineering, or a similar field of study. Expertise in Java, as well as proficiency with databases and data manipulation. Experience in end-to-end solutions, automated testing, and SDLC concepts. The ability (and tenacity) to clearly express ideas and arguments in meetings and on paper. Experience in some of the following is desired and can set you apart from other candidates: developing in large-scale systems, such as MapReduce on Hadoop/HBase; data analysis using tools such as SQL, Spark SQL, and Zeppelin/Jupyter; API design, such as to create interconnected services; knowledge of the financial industry and compliance or risk functions; ability to influence stakeholders.

Posted 1 week ago

Apply

4.0 - 7.0 years

25 - 30 Lacs

Ahmedabad

Work from Office

ManekTech is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey. Liaising with coworkers and clients to elucidate the requirements for each task. Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed. Reformulating existing frameworks to optimize their functioning. Testing such structures to ensure that they are fit for use. Preparing raw data for manipulation by data scientists. Detecting and correcting errors in your work. Ensuring that your work remains backed up and readily accessible to relevant coworkers. Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 1 week ago

Apply

7.0 - 8.0 years

15 - 16 Lacs

Pune

Work from Office

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.

In this role, you will: Design and engineer software with the customer/user experience as a key objective. Actively contribute to the Technology Engineering Practice by sharing subject matter expertise from your area of specialism, best practice, and learnings. Drive adherence to all standards and policies within your area of Technology Engineering. Deliver and support data-related infrastructure and architecture to optimize data storage and consumption across the bank, including addressing functional and non-functional requirements relevant to data in large applications. Design and develop applications for internal and external users, focusing on the interface and front-end usability of the application. Engineer and implement security measures for the protection of internal and external systems, networks, products, and services. Establish a digital environment and automate processes to minimize variation and ensure predictably high-quality code and data. Provide support in the identification and resolution of all incidents associated with the IT service, as directed by the leadership of the DevOps team. Ensure service resilience, service sustainability, and recovery time objectives are met for all the software solutions delivered. Keep up to date and maintain expertise in current tools, technologies, and applicable areas such as cyber security and regulations pertaining to data privacy, consent, and data residency. Ensure compliance with all relevant controls and standards.

Requirements: At least 5 years of IT working experience in development or application support, preferably in an enterprise or global environment. At least 5 years of development and production support experience in BI and data warehousing. Strong system and business analysis skills with excellent knowledge of the project life cycle and application development process. Strong problem-solving skills; able to work independently and within a global team. Excellent command of written and spoken English, with strong communication skills to work with partners globally. Enthusiastic and self-motivated with excellent time management skills. Flexible and adaptable in accommodating changes of requirement and willing to take on new responsibilities when necessary. Ability to work under stress and deliver in a responsive manner. Strong knowledge of the latest data technologies on Hadoop and public cloud platforms (GCP & AWS), including at least CDC, Kafka, Spark (streaming and batch), Kinesis, distributed stores (such as HBase, Hive, Presto), and file systems (such as S3/HDFS). Experience in building distributed / service-oriented / microservices-style and cloud-based architectures. API design knowledge, i.e. REST and standards like JSON. Design and architecture experience in AWS and/or GCP. Familiar with Big Data, Hadoop, Spark, BI, ETL, DB2, Oracle, and Java, and knowledgeable about cloud, i.e. GCP or even private cloud. Familiar with DevOps tooling (Jenkins, Ansible, Terraform, SonarQube, etc.) and the container stack (K8s, Docker, Google Kubernetes Engine).
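As a hedged sketch of the Kafka/Spark/S3 stack listed above (the broker address, topic name, and bucket path are assumptions, not HSBC's), a bounded batch Spark read of a CDC topic landing raw JSON on S3 could look like:

```python
# Sketch: bounded (batch) Spark read from a Kafka CDC topic, landing raw JSON
# records on S3. Broker, topic, and bucket names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-batch-land").getOrCreate()

batch = (spark.read
         .format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "cdc.accounts")
         .option("startingOffsets", "earliest")  # bounded read: fixed offset range
         .option("endingOffsets", "latest")
         .load())

(batch
 .select(F.col("key").cast("string"),
         F.col("value").cast("string").alias("payload"),
         "topic", "partition", "offset", "timestamp")
 .write.mode("append")
 .partitionBy("topic")
 .json("s3a://landing-bucket/cdc/accounts/"))
```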

Posted 1 week ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate, We are looking for a Big Data Developer to build and maintain scalable data processing systems. The ideal candidate will have experience handling large datasets and working with distributed computing frameworks. Key Responsibilities: Design and develop data pipelines using Hadoop, Spark, or Flink. Optimize big data applications for performance and reliability. Integrate various structured and unstructured data sources. Work with data scientists and analysts to prepare datasets. Ensure data quality, security, and lineage across platforms. Required Skills & Qualifications: Experience with the Hadoop ecosystem (HDFS, Hive, Pig) and Apache Spark. Proficiency in Java, Scala, or Python. Familiarity with data ingestion tools (Kafka, Sqoop, NiFi). Strong understanding of distributed computing principles. Knowledge of cloud-based big data services (e.g., EMR, Dataproc, HDInsight). Note: If interested, please share your updated resume and a preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa, Delivery Manager, Integra Technologies

Posted 1 week ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Visakhapatnam

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks. Key Responsibilities: Provide technical leadership across Big Data and Python-based projects Architect, design, and implement scalable data pipelines and processing systems Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions Conduct code reviews and mentor junior engineers to improve code quality and skills Evaluate and implement new tools and frameworks to enhance data capabilities Troubleshoot complex data-related issues and support production deployments Ensure compliance with data security and governance standards

Posted 1 week ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Surat

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks. Key Responsibilities: Provide technical leadership across Big Data and Python-based projects Architect, design, and implement scalable data pipelines and processing systems Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions Conduct code reviews and mentor junior engineers to improve code quality and skills Evaluate and implement new tools and frameworks to enhance data capabilities Troubleshoot complex data-related issues and support production deployments Ensure compliance with data security and governance standards

Posted 1 week ago

Apply

7.0 - 9.0 years

8 - 14 Lacs

Varanasi

Work from Office

Job Summary: We are looking for a seasoned Tech Anchor with deep expertise in Big Data technologies and Python to lead technical design, development, and mentoring across data-driven projects. This role demands a strong grasp of scalable data architecture, problem-solving capabilities, and hands-on experience with distributed systems and modern data frameworks. Key Responsibilities: Provide technical leadership across Big Data and Python-based projects Architect, design, and implement scalable data pipelines and processing systems Guide teams on best practices in data modeling, ETL/ELT development, and performance optimization Collaborate with data scientists, analysts, and stakeholders to ensure effective data solutions Conduct code reviews and mentor junior engineers to improve code quality and skills Evaluate and implement new tools and frameworks to enhance data capabilities Troubleshoot complex data-related issues and support production deployments Ensure compliance with data security and governance standards

Posted 1 week ago

Apply