5.0 - 10.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Positions: 1. Golang Developer / Software Engineer 2. Team Lead, Golang

Role & responsibilities:
As a Go/Golang Engineer or Team Lead, you will focus on building and maintaining backend systems, APIs, and microservices in the Go programming language. Key responsibilities include designing and implementing scalable, performant solutions, collaborating with cross-functional teams, and ensuring code quality through testing and reviews.

Skills Required:
- Golang
- Kafka
- REST APIs
- Agile environment
- Relational databases (PostgreSQL)
- NoSQL databases (Couchbase, Cassandra)
- Continuous integration tools (Jenkins, GitLab CI)
- Automated build and test frameworks
- Containerization (Docker)
- Container orchestration (Kubernetes)
- Atlassian tools (JIRA, Confluence)
Posted 3 days ago
6.0 - 10.0 years
16 - 22 Lacs
Hyderabad, Pune, Chennai
Work from Office
Full Stack Developer - Java / Angular / Spring Boot / Kotlin / Kafka

We are seeking a talented Full Stack Developer experienced in Java, Kotlin, Spring Boot, Angular, and Apache Kafka to join our dynamic engineering team. The ideal candidate will design, develop, and maintain end-to-end web applications and real-time data processing solutions, leveraging modern frameworks and event-driven architectures.

Key Responsibilities:
- Design, develop, and maintain scalable web applications using Java, Kotlin, Spring Boot, and Angular.
- Build and integrate RESTful APIs and microservices to connect frontend and backend components.
- Develop and maintain real-time data pipelines and event-driven features using Apache Kafka.
- Collaborate with cross-functional teams (UI/UX, QA, DevOps, Product) to define, design, and deliver new features.
- Write clean, efficient, and well-documented code following industry best practices and coding standards.
- Participate in code reviews, provide constructive feedback, and ensure code quality and consistency.
- Troubleshoot and resolve application issues, bugs, and performance bottlenecks in a timely manner.
- Optimize applications for maximum speed, scalability, and security.
- Stay updated with the latest industry trends, tools, and technologies, and proactively suggest improvements.
- Participate in Agile/Scrum ceremonies and contribute to continuous integration and delivery pipelines.

Required Qualifications:
- Experience with cloud-based technologies and deployment (Azure, GCP).
- Familiarity with containerization (Docker, Kubernetes) and microservices architecture.
- Proven experience as a Full Stack Developer with hands-on expertise in Java, Kotlin, Spring Boot, and Angular (Angular 2+).
- Strong understanding of object-oriented and functional programming principles.
- Experience designing and implementing RESTful APIs and integrating them with frontend applications.
- Proficiency in building event-driven and streaming applications using Apache Kafka.
- Experience with database systems (SQL/NoSQL), ORM frameworks (e.g., Hibernate, JPA), and SQL.
- Familiarity with version control systems (Git) and CI/CD pipelines.
- Good understanding of HTML5, CSS3, JavaScript, and TypeScript.
- Experience with Agile development methodologies and working collaboratively in a team environment.
- Excellent problem-solving, analytical, and communication skills.
Posted 3 days ago
2.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Must-Have Skills & Traits

Core Engineering:
- Advanced Python skills with a strong grasp of clean, modular, and maintainable code practices.
- Experience building production-ready backend services using frameworks like FastAPI, Flask, or Django.
- Strong understanding of software architecture, including RESTful API design, modularity, testing, and versioning.
- Experience working with databases (SQL/NoSQL), caching layers, and background job queues.

AI/ML & GenAI Expertise:
- Hands-on experience with machine learning workflows: data preprocessing, model training, evaluation, and deployment.
- Practical experience with LLMs and GenAI tools such as OpenAI APIs, Hugging Face, LangChain, or Transformers.
- Understanding of how to integrate LLMs into applications through prompt engineering, retrieval-augmented generation (RAG), and vector search.
- Comfortable working with unstructured data (text, images) in real-world product environments.
- Bonus: experience with model fine-tuning, evaluation metrics, or vector databases like FAISS, Pinecone, or Weaviate.

Ownership & Execution:
- Demonstrated ability to take full ownership of features or modules from architecture to delivery.
- Able to work independently in ambiguous situations and drive solutions with minimal guidance.
- Experience collaborating cross-functionally with designers, PMs, and other engineers to deliver user-focused solutions.
- Strong debugging, systems thinking, and decision-making skills with an eye toward scalability and performance.

Nice-to-Have Skills:
- Experience in startup or fast-paced product environments.
- 2-5 years of relevant experience.
- Familiarity with asynchronous programming patterns in Python.
- Exposure to event-driven architecture and tools such as Kafka, RabbitMQ, or AWS EventBridge.
- Data science exposure: exploratory data analysis (EDA), statistical modeling, or experimentation.
- Built or contributed to agentic systems, ML/AI pipelines, or intelligent automation tools.
- Understanding of MLOps: model deployment, monitoring, drift detection, or retraining pipelines.
- Frontend familiarity (React, Tailwind) for prototyping or contributing to full-stack features.
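To illustrate the RAG and vector-search requirements above, here is a minimal sketch combining FAISS with the OpenAI embeddings and chat APIs. It is a toy under stated assumptions, not part of the posting: the model names, the two sample documents, and the single-index design are illustrative only.

```python
import numpy as np
import faiss
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Toy corpus standing in for real product documents.
docs = [
    "Kafka is a distributed event streaming platform.",
    "FastAPI is a Python framework for building backend APIs.",
]

def embed(texts):
    # Embedding model name is an assumption for illustration.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data], dtype="float32")

# Index the corpus once; IndexFlatL2 does exact nearest-neighbour search.
vectors = embed(docs)
index = faiss.IndexFlatL2(vectors.shape[1])
index.add(vectors)

def answer(question, k=1):
    # Retrieve the k closest documents and stuff them into the prompt.
    _, ids = index.search(embed([question]), k)
    context = "\n".join(docs[i] for i in ids[0])
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("What is Kafka?"))
```

A production system would persist the index, chunk documents, and add evaluation, but the retrieve-then-generate shape is the same.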
Posted 3 days ago
6.0 - 11.0 years
15 - 30 Lacs
Bengaluru
Hybrid
Role & responsibilities:
Primarily looking for (Java + Kafka) or (Java + Kafka + NoSQL database) profiles.

Technical Skills Required:
- Java: Strong understanding of OOP, multithreading, data structures, and design patterns.
- Kafka: Proficient in designing and building Kafka producers, consumers, and stream processing.
- Couchbase: Experience with data modeling, indexing, N1QL queries, and XDCR.
- Spring Boot, REST APIs: Strong experience in developing microservices and APIs.
- Familiarity with CI/CD tools and containerization (e.g., Docker, Kubernetes).
- Experience in performance tuning and monitoring of distributed systems.
- Version control using Git.
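The producer/consumer pattern this role centers on is language-agnostic even though the posting is Java-centric. As a hedged illustration, a minimal sketch using the kafka-python client; the broker address, topic name, and message shape are assumptions:

```python
import json
from kafka import KafkaProducer, KafkaConsumer

BROKER = "localhost:9092"   # assumed broker address
TOPIC = "orders"            # assumed topic name

# Producer: serialize dicts to JSON and publish them to the topic.
producer = KafkaProducer(
    bootstrap_servers=BROKER,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"order_id": 42, "status": "CREATED"})
producer.flush()

# Consumer: join a consumer group and process messages as they arrive.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BROKER,
    group_id="order-processors",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.value)  # real code would apply business logic here
```

The equivalent Java code uses KafkaProducer/KafkaConsumer from the official client with the same configuration keys.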
Posted 3 days ago
15.0 - 20.0 years
17 - 22 Lacs
Chennai
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable enough to meet the demands of the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-have: Proficiency in PySpark.
- Good-to-have: Experience with Apache Kafka.
- Strong understanding of data warehousing concepts and architecture.
- Familiarity with cloud platforms such as AWS or Azure.
- Experience with SQL and NoSQL databases for data storage and retrieval.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based in Chennai.
- 15 years of full-time education is required.
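The pipeline and ETL responsibilities above map onto a typical PySpark batch job. A minimal sketch follows; the paths, column names, and filter condition are assumptions for illustration, not from the posting:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-example").getOrCreate()

# Extract: read raw CSV data (path and header option are illustrative).
raw = spark.read.option("header", True).csv("s3://raw-bucket/orders/")

# Transform: basic data-quality filtering plus a derived column.
clean = (
    raw
    .dropDuplicates(["order_id"])
    .filter(F.col("amount").cast("double") > 0)
    .withColumn("ingest_date", F.current_date())
)

# Load: write partitioned Parquet to the curated zone.
(clean.write
      .mode("overwrite")
      .partitionBy("ingest_date")
      .parquet("s3://curated-bucket/orders/"))
```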
Posted 3 days ago
15.0 - 20.0 years
17 - 22 Lacs
Pune
Work from Office
Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide innovative solutions to enhance data accessibility and usability.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to ensure efficiency and effectiveness.

Professional & Technical Skills:
- Must-have: Proficiency in PySpark.
- Good-to-have: Experience with Apache Kafka, Apache Airflow, and cloud platforms such as AWS or Azure.
- Strong understanding of data modeling and database design principles.
- Experience with SQL and NoSQL databases for data storage and retrieval.
- Familiarity with data warehousing concepts and tools.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based in Pune.
- 15 years of full-time education is required.
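Since this posting also lists Apache Airflow as a good-to-have, here is a minimal sketch of how such a PySpark job might be scheduled as an Airflow 2.x DAG. The DAG id, schedule, and job path are assumptions for illustration:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

# Minimal DAG: submit the ETL script once a day. The `schedule` kwarg
# assumes Airflow 2.4+; older versions use `schedule_interval`.
with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_etl = BashOperator(
        task_id="run_spark_etl",
        # Assumed job location; {{ ds }} passes the logical run date.
        bash_command="spark-submit /opt/jobs/orders_etl.py --date {{ ds }}",
    )
```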
Posted 3 days ago
10.0 - 15.0 years
12 - 17 Lacs
Bengaluru
Work from Office
Responsibilities:
- Create solution outlines and macro designs describing end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles.
- Contribute to pre-sales and sales support through RFP responses, solution architecture, planning, and estimation.
- Contribute to reusable component / asset / accelerator development to support capability development.
- Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud, and related technologies.
- Participate in customer PoCs to deliver the outcomes.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Experience in designing data products that provide descriptive, prescriptive, and predictive analytics to end users or other systems.
- 10-15 years of experience in data engineering and architecting data platforms.
- 5-8 years of experience in architecting and implementing data platforms on the Azure cloud platform.
- 5-8 years of experience in architecting and implementing data platforms on-prem (Hadoop or DW appliance).
- Experience on the Azure cloud is mandatory (ADLS Gen1/Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, and Airflow.
- Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks.

Preferred technical and professional experience:
- Exposure to data cataloging and governance solutions like Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake data glossary, etc.
- Experience in delivering both business decision support systems (reporting, analytics) and data science domains / use cases.
Posted 3 days ago
5.0 - 10.0 years
7 - 12 Lacs
Pune
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that tackle the client's needs.

Your primary responsibilities include:
- Design, build, optimize, and support new and existing data models and ETL processes based on our client's business requirements.
- Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Must have 5+ years of experience in Big Data: Hadoop, Spark (Scala, Python), HBase, Hive.
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
- Experience developing Python and PySpark programs for data analysis.
- Good working experience with Python to develop custom frameworks for rule generation (akin to a rules engine).
- Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark.
- Used Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations.

Preferred technical and professional experience:
- Understanding of DevOps.
- Experience in building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
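The Spark-with-Hive read/write work mentioned above can be sketched as follows. Note that enableHiveSupport() on a SparkSession is the modern equivalent of the legacy HiveContext the posting refers to; the database, table, and column names are assumptions for illustration:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hive support lets Spark read and write Hive metastore tables directly.
spark = (SparkSession.builder
         .appName("hive-transform")
         .enableHiveSupport()
         .getOrCreate())

# Read from a Hive table, apply a business transformation, write back.
txns = spark.table("raw_db.transactions")
daily = (txns
         .groupBy("account_id", F.to_date("txn_ts").alias("txn_date"))
         .agg(F.sum("amount").alias("daily_total")))
daily.write.mode("overwrite").saveAsTable("curated_db.daily_totals")
```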
Posted 3 days ago
5.0 - 10.0 years
7 - 12 Lacs
Navi Mumbai
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that tackle the client's needs.

Your primary responsibilities include:
- Design, build, optimize, and support new and existing data models and ETL processes based on our client's business requirements.
- Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Must have 5+ years of experience in Big Data: Hadoop, Spark (Scala, Python), HBase, Hive.
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
- Experience developing Python and PySpark programs for data analysis.
- Good working experience with Python to develop custom frameworks for rule generation (akin to a rules engine).
- Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark.
- Used Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations.

Preferred technical and professional experience:
- Understanding of DevOps.
- Experience in building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
Posted 3 days ago
4.0 - 5.0 years
6 - 7 Lacs
Gurugram
Work from Office
Responsibilities:
- Develop processes to proactively monitor and alert on critical metrics.
- DSL query writing and development of trend-analysis graphs (Kibana dashboards) for critical events based on event correlation.
- Implement and manage Logstash pipelines.
- Index management for optimal efficiency.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Education: BE/BTech/MCA/MTech.
- 4-5 years of experience in providing solutions using the Elastic Stack.
- Experience administering production systems where the Elastic Stack runs.
- Experience in end-to-end low-level design, development, and delivery of ELK-based reporting solutions.

Preferred technical and professional experience:
- Understand business requirements and create appropriate indexes and documents.
- Proficiency in Elastic queries for data analysis.
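As a rough illustration of the DSL query writing this role involves, a minimal sketch using the 8.x elasticsearch-py client; the cluster endpoint, index pattern, field names, and time window are assumptions:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed cluster endpoint

# Query DSL: ERROR-level log events from the last 15 minutes,
# bucketed per service for trend analysis / alerting.
resp = es.search(
    index="app-logs-*",
    query={
        "bool": {
            "must": [{"match": {"level": "ERROR"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-15m"}}}],
        }
    },
    aggs={"by_service": {"terms": {"field": "service.keyword"}}},
    size=0,  # aggregation results only, no individual hits
)

for bucket in resp["aggregations"]["by_service"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])
```

The same bool/range/terms structure is what a Kibana dashboard panel generates under the hood.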
Posted 3 days ago
4.0 - 9.0 years
6 - 11 Lacs
Bengaluru
Work from Office
The shift toward the consumption of IT as a service, i.e., the cloud, is one of the most important changes to happen to our industry in decades. At IBM, we are driven to shift our technology to an as-a-service model and to help our clients transform themselves to take full advantage of the cloud. With industry leadership in analytics, security, commerce, and cognitive computing, and with unmatched hardware and software design and industrial research capabilities, no other company is as well positioned to address the full opportunity of cloud computing. We're looking for experienced cloud software engineers to join our App Dev services development team in Bangalore, India. We seek individuals who innovate and share our passion for winning in the cloud marketplace. You will be part of a strong, agile, and culture-driven engineering team responsible for enabling IBM Cloud to move quickly. We are running IBM's next-generation cloud platform to deliver performance and predictability for our customers' most demanding workloads, at global scale and with leadership efficiency, resiliency, and security. It is an exciting time, and as a team we are driven by this incredible opportunity to thrill our clients.

Responsibilities:
- Design and develop innovative, company- and industry-impacting services using open-source and commercial technologies at scale.
- Design and architect enterprise solutions to complex problems.
- Present technical solutions and designs to the engineering team.
- Adhere to compliance requirements and secure engineering best practices.
- Collaborate on and review technical designs with architecture and offering management.
- Take ownership of and be keenly involved in projects that vary in size and scope depending on requirements.
- Write and execute unit, functional, and integration test cases.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Demonstrated analytical skills and data structures/algorithms fundamentals.
- Demonstrated verbal and written communication skills.
- Demonstrated skills in troubleshooting, debugging, maintaining, and improving existing software.
- 4+ years of overall experience in development or engineering.
- 2+ years of experience in cloud architecture and developing cloud-native applications.
- 3+ years of experience with Golang or a related programming language.
- 3+ years of experience with React and Node or related technologies.
- 3+ years of experience developing REST APIs using Golang and/or Python.
- 3+ years of experience with RESTful API design, microservices, and ORM concepts.
- 2+ years of experience with Docker and Kubernetes.
- 2+ years of experience with UI e2e tools and experience with accessibility.
- Experience working with a version control system (Git preferred).

Preferred technical and professional experience:
- Experience with message queues (Kafka and RabbitMQ preferred).
- Experience with relational databases (Postgres preferred).
- Experience with Redis caching.
- Experience with HTML, JavaScript, React, and Node.
- Experience developing test automation.
- Experience with CI/CD pipelines.
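The posting asks for REST API development in Golang and/or Python. On the Python side, a minimal illustrative sketch using FastAPI; the Item resource and in-memory store are assumptions, not IBM's API:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

_store: dict[int, Item] = {}   # in-memory stand-in for a real database

@app.post("/items/{item_id}", status_code=201)
def create_item(item_id: int, item: Item):
    # Pydantic validates the request body before this function runs.
    _store[item_id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int):
    if item_id not in _store:
        raise HTTPException(status_code=404, detail="Item not found")
    return _store[item_id]
```

Run with `uvicorn main:app`; the equivalent Go service would use net/http or a router such as chi with the same resource-oriented routes.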
Posted 3 days ago
6.0 - 11.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Responsibilities:
- Analyzes and designs software modules, features, or components of software programs and develops related specifications.
- Develops, tests, documents, and maintains complex software programs for assigned systems, applications, and/or products.
- Gathers and evaluates software project requirements and apprises the appropriate individual(s).
- Codes, tests, and debugs new software or enhances existing software.
- Troubleshoots and resolves, or recommends solutions to, complex software problems.
- Provides senior-level support and mentoring by evaluating product enhancements for feasibility studies and providing completion time estimates.
- Assists management with the planning, scheduling, and assigning of projects to software development personnel.
- Ensures product quality by participating in design reviews, code reviews, and other mechanisms.
- Participates in developing test procedures for system quality and performance.
- Writes and maintains technical documentation for assigned software projects.
- Provides initial input on new or modified product/application system features or enhancements for user documentation.
- Reviews user documentation for technical accuracy and completeness.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- 6+ years of experience in software development tools and methods; related software languages; test design and configuration; related systems, applications, products, and services.
- 5+ years of experience in enterprise applications using Java, J2EE, and related technologies: Spring, Hibernate, Kafka, SQL, REST APIs, microservices, JSP, etc.
- Familiarity with cloud computing services such as AWS, Azure, and GCP.
- Hands-on experience with Oracle databases or similar.
- Knowledge of scripting languages like Python and Perl.
- Experience in the design and development of UIs.
- Knowledge of different flavours of JS frameworks (React, Angular, Node, etc.).
- Ability to test and analyze data and provide recommendations, organize tasks and determine priorities, and provide guidance to less experienced personnel.

Preferred technical and professional experience:
- Passion for mobile device technologies.
- Proven debugging and troubleshooting skills (memory, performance, battery usage, network usage optimization, etc.).
Posted 3 days ago
4.0 - 7.0 years
6 - 9 Lacs
Gurugram
Work from Office
A Data Engineer specializing in enterprise data platforms, experienced in building, managing, and optimizing data pipelines for large-scale environments, with expertise in big data technologies, distributed computing, data ingestion, and transformation frameworks. Proficient in Apache Spark, PySpark, Kafka, and Iceberg tables, and able to design and implement scalable, high-performance data processing solutions.

What you'll do as a Data Engineer, Data Platform Services:

Data Ingestion & Processing:
- Designing and developing data pipelines to migrate workloads from IIAS to Cloudera Data Lake.
- Implementing streaming and batch data ingestion frameworks using Kafka and Apache Spark (PySpark).
- Working with IBM CDC and Universal Data Mover to manage data replication and movement.

Big Data & Data Lakehouse Management:
- Implementing Apache Iceberg tables for efficient data storage and retrieval.
- Managing distributed data processing with Cloudera Data Platform (CDP).
- Ensuring data lineage, cataloging, and governance for compliance with bank/regulatory policies.

Optimization & Performance Tuning:
- Optimizing Spark and PySpark jobs for performance and scalability.
- Implementing data partitioning, indexing, and caching to enhance query performance.
- Monitoring and troubleshooting pipeline failures and performance bottlenecks.

Security & Compliance:
- Ensuring secure data access, encryption, and masking using Thales CipherTrust.
- Implementing role-based access controls (RBAC) and data governance policies.
- Supporting metadata management and data quality initiatives.

Collaboration & Automation:
- Working closely with Data Scientists, Analysts, and DevOps teams to integrate data solutions.
- Automating data workflows using Airflow and implementing CI/CD pipelines with GitLab and Sonatype Nexus.
- Supporting Denodo-based data virtualization for seamless data access.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- 4-7 years of experience in big data engineering, data integration, and distributed computing.
- Strong skills in Apache Spark, PySpark, Kafka, SQL, and Cloudera Data Platform (CDP).
- Proficiency in Python or Scala for data processing.
- Experience with data pipeline orchestration tools (Apache Airflow, Stonebranch UDM).
- Understanding of data security, encryption, and compliance frameworks.

Preferred technical and professional experience:
- Experience in banking or financial services data platforms.
- Exposure to Denodo for data virtualization and DGraph for graph-based insights.
- Familiarity with cloud data platforms (AWS, Azure, GCP).
- Certifications in Cloudera Data Engineering, IBM Data Engineering, or AWS Data Analytics.
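The streaming ingestion described above (Kafka into a data lake via PySpark) can be sketched with Spark Structured Streaming. This minimal example assumes the spark-sql-kafka connector package is available, and the broker, topic, schema, and paths are illustrative:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Assumed shape of the JSON messages on the topic.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

# Source: subscribe to a Kafka topic as an unbounded stream.
stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load())

# Kafka delivers raw bytes; cast and parse the JSON payload.
parsed = (stream
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Sink: land the parsed stream as Parquet, with checkpointing for recovery.
query = (parsed.writeStream
         .format("parquet")
         .option("path", "/data/lake/events")
         .option("checkpointLocation", "/data/checkpoints/events")
         .start())
query.awaitTermination()
```

An Iceberg sink would swap the format and target a catalog table, but the read-parse-write structure is the same.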
Posted 3 days ago
5.0 - 10.0 years
7 - 12 Lacs
Mumbai
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that tackle the client's needs.

Your primary responsibilities include:
- Design, build, optimize, and support new and existing data models and ETL processes based on our client's business requirements.
- Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Must have 5+ years of experience in Big Data: Hadoop, Spark (Scala, Python), HBase, Hive.
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
- Experience developing Python and PySpark programs for data analysis.
- Good working experience with Python to develop custom frameworks for rule generation (akin to a rules engine).
- Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark.
- Used Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations.

Preferred technical and professional experience:
- Understanding of DevOps.
- Experience in building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
Posted 3 days ago
6.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Job Title: Senior Java Developer (Remote)
Experience: 6 to 8 years
Location: Remote
Employment Type: Full-time
Notice Period: Immediate joiner

Job Summary:
We are looking for a highly skilled and experienced Senior Java Developer to join our distributed team. The ideal candidate should have a strong background in developing scalable enterprise-grade applications using Java and related technologies, with exposure to full-stack development, system integration, and performance optimization.

Key Responsibilities:
- Design and develop high-performance, scalable, and reusable Java-based applications.
- Build RESTful APIs with a strong understanding of RESTful architecture.
- Implement enterprise integration patterns using Apache Camel or Spring Integration.
- Ensure application security in compliance with OWASP guidelines.
- Write and maintain unit, integration, and BDD tests using JUnit, Cucumber, and Selenium.
- Conduct performance and load testing; optimize through memory and thread dump analysis.
- Collaborate with product owners, QA teams, and other developers in Agile/Scrum environments.
- Participate in code reviews, architecture discussions, and mentoring of junior developers.

Technical Skills & Experience Required:

Core Backend:
- Strong proficiency in Java (8 or higher).
- Proficient in Spring Boot, Spring Security, Spring MVC, and Spring Data.
- Solid experience with REST API design, implementation, and testing using Postman and SoapUI.
- Unit testing, integration testing, and BDD testing.

Web Services and Integration:
- Experience with XML, web services (RESTful and SOAP), and Apache CXF.
- Knowledge of enterprise integration patterns.
- Exposure to Apache Camel or Spring Integration.

Frontend & Full Stack:
- Familiarity with HTML5 and CSS3.
- Experience with TypeScript, JavaScript, jQuery, and Node.js.
- Working knowledge of Webpack and Gulp.

Database & Data Streaming:
- Strong in RDBMS and database design (e.g., Oracle, PL/SQL).
- Exposure to MongoDB and NoSQL.
- Understanding of Kafka architecture and Kafka as a data-streaming platform.

Performance & Security:
- Experience in performance analysis and application tuning.
- Understanding of security aspects and OWASP guidelines.
- Experience with memory and thread dump analysis.

Cloud & DevOps:
- Working knowledge of Kubernetes.
- Familiarity with Elastic solutions at the enterprise level.
- Experience with identity and access management tools like ForgeRock.

About IGT Solutions:
IGT Solutions is a next-gen customer experience (CX) company, defining and delivering transformative experiences for the most innovative global brands using digital technologies. With the combination of digital and human intelligence, IGT is the preferred partner for managing end-to-end CX journeys across the travel and high-growth tech industries. We have a global delivery footprint spread across 30 delivery centers in China, Colombia, Egypt, India, Indonesia, Malaysia, the Philippines, Romania, South Africa, Spain, the UAE, the US, and Vietnam, with 25,000+ CX and technology experts from 35+ nationalities. IGT's Digital team collaborates closely with our customers' business and technology teams to take solutions to market faster while sustaining quality, focusing on business value, and improving the overall end-customer experience. Our offerings include industry solutions as well as digital services. We work with leading global enterprise customers to improve synergies between business and technology by enabling rapid business-value realization leveraging digital technologies.
These include lifecycle transformation and rapid development / technology solution delivery services, delivered leveraging traditional as well as digital technologies, deep functional understanding, and software engineering expertise. IGT is ISO 27001:2013, CMMI SVC Level 5, and ISAE 3402 compliant for IT, and COPC® Certified v6.0, ISO 27001:2013, and PCI DSS 3.2 certified for BPO processes. The organization follows Six Sigma rigor for process improvements. It is our policy to provide equal employment opportunities to all individuals based on job-related qualifications and the ability to perform a job, without regard to age, gender, gender identity, sexual orientation, race, color, religion, creed, national origin, disability, genetic information, veteran status, citizenship, or marital status, and to maintain a non-discriminatory environment free from intimidation, harassment, or bias based upon these grounds.
Posted 3 days ago
6.0 - 11.0 years
8 - 13 Lacs
Hyderabad
Work from Office
As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.

Responsibilities:
- Manage end-to-end feature development and resolve challenges faced in implementing it.
- Learn new technologies and apply them to feature development within the time frame provided.
- Manage debugging, root-cause analysis, and fixing of issues reported on the Content Management back-end software system.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Overall, more than 6 years of experience, with more than 4 years of strong hands-on experience in Python and Spark.
- Strong technical ability to understand, design, write, and debug applications in Python and PySpark.
- Strong problem-solving skills.

Preferred technical and professional experience:
- Hands-on experience with cloud technology: AWS, GCP, or Azure.
Posted 3 days ago
4.0 - 9.0 years
6 - 11 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS cloud data platform.

Responsibilities:
- Build data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS.
- Develop efficient software code leveraging the Spark framework with Python or Scala and big data technologies for various use cases built on the platform.
- Develop streaming pipelines.
- Work with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on AWS.
- Experience with AWS EMR / AWS Glue / Databricks, AWS Redshift, and DynamoDB.
- Good to excellent SQL skills.
- Exposure to streaming solutions and message brokers like Kafka.

Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera Spark certified developers.
Posted 3 days ago
7.0 - 12.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Your Role & Responsibilities:
Looking to make a significant impact? This is your chance to become a key part of a dynamic team of talented professionals, leading the development and deployment of innovative, industry-leading, cloud-based AI services. We are seeking an experienced AI & Cloud Software Engineer to join us. This role involves designing, developing, and deploying AI-based services. You will be instrumental in problem-solving, automating wide ranges of tasks, interfacing with other teams, and solving complex problems.

Responsibilities:
- Develop AI capabilities in IBM Cloud based applications.
- Be an avid coder who can get their hands dirty and be involved in coding to the deepest level.
- Work in an agile environment of continuous delivery. You'll have access to all the technical training courses you need to become the expert you want to be.
- Define all aspects of development, from appropriate technology and workflow to coding standards.
- Collaborate with other professionals to determine functional and non-functional requirements.
- Participate in technical reviews of requirements, specifications, designs, code, and other artifacts.
- Learn new skills and adopt new practices readily in order to develop innovative and cutting-edge software products that maintain the company's technical leadership position.

Required education: Bachelor's Degree

Required technical and professional expertise:
- Minimum 7-12 years of experience as a Full Stack Developer with a focus on AI projects.
- Experience with AI and machine learning frameworks such as scikit-learn, TensorFlow, PyTorch, LLMs, and generative AI.
- Familiarity with AI model deployment and integration.
- Solid understanding of backend technologies, including server-side languages (Node.js, Python, Java, etc.) and databases (Cassandra, PostgreSQL, etc.).
- Understanding and experience with RESTful APIs, Java/J2EE, Kafka, and GitHub.
- Strong experience with cloud technologies, Kubernetes-based microservices architecture, Kafka, object storage, the Cassandra database, and Docker container technologies. Knowledge of IBM Cloud technologies is an added advantage.
- At least 6 years of hands-on development experience building applications with one or more of the following: Java, Spring, Liberty, Node.js, Express.js, Golang, NoSQL DBs, Redis, distributed caches, containers, etc.
- At least 3 years of experience in building and operating highly secured, distributed cloud services with one or more of the following: IBM Cloud, AWS, Azure, SRE, CI/CD, Docker, container orchestration, performance testing, etc.
- At least 3 years of experience in web technologies: HTTP, REST, JSON, HTML, Ajax, JavaScript, etc.
- Solid understanding of microservices architecture and modern cloud programming practices.
- Strong ability to design a clean, developer-friendly API.
- Passionate about constant, continuous learning and applying new technologies, as well as mentoring others.
- Keen troubleshooting skills and strong verbal/written communication skills.

Preferred technical and professional experience:
- Experience in using messaging brokers like RabbitMQ, Kafka, etc.
- Operating systems (such as Red Hat, Ubuntu, etc.).
- Knowledge of network protocols such as TCP/IP, HTTP, etc.
- Experience and working knowledge of version control systems like GitHub and build tools like Maven/Gradle.
- Ability to learn and apply new technologies quickly.
- Experience working on a SaaS application with high industry-standard CI/CD and development cycle processes.
- Strong sense of ownership of deliverables.
- UI test automation skills - Selenium and/or Puppeteer.

Beyond the requirements, candidates should be passionate about:
- Continuous learning and the ability to adapt to change.
- Working across global teams and collaborating across teams and organization boundaries.
- Finding innovative ways to solve complex problems with cutting-edge technologies.
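The scikit-learn experience this role asks for can be illustrated with a minimal train-and-evaluate sketch; the bundled iris dataset and logistic-regression model are stand-ins for real production data and models:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Hold out a test split; a real service would load production data
# and persist the fitted pipeline for deployment behind an API.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Pipeline bundles preprocessing with the model so serving stays consistent.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```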
Posted 3 days ago
12.0 - 17.0 years
14 - 19 Lacs
Bengaluru
Work from Office
Your Role & Responsibilities:
Looking to make a significant impact? This is your chance to become a key part of a dynamic team of talented professionals, leading the development and deployment of innovative, industry-leading, cloud-based AI services. We are seeking an experienced AI & Cloud Software Engineer to join us. This role involves designing, developing, and deploying AI-based services. You will be instrumental in problem-solving, automating wide ranges of tasks, interfacing with other teams, and solving complex problems.

Responsibilities:
- Develop AI capabilities in IBM Cloud based applications.
- Be an avid coder who can get their hands dirty and be involved in coding to the deepest level.
- Work in an agile environment of continuous delivery. You'll have access to all the technical training courses you need to become the expert you want to be.
- Define all aspects of development, from appropriate technology and workflow to coding standards.
- Collaborate with other professionals to determine functional and non-functional requirements.
- Participate in technical reviews of requirements, specifications, designs, code, and other artifacts.
- Learn new skills and adopt new practices readily in order to develop innovative and cutting-edge software products that maintain the company's technical leadership position.

Required education: Bachelor's Degree

Required technical and professional expertise:
- Full Stack & AI/ML: 7–12 years' experience with AI/ML tools (scikit-learn, TensorFlow, PyTorch, LLMs), model deployment, and full-stack development.
- Backend & APIs: Strong in Java, Python, Node.js, REST APIs, Kafka, and databases like Cassandra and PostgreSQL.
- Cloud & DevOps: Expertise in IBM Cloud/AWS/Azure, Kubernetes, Docker, microservices, CI/CD, and SRE practices.
- Web & Architecture: Proficient in web technologies (HTTP, JSON, HTML, JS) and modern cloud/microservices architecture, with API design skills.

Preferred technical and professional experience:
- Messaging & OS: Experience with Kafka, RabbitMQ, and Linux environments (Red Hat, Ubuntu).
- Networking & Tools: Knowledge of TCP/IP and HTTP protocols, GitHub, Maven/Gradle.
- SaaS & CI/CD: Background in SaaS apps, CI/CD pipelines, and agile development cycles.
- Testing & Automation: Familiarity with UI test tools like Selenium or Puppeteer.
- Mindset: Ownership, adaptability, global collaboration, and eagerness to solve complex problems with new technologies.
Posted 3 days ago
5.0 - 7.0 years
7 - 9 Lacs
Bengaluru
Work from Office
As a Senior SAP Consultant, you will serve as a client-facing practitioner, working collaboratively with clients to deliver high-quality solutions, and be a trusted business advisor with a deep understanding of the SAP Accelerate delivery methodology (or equivalent) and associated work products. You will work on projects that assist clients in integrating strategy, process, technology, and information to enhance effectiveness, reduce costs, and improve profit and shareholder value. There are opportunities for you to acquire new skills, work across different disciplines, take on new challenges, and develop a comprehensive understanding of various industries.

Your primary responsibilities include:
- Strategic SAP solution focus: Working across technical design, development, and implementation of SAP solutions for simplicity, amplification, and maintainability that meet client needs.
- Comprehensive solution delivery: Involvement in strategy development and solution implementation, leveraging your knowledge of SAP and working with the latest technologies.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- A total of 5-7+ years of experience in data management (DW, DL, data platform, lakehouse) and data engineering skills.
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on AWS; exposure to streaming solutions and message brokers like Kafka.
- Experience with AWS EMR / AWS Glue / Databricks, AWS Redshift, and DynamoDB.
- Good to excellent SQL skills.

Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera Spark certified developers.
Posted 3 days ago
Kafka, a popular distributed streaming platform, has gained significant traction in the tech industry in recent years. Job opportunities for Kafka professionals in India have been on the rise, with many companies looking to leverage Kafka for real-time data processing and analytics. If you are a job seeker interested in Kafka roles, here is a comprehensive guide to help you navigate the job market in India.
Tech hubs such as Bengaluru, Hyderabad, Pune, Chennai, Mumbai, and Gurugram are known for their thriving tech industries and have a high demand for Kafka professionals.
The average salary range for Kafka professionals in India varies based on experience levels. Entry-level positions may start at around INR 6-8 lakhs per annum, while experienced professionals can earn between INR 12-20 lakhs per annum.
Career progression in Kafka typically follows a path from Junior Developer to Senior Developer, and then to a Tech Lead role. As you gain more experience and expertise in Kafka, you may also explore roles such as Kafka Architect or Kafka Consultant.
In addition to Kafka expertise, employers often look for professionals with skills in:
- Apache Spark
- Apache Flink
- Hadoop
- Java/Scala programming
- Data engineering and data architecture
As you explore Kafka job opportunities in India, remember to showcase your expertise in Kafka and related skills during interviews. Prepare thoroughly, demonstrate your knowledge confidently, and stay updated with the latest trends in Kafka to excel in your career as a Kafka professional. Good luck with your job search!