
7132 Kafka Jobs - Page 26

JobPe aggregates results for easy access, but you apply directly on the original job portal.

0 years

0 Lacs

Delhi, India

On-site


What You'll Do
- Architect and scale modern data infrastructure: ingestion, transformation, warehousing, and access
- Define and drive enterprise data strategy: governance, quality, security, and lifecycle management
- Design scalable data platforms that support both operational insights and ML/AI applications
- Translate complex business requirements into robust, modular data systems
- Lead cross-functional teams of engineers, analysts, and developers on large-scale data initiatives
- Evaluate and implement best-in-class tools for orchestration, warehousing, and metadata management
- Establish technical standards and best practices for data engineering at scale
- Spearhead integration efforts to unify data across legacy and modern platforms

What You Bring
- Experience in data engineering, architecture, or backend systems
- Strong grasp of system design, distributed data platforms, and scalable infrastructure
- Deep hands-on experience with cloud platforms (AWS, Azure, or GCP) and tools like Redshift, BigQuery, Snowflake, S3, Lambda
- Expertise in data modeling (OLTP/OLAP), ETL pipelines, and data warehousing
- Experience with big data ecosystems: Kafka, Spark, Hive, Presto
- Solid understanding of data governance, security, and compliance frameworks
- Proven track record of technical leadership and mentoring
- Strong collaboration and communication skills to align tech with business
- Bachelor's or Master's in Computer Science, Data Engineering, or a related field

Nice To Have (Your Edge)
- Experience with real-time data streaming and event-driven architectures
- Exposure to MLOps and model deployment pipelines
- Familiarity with data DevOps and Infrastructure as Code (Terraform, CloudFormation, CI/CD pipelines)

(ref:hirist.tech)
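The ETL and warehousing work this posting describes can be sketched in miniature. Below is a hedged Python example using only the stdlib `sqlite3` module; the table name, field names, and sample records are all hypothetical, and a real pipeline would read from an ingestion source rather than an in-memory list:

```python
import sqlite3

# Hypothetical raw records, standing in for an ingestion source.
RAW_ORDERS = [
    {"order_id": "1", "amount": "120.50", "region": "north"},
    {"order_id": "2", "amount": "80.00", "region": "south"},
    {"order_id": "3", "amount": "bad", "region": "north"},  # malformed row
]

def transform(rows):
    """Clean and type-cast raw rows, dropping records that fail validation."""
    out = []
    for r in rows:
        try:
            out.append((int(r["order_id"]), float(r["amount"]), r["region"]))
        except ValueError:
            # A production pipeline would route this to a dead-letter store.
            continue
    return out

def load(rows, conn):
    """Write cleaned rows into a warehouse-style table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id INTEGER PRIMARY KEY, amount REAL, region TEXT)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(RAW_ORDERS), conn)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
```

The malformed third row is dropped during `transform`, so only two rows land in the table; swapping SQLite for Redshift or BigQuery changes the driver, not the extract-transform-load shape.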

Posted 2 days ago

Apply

1.0 years

0 Lacs

Surat, Gujarat, India

On-site


We are looking for a skilled and motivated Node.js Backend Developer to join our dynamic team. You will be responsible for developing, maintaining, and optimizing scalable backend solutions that power our web and mobile applications.

Key Responsibilities
- Design, develop, and maintain RESTful APIs using Node.js
- Write clean, scalable, and efficient backend code
- Integrate third-party APIs and data sources
- Optimize application performance and scalability
- Collaborate with frontend developers, designers, and product teams
- Troubleshoot, debug, and upgrade existing systems
- Ensure high-quality code through code reviews and unit testing
- Implement security and data protection best practices
- Work with databases like MongoDB, MySQL, or PostgreSQL
- Participate in agile development processes (Scrum/Kanban)

Requirements
- 1-3 years of experience in Node.js backend development
- Strong understanding of JavaScript (ES6+), Node.js, and Express.js
- Experience working with databases: MongoDB, MySQL, or PostgreSQL
- Familiarity with RESTful API design and integration
- Knowledge of authentication and authorization (JWT, OAuth)
- Experience with version control systems like Git
- Good understanding of asynchronous programming
- Knowledge of API security, performance optimization, and scalability
- Familiarity with cloud platforms (AWS, Azure, or GCP) is a plus
- Experience with Docker and CI/CD pipelines is a plus

Soft Skills
- Problem-solving attitude
- Strong communication and collaboration skills
- Ability to work independently and in a team
- Willingness to learn and adapt to new technologies

Good To Have
- Experience with GraphQL
- Familiarity with microservices architecture
- Experience with message brokers like RabbitMQ or Kafka
- Basic knowledge of DevOps practices

Salary: 6-10 LPA

(ref:hirist.tech)

Posted 2 days ago

Apply

7.0 years

0 Lacs

Greater Kolkata Area

Remote


Omni's team is passionate about Commerce and Digital Transformation. We've been successfully delivering Commerce solutions for clients across North America, Europe, Asia, and Australia. The team has experience executing and delivering projects in B2B and B2C solutions.

Job Description
This is a remote position. We are seeking a Senior Data Engineer to architect and build robust, scalable, and efficient data systems that power AI and Analytics solutions. You will design end-to-end data pipelines, optimize data storage, and ensure seamless data availability for machine learning and business analytics use cases. This role demands deep engineering excellence, balancing performance, reliability, security, and cost to support real-world AI applications.

Key Responsibilities
- Architect, design, and implement high-throughput ETL/ELT pipelines for batch and real-time data processing.
- Build cloud-native data platforms: data lakes, data warehouses, feature stores.
- Work with structured, semi-structured, and unstructured data at petabyte scale.
- Optimize data pipelines for latency, throughput, cost-efficiency, and fault tolerance.
- Implement data governance, lineage, quality checks, and metadata management.
- Collaborate closely with Data Scientists and ML Engineers to prepare data pipelines for model training and inference.
- Implement streaming data architectures using Kafka, Spark Streaming, or AWS Kinesis.
- Automate infrastructure deployment using Terraform, CloudFormation, or Kubernetes operators.

Requirements
- 7+ years in Data Engineering, Big Data, or Cloud Data Platform roles.
- Strong proficiency in Python and SQL.
- Deep expertise in distributed data systems (Spark, Hive, Presto, Dask).
- Cloud-native engineering experience (AWS, GCP, Azure): BigQuery, Redshift, EMR, Databricks, etc.
- Experience designing event-driven architectures and streaming systems (Kafka, Pub/Sub, Flink).
- Strong background in data modeling (star schema, OLAP cubes, graph databases).
- Proven experience with data security, encryption, and compliance standards (e.g., GDPR, HIPAA).

Preferred Skills
- Experience in MLOps enablement: creating feature stores, versioned datasets.
- Familiarity with real-time analytics platforms (ClickHouse, Apache Pinot).
- Exposure to data observability tools like Monte Carlo, Databand, or similar.
- Passionate about building high-scale, resilient, and secure data systems.
- Excited to support AI/ML innovation with state-of-the-art data infrastructure.
- Obsessed with automation, scalability, and best engineering practices.

(ref:hirist.tech)
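The streaming architectures this role mentions (Kafka, Spark Streaming, Kinesis) typically reduce to windowed aggregation over keyed events. As a hedged, broker-free sketch in plain Python, here is a tumbling-window count; the event tuples and window size are hypothetical, and a real system would consume from a topic rather than a list:

```python
from collections import defaultdict

def tumbling_counts(events, window_secs):
    """Group (timestamp, key) events into fixed, non-overlapping windows
    and count occurrences of each key per window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Align each event to the start of its window.
        windows[ts - (ts % window_secs)][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Hypothetical click events: (epoch seconds, user id).
events = [(0, "a"), (3, "b"), (4, "a"), (12, "a"), (13, "b")]
result = tumbling_counts(events, window_secs=10)
```

With a 10-second window, the first three events fall into the 0-9s window and the last two into 10-19s; frameworks like Flink or Spark Streaming add fault tolerance and late-event handling around this same core idea.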

Posted 2 days ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

Remote


Data Engineer - Google Cloud
Location: Remote, India

About Us
Aviato Consulting is looking for a highly skilled and motivated Data Engineer to join our expanding team. This role is ideal for someone with a deep understanding of cloud-based data solutions, with a focus on Google Cloud (GCP) and associated technologies. GCP certification is mandatory for this position to ensure the highest level of expertise and professionalism. You will work directly with clients, translating their business requirements into scalable data solutions, while providing technical expertise and guidance.

Key Responsibilities
- Client Engagement: Work closely with clients to understand business needs, gather technical requirements, and design solutions leveraging GCP services.
- Data Pipeline Design & Development: Build and manage scalable data pipelines using tools such as Apache Beam, Cloud Dataflow, and Cloud Composer.
- Data Warehousing & Lake Solutions: Architect, implement, and optimize BigQuery-based data lakes and warehouses.
- Real-Time Data Processing: Implement and manage streaming data pipelines using Kafka, Pub/Sub, and similar technologies.
- Data Analysis & Visualization: Create insightful dashboards and visualizations using tools like Looker, Data Studio, or Tableau.
- Technical Leadership & Mentorship: Provide guidance and mentorship to team members and clients, helping them leverage the full potential of Google Cloud.

Required Qualifications
- Experience: 5+ years as a Data Engineer working with cloud-based platforms.
- Proven experience in Python with libraries like Pandas and NumPy.
- Strong understanding and experience with FastAPI for building APIs.
- Expertise in building data pipelines using Apache Beam, Cloud Dataflow, or similar tools.
- Solid knowledge of Kafka for real-time data streaming.
- Proficiency with BigQuery, Google Pub/Sub, and other Google Cloud services.
- Familiarity with Apache Hadoop for distributed data processing.

Technical Skills
- Strong understanding of data architecture and processing techniques.
- Experience with big data environments and tools like Apache Hadoop.
- Solid understanding of ETL pipelines: data ingestion, transformation, and storage.
- Knowledge of data modeling, data warehousing, and big data management principles.

Certifications
- Google Cloud certification (Professional Data Engineer or Cloud Architect) is mandatory for this role.

Soft Skills
- Excellent English communication skills.
- Client-facing experience and the ability to manage client relationships effectively.
- Strong problem-solving skills with a results-oriented approach.

Preferred Qualifications
- Visualization Tools: Experience with tools like Looker, Power BI, or Tableau.

Benefits
- Competitive salary and benefits package.
- Opportunities to work with cutting-edge cloud technologies with large customers.
- Collaborative work environment that encourages learning and professional growth.
- A chance to work on high-impact projects for leading clients in diverse industries.

If you're passionate about data engineering, cloud technologies, and solving complex data problems for clients, we'd love to hear from you!

(ref:hirist.tech)

Posted 2 days ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

On-site


About Sleek
Through proprietary software and AI, along with a focus on customer delight, Sleek makes the back office easy for micro SMEs. We give entrepreneurs time back to focus on what they love doing: growing their business and being with customers. With a surging number of entrepreneurs globally, we are innovating in a highly lucrative space.

We Operate 3 Business Segments
- Corporate Secretary: Automating the company incorporation, secretarial, filing, Nominee Director, mailroom and immigration processes via custom online robots and SleekSign. We are the market leaders in Singapore with a 5% market share of all new business incorporations.
- Accounting & Bookkeeping: Redefining what it means to do Accounting, Bookkeeping, Tax and Payroll thanks to our proprietary SleekBooks ledger, AI tools and exceptional customer service.
- FinTech Payments: Overcoming a key challenge for entrepreneurs by offering digital banking services to new businesses.

Sleek launched in 2017 and now has around 15,000 customers across our offices in Singapore, Hong Kong, Australia and the UK. We have around 450 staff with an intact startup mindset. We have achieved >70% compound annual growth in revenue over the last 5 years and as a result have been recognised by The Financial Times, The Straits Times, Forbes and LinkedIn as one of the fastest growing companies in Asia. Backed by world-class investors, we are on track to be one of the few cash flow positive, tech-enabled unicorns.

The Role
We are looking for an experienced Senior Data Engineer to join our growing team. As a key member of our data team, you will design, build, and maintain scalable data pipelines and infrastructure to enable data-driven decision-making across the organization. This role is ideal for a proactive, detail-oriented individual passionate about optimizing and leveraging data for impactful business outcomes.

In this role, you will:
- Work closely with cross-functional teams to translate our business vision into impactful data solutions.
- Drive the alignment of data architecture requirements with strategic goals, ensuring each solution not only meets analytical needs but also advances our core objectives.
- Be pivotal in bridging the gap between business insights and technical execution by tackling complex challenges in data integration, modeling, and security, and by setting the stage for exceptional data performance and insights.
- Shape the data roadmap, influence design decisions, and empower our team to deliver innovative, scalable, high-quality data solutions.

Key Outcomes
- Achieve and maintain a data accuracy rate of at least 99% for all business-critical dashboards by start of day (accounting for corrections and job failures), with a 24-business-hour error-detection and 5-day correction SLA.
- Ensure 95% of data on dashboards originates from technical data pipelines to mitigate data drift.
- Set up strategic dashboards based on business needs which are robust, scalable, and easy and quick to operate and maintain.
- Reduce costs of data warehousing and pipelines by 30%, then maintain costs as data needs grow.
- Achieve 50 eNPS on data services (e.g. dashboards) from key business stakeholders.

Responsibilities
- Data Pipeline Development: Design, implement, and optimize robust, scalable ETL/ELT pipelines to process large volumes of structured and unstructured data.
- Data Modeling: Develop and maintain conceptual, logical, and physical data models to support analytics and reporting requirements.
- Infrastructure Management: Architect, deploy, and maintain cloud-based data platforms (e.g. AWS, GCP).
- Collaboration: Work closely with data analysts, business owners, and stakeholders to understand data requirements and deliver reliable solutions, including designing and implementing robust, efficient and scalable data visualization on Tableau or Looker Studio.
- Data Governance: Ensure data quality, consistency, and security through robust validation and monitoring frameworks.
- Performance Optimization: Monitor, troubleshoot, and optimize the performance of data systems and pipelines.
- Innovation: Stay up to date with the latest industry trends and emerging technologies to continuously improve data engineering practices.

Skills & Qualifications
- Experience: 5+ years in data engineering, software engineering, or a related field.
- Technical Proficiency: Proficiency in working with relational databases (e.g. PostgreSQL, MySQL) and NoSQL databases (e.g. MongoDB, Cassandra); familiarity with big data frameworks like Hadoop, Hive, Spark, Airflow, BigQuery, etc.; strong expertise in programming languages such as Python, NodeJS, and SQL.
- Cloud Platforms: Advanced knowledge of cloud platforms (AWS or GCP) and their associated data services.
- Data Warehousing: Expertise in modern data warehouses like BigQuery, Snowflake or Redshift.
- Tools & Frameworks: Expertise in version control systems (e.g. Git), CI/CD pipelines, and JIRA.
- Big Data Ecosystems / BI: BigQuery, Tableau, Looker Studio.
- Industry Domain Knowledge: Google Analytics (GA), HubSpot, Accounting/Compliance, etc.
- Soft Skills: Excellent problem-solving abilities, attention to detail, and strong communication skills.

Preferred Qualifications
- Degree in Computer Science, Engineering, or a related field.
- Experience with real-time data streaming technologies (e.g. Kafka, Kinesis).
- Familiarity with machine learning pipelines and tools.
- Knowledge of data security best practices and regulatory requirements.

The Interview Process
The successful candidate will participate in the below interview stages (note that the order might be different to what you read below). We anticipate the process to last no more than 3 weeks from start to finish. Whether the interviews are held over video call or in person will depend on your location and the role.
- Case study: a 60-minute chat with the Data Analyst, who will give you some real-life challenges that this role faces and ask for your approach to solving them.
- Career deep dive: a 60-minute chat with the Hiring Manager (COO). They'll discuss your last 1-2 roles to understand your experience in more detail.
- Behavioural fit assessment: a 60-minute chat with our Head of HR or Head of Hiring, who will dive into some of your recent work situations to understand how you think and work.
- Offer + reference interviews: we'll make a non-binding offer verbally or over email, followed by a couple of short phone or video calls with references that you provide, as part of background screening.

Please be aware that Sleek is a regulated entity and as such is required to perform different levels of background checks on staff depending on their role. This may include using external vendors to verify the below:
- Your education.
- Any criminal history.
- Any political exposure.
- Any bankruptcy or adverse credit history.

We will ask for your consent before conducting these checks. Depending on your role at Sleek, an adverse result on one of these checks may prohibit you from passing probation.

(ref:hirist.tech)
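A 99%-accuracy dashboard SLA like the one this posting sets is usually enforced with a reconciliation job that compares pipeline output against a source of truth. A minimal sketch in plain Python, where the record pairs and the threshold wiring are hypothetical:

```python
def accuracy_rate(records):
    """Fraction of (expected, actual) pairs whose pipeline value
    matches the source of truth."""
    if not records:
        return 1.0  # vacuously accurate; a real job might alert instead
    matches = sum(1 for expected, actual in records if expected == actual)
    return matches / len(records)

def meets_sla(records, threshold=0.99):
    """True when the reconciled accuracy clears the SLA threshold."""
    return accuracy_rate(records) >= threshold

# Hypothetical reconciliation output: 99 matching pairs, 1 mismatch.
checks = [(100, 100)] * 99 + [(100, 98)]
rate = accuracy_rate(checks)
```

Here the run sits exactly at the 99% boundary, so the SLA passes; a monitoring framework would run such a check per dashboard and per business day, feeding the 24-business-hour detection clock mentioned above.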

Posted 2 days ago

Apply

2.0 years

0 Lacs

Greater Kolkata Area

On-site


Role & Responsibilities
- Minimum 2 years of experience in Java and related technologies.
- Must have: Core Java, Spring, Spring Boot, APIs, and SQL.
- Good to have: Kafka, Angular, React.
- Excellent verbal and written communication skills.
- Solid experience working with clients directly.
- Strong understanding of data structures, algorithms, object-oriented design and design patterns.
- Solid understanding of and experience with agile software development.

Notes
- Looking only for developers who are strong at and enjoy programming.
- Presence in office all 5 days of the week is mandatory (except in cases of genuine need).

Qualification: Engineering Grad.

Interview Process: Candidates should expect 3 rounds of personal or telephonic interviews.

Compensation: Competitive according to industry standards.

The opportunity is now! Apply if you are interested in being part of a dynamic team, serving clients across industry domains, learning the latest technologies, and reaching your full potential.

(ref:hirist.tech)

Posted 2 days ago

Apply

10.0 - 15.0 years

0 Lacs

Sahibzada Ajit Singh Nagar, Punjab, India

On-site


Job Title: Director - AI Automation & Data Sciences
Experience Required: 10-15 Years
Industry: Legal Technology / Cybersecurity / Data Science
Department: Technology & Innovation

About The Role
We are seeking an exceptional Director - AI Automation & Data Sciences to lead the innovation engine behind our Managed Document Review and Cyber Incident Response services. This is a senior leadership role where you'll leverage advanced AI and data science to drive automation, scalability, and differentiation in service delivery. If you are a visionary leader who thrives at the intersection of technology and operations, this is your opportunity to make a global impact.

Why Join Us
- Cutting-edge AI & Data Science technologies at your fingertips
- Globally recognized Cyber Incident Response Team
- Prestigious clientele of Fortune 500 companies and industry leaders
- Award-winning, inspirational workspaces
- Transparent, inclusive, and growth-driven culture
- Industry-best compensation that recognizes excellence

Key Responsibilities (KRAs)
- Lead and scale AI & data science initiatives across Document Review and Incident Response programs
- Architect intelligent automation workflows to streamline legal review, anomaly detection, and threat analytics
- Drive end-to-end deployment of ML and NLP models into production environments
- Identify and implement AI use cases that deliver measurable business outcomes
- Collaborate with cross-functional teams including Legal Tech, Cybersecurity, Product, and Engineering
- Manage and mentor a high-performing team of data scientists, ML engineers, and automation specialists
- Evaluate and integrate third-party AI platforms and open-source tools for accelerated innovation
- Ensure AI models comply with privacy, compliance, and ethical AI principles
- Define and monitor key metrics to track model performance and automation ROI
- Stay abreast of emerging trends in generative AI, LLMs, and cybersecurity analytics

Technical Skills & Tools
- Proficiency in Python, R, or Scala for data science and automation scripting
- Expertise in Machine Learning, Deep Learning, and NLP techniques
- Hands-on experience with LLMs, Transformer models, and vector databases
- Strong knowledge of data engineering pipelines: ETL, data lakes, and real-time analytics
- Familiarity with Cyber Threat Intelligence, anomaly detection, and event correlation
- Experience with platforms like AWS SageMaker, Azure ML, Databricks, Hugging Face
- Advanced use of TensorFlow, PyTorch, spaCy, Scikit-learn, or similar frameworks
- Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for MLOps
- Strong command of SQL, NoSQL, and big data tools (Spark, Kafka)

Qualifications
- Bachelor's or Master's in Computer Science, Data Science, AI, or a related field
- 10-15 years of progressive experience in AI, Data Science, or Automation
- Proven leadership of cross-functional technology teams in high-growth environments
- Experience working in LegalTech, Cybersecurity, or related high-compliance industries preferred

(ref:hirist.tech)

Posted 2 days ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site


The Java Full Stack Developer is responsible for establishing and implementing new or revised application systems and programs in coordination with the Technology team.

Responsibilities:
- Work in an agile environment following the best practices of agile Scrum.
- Analyze the requirements, seek clarifications, contribute to good acceptance criteria, estimate, and be committed.
- Take pride in designing solutions, developing code free from defects and vulnerabilities, meeting functional and non-functional requirements by following modern engineering practices, reducing rework, and continuously addressing technical debt.
- Contribute to overall team performance by helping others and peer reviewing code diligently.
- Bring agility to application development through DevOps practices: automated builds, unit/functional tests, static/dynamic scans, regression tests, etc.
- Provide the best possible customer support by troubleshooting and resolving production incidents and by eliminating problems at the root level.
- Bring innovative solutions to reduce operational risk by automating mundane, repetitive tasks across the SDLC.
- Learn to become a full stack developer to address end-to-end delivery of user stories.

Qualifications:
- 2+ years of professional full stack software engineering experience developing enterprise-scale applications.
- Expertise in building web applications using Java, Angular/React, and the Oracle/PostgreSQL technology stack.
- Expertise in enterprise integrations through RESTful APIs, Kafka messaging, etc.
- Expertise in Elasticsearch, NoSQL databases, and caching solutions.
- Expertise in designing and optimizing software solutions for performance and stability.
- Expertise in troubleshooting and problem solving.
- Expertise in test-driven development.
- Expertise in authentication, authorization, and security.

Education:
- Bachelor's degree/University degree or equivalent experience

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.

Posted 2 days ago

Apply

0 years

0 Lacs

Jaipur, Rajasthan, India

On-site


Job Description: Java/J2EE Developer
Location: Jaipur, Rajasthan, India (Work From Office)

Job Summary
We are seeking a highly skilled and motivated Senior Java/J2EE Developer with a strong foundation in core Java and J2EE technologies. The ideal candidate will have proven expertise in designing and developing robust, scalable, and highly available applications from the ground up. This role requires a deep understanding of architectural patterns, a strong algorithmic thought process, and hands-on experience in delivering solutions across diverse deployment environments, including traditional data centers and cloud platforms. The candidate should be able to work independently, possess excellent problem-solving skills, and have a passion for learning and adopting new technologies.

Responsibilities
- Design, develop, and implement high-performance, scalable, and secure Java/J2EE applications and microservices.
- Write clean, well-documented, and efficient code adhering to best practices and coding standards.
- Participate in all phases of the software development lifecycle, including requirements gathering, design, development, testing, deployment, and maintenance.
- Contribute to the application and core design, making critical architectural decisions.
- Apply sound algorithmic thinking to solve complex technical challenges.
- Develop and integrate with relational databases (e.g., MySQL, MSSQL, Oracle) and NoSQL databases (e.g., MongoDB).
- Implement and consume Web Services (SOAP/RESTful).
- Work with messaging systems like JMS, RabbitMQ, or Kafka.
- Ensure the performance, scalability, and availability of applications deployed across various environments (traditional data centers, public clouds like AWS, Azure, Google Cloud, and private clouds).
- Implement security best practices in application design and development.
- Troubleshoot and resolve complex technical issues, including performance bottlenecks and scalability challenges.
- Collaborate effectively with cross-functional teams, including product managers, QA engineers, and DevOps engineers.
- Contribute to the continuous improvement of development processes and methodologies.
- Stay up-to-date with the latest technology trends and proactively suggest adoption where beneficial.
- Work independently with minimal supervision and take ownership of assigned tasks.
- Contribute to and adhere to microservices design principles and best practices.
- Utilize and integrate with CI/CD pipelines (e.g., Jenkins, Bitrise, CircleCI, TravisCI).
- Understand and work with Content Delivery Networks (CDNs) like CloudFront, Akamai, and Cloudflare.
- Apply strong analytical and problem-solving skills to identify and resolve technical issues.

Must Have Skills & Experience
- Core Java: Deep understanding of core Java concepts, including data structures, algorithms, multithreading, concurrency, and garbage collection.
- J2EE: Extensive experience with J2EE technologies and frameworks, including Servlets, JSP, EJBs (preferably stateless), and related APIs.
- Spring Framework: Strong proficiency in the Spring ecosystem, including Spring Core, Spring MVC, Spring Boot, Spring Security, and Spring Data JPA/Hibernate.
- Hibernate/JPA: Solid experience with object-relational mapping (ORM) frameworks like Hibernate and JPA.
- Messaging Systems: Hands-on experience with at least one of the following: JMS, RabbitMQ, or Kafka.
- Web Services: Proven ability to design, develop, and consume Web Services (RESTful and/or SOAP).
- Databases: Strong working knowledge of relational databases such as MySQL, MSSQL, and Oracle, including SQL query optimization.
- Design Patterns: In-depth understanding and practical application of various design patterns (e.g., creational, structural, behavioral).
- NoSQL Databases: Familiarity with NoSQL databases like MongoDB and their use cases.
- Microservices Architecture: Knowledge and practical experience in designing, developing, and deploying microservices.
- Security Design: Understanding of security principles and best practices in application development, including authentication, authorization, and data protection.
- Cloud Platforms: Sound knowledge of at least one major cloud platform (AWS, Azure, or Google Cloud) and its services.
- CDNs: Familiarity with Content Delivery Networks (e.g., CloudFront, Akamai, Cloudflare) and their integration.
- Problem Solving & Analytics: Excellent analytical and problem-solving skills with a strong aptitude for identifying and resolving complex technical issues, particularly related to performance and scalability.
- CI/CD: Experience working with Continuous Integration/Continuous Delivery platforms like Jenkins, Bitrise, CircleCI, TravisCI.
- Networking Protocols: Excellent understanding of standard internet protocols such as HTTP/HTTPS, DNS, SSL/TLS.
- Independent Work: Demonstrated ability to work independently, manage tasks effectively, and take ownership of deliverables.
- Learning Agility: A strong passion for learning new technologies and proactively upgrading existing technology versions.

Good To Have Skills
- Experience with containerization technologies like Docker and orchestration tools like Kubernetes.
- Knowledge of front-end technologies like HTML, CSS, JavaScript, and related frameworks (e.g., React, Angular, Vue.js).
- Experience with performance monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack).
- Familiarity with Agile development methodologies.
- Experience with testing frameworks (e.g., JUnit, TestNG, Mockito).

Education And Experience
- Bachelor's degree in Computer Science or a related field

(ref:hirist.tech)

Posted 2 days ago

Apply

0 years

0 Lacs

Jaipur, Rajasthan, India

On-site


Job Summary We are seeking a highly skilled and motivated Java Developer with a strong foundation in Core Java and J2EE to join our dynamic team in Jaipur. The ideal candidate will possess hands-on experience in designing and developing robust and scalable applications from the ground up. You should have a proven track record of delivering highly available services across various technology stacks, including traditional data centers and cloud environments. This role requires a strong problem-solving aptitude, excellent analytical skills, and the ability to work independently while contributing effectively within a team. You will be involved in the full software development lifecycle, from design and implementation to testing and : Design, develop, and implement high-performance and scalable Java/J2EE applications and microservices. Write clean, well-documented, and efficient code following best practices and coding standards. Participate in the entire application lifecycle, including requirements analysis, design, development, testing, deployment, and maintenance. Design and implement robust and secure APIs and web services (RESTful/SOAP). Work with relational databases (e.g., MySQL, MSSQL, Oracle, PostgreSQL) and NoSQL databases (e.g., MongoDB) to design and optimize data models and queries. Apply design patterns and architectural best practices to ensure maintainability, scalability, and reliability of applications. Develop and implement solutions for delivering highly available services on traditional data centers, public clouds (AWS, Azure, Google Cloud), and private clouds. Implement security best practices in application development and deployment. Troubleshoot and resolve complex technical issues related to performance, scalability, and stability. Collaborate effectively with cross-functional teams, including product managers, designers, and QA engineers. Contribute to the continuous improvement of development processes and tools. 
Stay up-to-date with the latest Java technologies, frameworks, and industry trends. Participate in code reviews to ensure code quality and adherence to standards. Work with CI/CD pipelines (e.g., Jenkins, Bitrise, CircleCI, TravisCI) for automated build, test, and deployment processes. Understand and implement standard protocols such as HTTP/HTTPS, DNS, SSL, etc. Demonstrate a passion for learning new technologies and proactively upgrading existing technology stacks. Must Have Skills Core Java : Strong fundamentals and in-depth understanding of Core Java concepts (OOPs, data structures, algorithms, multithreading, concurrency, collections). J2EE : Proven experience with J2EE technologies and frameworks (Servlets, JSP, EJB (good to have), etc.). Spring Framework : Extensive experience with the Spring ecosystem (Spring Core, Spring MVC, Spring Boot, Spring Security, Spring Data JPA/Hibernate). Hibernate/JPA : Solid understanding and practical experience with object-relational mapping frameworks. Messaging Systems : Hands-on experience with at least one of the following : JMS, RabbitMQ, or Kafka. Web Services : Strong experience in developing and consuming RESTful and/or SOAP web services. Databases : Proficient in working with relational databases (MySQL, MSSQL, Oracle) and writing complex SQL queries. Design Patterns : Strong understanding and practical application of various design patterns (creational, structural, behavioral). Database Knowledge : In-depth knowledge of relational database design principles and NoSQL database concepts. Microservices : Ability to work independently and possess a strong understanding and experience in Microservices architecture, design principles, and security considerations. Problem-Solving : Excellent analytical and problem-solving skills with a strong aptitude for identifying and resolving technical challenges. 
Cloud Platforms : Sound knowledge and practical experience with at least one major cloud platform (AWS, Azure, Google Cloud). CDNs : Sound understanding of Content Delivery Networks (CloudFront, Akamai, Cloudflare) and their integration. Performance and Scalability : Strong problem-solving and analytical skills specifically related to performance optimization and ensuring scalability of applications built on the mentioned technologies. CI/CD : Experience working with Continuous Integration/Continuous Delivery platforms (Jenkins, Bitrise, CircleCI, TravisCI, etc.). Networking Protocols : Excellent understanding of standard internet protocols such as HTTP/HTTPS, DNS, SSL, etc. Learning Agility : Passion and eagerness to learn new technologies and adapt to evolving technology stacks. Good to Have Skills : Experience with containerization technologies like Docker and orchestration tools like Kubernetes. Familiarity with front-end technologies like HTML, CSS, JavaScript, and modern JavaScript frameworks (React, Angular, Vue.js). Experience with testing frameworks (JUnit, Mockito, TestNG). Knowledge of security best practices and common security vulnerabilities (OWASP). Experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack). Familiarity with Agile development methodologies (ref:hirist.tech)

Posted 2 days ago


5.0 years

0 Lacs

Greater Lucknow Area

On-site


Job Position : Python Developer - Kafka Experience : 7+ Yrs Location : Anywhere India NP : Immediate - 30 days Skills : Python, FastAPI, LLM, SQL, Kafka, MongoDB Location : Bangalore/Any UST Job Description We are seeking an experienced Python Developer with a strong background in web development and GenAI technologies. The ideal candidate will have a minimum of five years of experience in Python development, including working with frameworks like Flask or FastAPI, and integrating AI-driven features into applications. Responsibilities : Design, develop, and maintain scalable web applications using Python and frameworks such as Flask or FastAPI, with a focus on AI-powered e-commerce solutions. Develop and deploy RESTful APIs to integrate GenAI models into the e-commerce platform. Implement and optimize GenAI models and frameworks to enhance application functionality and performance. Identify, troubleshoot, and resolve technical issues to ensure seamless application performance. Collaborate with cross-functional teams (engineering, product, design) to define, develop, and ship new features. Requirements 5+ years of hands-on experience in Python development, with expertise in Flask or FastAPI. Proven experience with Large Language Models (LLMs) and Natural Language Processing (NLP) libraries. Strong working knowledge of web frameworks like Flask/FastAPI and their integration with AI technologies. Practical experience with vector databases and Retrieval-Augmented Generation (RAG) frameworks. Proficiency in working with SQL/NoSQL databases and caching mechanisms. In-depth understanding of API security principles and microservices architecture. Experience with cloud platforms (AWS preferred) and CI/CD pipelines (e.g., GitHub Actions, Jenkins). Familiarity with version control systems, particularly Git. Strong problem-solving abilities, with the capacity to work independently and collaboratively.
Excellent communication skills with the ability to explain complex technical concepts to non-technical stakeholders. (ref:hirist.tech)
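The retrieval half of the RAG setups this role mentions can be sketched in a few lines of pure Python. The toy embeddings and document store below are stand-ins (hypothetical), not a real vector database or embedding model:

```python
import math

# Toy "embeddings": in practice these come from an embedding model,
# and documents live in a vector database, not an in-memory dict.
DOCS = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.1],
    "payment methods": [0.2, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, k=1):
    """Return the k document keys most similar to the query vector."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, DOCS[d]), reverse=True)
    return ranked[:k]

# A query vector close to the "payment methods" embedding
print(retrieve([0.1, 0.0, 1.0]))  # ['payment methods']
```

A production pipeline would feed the retrieved passages into an LLM prompt; this sketch only shows the similarity-ranking step.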

Posted 2 days ago


5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Company Description 👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale — across all devices and digital mediums, and our people exist everywhere in the world (18000+ experts across 38 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in! Job Description REQUIREMENTS: Total experience: 5+ years. Extensive experience in back-end development utilizing Java 8 or higher, Spring Framework (Core/Boot/MVC), Hibernate/JPA, and Microservices Architecture. Strong working experience in front-end applications using technologies such as TypeScript, JavaScript, React, and micro frontends. Experience with Elasticsearch, MongoDB, and messaging systems like Kafka. Hands-on experience with REST APIs and caching systems (e.g. Redis). Proficiency in Service-Oriented Architecture (SOA) and Web Services (Apache CXF, JAX-WS, JAX-RS, SOAP, REST). Hands-on experience with multithreading and cloud development. Strong working experience in Data Structures and Algorithms, Unit Testing, and Object-Oriented Programming (OOP) principles. Hands-on experience with relational databases such as SQL Server, Oracle, MySQL, and PostgreSQL. Experience with DevOps tools and technologies such as Ansible, Docker, Kubernetes, Puppet, Jenkins, and Chef. Proficiency in build automation tools like Maven, Ant, and Gradle. Hands-on experience with cloud technologies such as AWS/Azure. Strong understanding of UML and design patterns. Ability to simplify solutions, optimize processes, and efficiently resolve escalated issues. Strong problem-solving skills and a passion for continuous improvement. Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
Enthusiasm for learning new technologies and staying updated on industry trends. RESPONSIBILITIES: Writing and reviewing great quality code Understanding functional requirements thoroughly and analyzing the client’s needs in the context of the project Envisioning the overall solution for defined functional and non-functional requirements, and being able to define technologies, patterns and frameworks to realize it Determining and implementing design methodologies and tool sets Enabling application development by coordinating requirements, schedules, and activities. Being able to lead/support UAT and production rollouts Creating, understanding and validating WBS and estimated effort for given module/task, and being able to justify it Addressing issues promptly, responding positively to setbacks and challenges with a mindset of continuous improvement Giving constructive feedback to the team members and setting clear expectations. Helping the team in troubleshooting and resolving complex bugs Coming up with solutions to any issue that is raised during code/design review and being able to justify the decision taken Carrying out POCs to make sure that suggested design/technologies meet the requirements Qualifications Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.

Posted 2 days ago


0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Company Description At Blend, we are award-winning experts who transform businesses by delivering valuable insights that make a difference. From crafting a data strategy that focuses resources on what will make the biggest difference to your company, to standing up infrastructure, and turning raw data into value through data science and visualization: we do it all. We believe that data that doesn't drive value is a lost opportunity, and we are passionate about helping our clients drive better outcomes through applied analytics. We are obsessed with delivering world-class solutions to our customers through our network of industry-leading partners. If this sounds like your kind of challenge, we would love to hear from you. For more information, visit www.blend360.com Job Description We are looking for someone who is ready for the next step in their career and is excited by the idea of solving problems and designing best-in-class solutions. However, they also need to be aware of the practicalities of making a difference in the real world – whilst we love innovative advanced solutions, we also believe that sometimes a simple solution can have the most impact. Our AI Engineer is someone who feels the most comfortable around solving problems, answering questions and proposing solutions. We place a high value on the ability to communicate and translate complex analytical thinking into non-technical and commercially oriented concepts, and experience working on difficult projects and/or with demanding stakeholders is always appreciated. What can you expect from the role?
Contribute to the design, development, deployment, and maintenance of AI solutions. Use a variety of AI Engineering tools and methods to deliver. Own parts of projects end-to-end. Contribute to solutions design and proposal submissions. Support the development of the AI engineering team within Blend. Maintain in-depth knowledge of AI ecosystems and trends. Mentor junior colleagues. Qualifications Contribute to the design, development, testing, deployment, maintenance, and improvement of robust, scalable, and reliable software systems, adhering to best practices. Apply Python programming skills for both software development and AI/ML tasks. Utilize analytical and problem-solving skills to debug complex software, infrastructure, and AI integration issues. Proficiently use version control systems, especially Git and ML/LLMOps model versioning protocols. Assist in analysing complex or ambiguous AI problems, breaking them down into manageable tasks, and contributing to conceptual solution design within the rapidly evolving field of generative AI. Work effectively within a standard software development lifecycle (e.g., Agile, Scrum). Contribute to the design and utilization of scalable systems using cloud services (AWS, Azure, GCP), including compute, storage, and ML/AI services. (Preferred: Azure) Participate in designing and building scalable and reliable infrastructure to support AI inference workloads, including implementing APIs, microservices, and orchestration layers. Contribute to designing, building, or working with event-driven architectures and relevant technologies (e.g., Kafka, RabbitMQ, cloud event services) for asynchronous processing and system integration. Experience with containerization (e.g., Docker) and orchestration tools (e.g., Kubernetes, Airflow, Kubeflow, Databricks Jobs, etc.). Assist in implementing CI/CD pipelines and optionally using IaC principles/tools for deploying and managing infrastructure and ML/LLM models.
Contribute to developing and deploying LLM-powered features into production systems, translating experimental outputs into robust services with clear APIs. Demonstrate familiarity with transformer model architectures and a practical understanding of LLM specifics like context handling. Assist in designing, implementing, and optimising prompt strategies (e.g., chaining, templates, dynamic inputs); practical understanding of output post-processing. Experience integrating with third-party LLM providers, managing API usage, rate limits, token efficiency, and applying best practices for versioning, retries, and failover. Contribute to coordinating multi-step AI workflows, potentially involving multiple models or services, and optimising for latency and cost (sequential vs. parallel execution). Assist in monitoring, evaluating, and optimising AI/LLM solutions for performance (latency, throughput, reliability), accuracy, and cost in production environments. Additional Information Experience specifically with the Databricks MLOps platform. Familiarity with fine-tuning classical LLM models. Experience ensuring security and observability for AI services. Contribution to relevant open-source projects. Familiarity with building agentic GenAI modules or systems. Have hands-on experience implementing and automating MLOps/LLMOps practices, including model tracking, versioning, deployment, monitoring (latency, cost, throughput, reliability), logging, and retraining workflows. Experience working with MLOps/experiment tracking and operational tools (e.g., MLflow, Weights & Biases).
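The prompt templating and chaining mentioned above can be illustrated with plain string templates. The template text and step names below are illustrative only, and `llm()` is a stub standing in for a real model call:

```python
# Hypothetical two-step chain: summarise a ticket, then extract action items.
TEMPLATES = {
    "summarise": "Summarise the following ticket:\n{ticket}",
    "actions": "List action items from this summary:\n{summary}",
}

def llm(prompt: str) -> str:
    # Stub: a real implementation would call a hosted model here, with
    # retries, rate limiting, token accounting, and failover.
    return f"<model output for {len(prompt)} prompt chars>"

def run_chain(ticket: str) -> str:
    """Run the two prompt steps sequentially, feeding one into the next."""
    summary = llm(TEMPLATES["summarise"].format(ticket=ticket))
    return llm(TEMPLATES["actions"].format(summary=summary))

result = run_chain("Customer reports duplicate charge on invoice #123.")
print(result)
```

Chaining sequentially like this trades latency for simplicity; independent steps can instead run in parallel, which is the latency/cost trade-off the listing alludes to.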

Posted 2 days ago


10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About the company: Avenue Code is the leading software consultancy focused on delivering end-to-end development solutions for digital transformation across every vertical. We’re privately held, profitable, and have been on a solid growth trajectory since day one. We care deeply about our clients, our partners, and our people. We prefer the word ‘partner’ over ‘vendor’, and our investment in professional relationships is a reflection of that philosophy. We pride ourselves on our technical acumen, our collaborative problem-solving ability, and the warm professionalism of our teams. Avenue Code has been believing in and promoting plurality actions for over 10 years, understanding that recognizing differences and fostering a safe environment, employment opportunities, representation, and support are the best ways to promote an increasingly equitable culture. About the opportunity: Onsite Position at HDC – India Hyderabad, Telangana This is not a hybrid role – candidates are expected to work from the office 4 days a week. 
Responsibilities: 8+ years software development experience with high volume e-commerce or online retail services, 5 years of which are specific to front-end and integration technologies Demonstrable proficiency and experience in NodeJS-based technologies and/or Java, microservices and integration technologies like Kafka Exposure to API management (via Apigee or Mulesoft), Identity and Access Management technologies (like Ping Federate, OAuth and OpenID Connect) Experience with running workloads on Public Clouds such as AWS, Azure or GCP, and experience with container-based technologies like Docker and Cloud Foundry Prior experience working with Continuous Integration and Deployment in a DevOps oriented product development environment and familiarity with modern MML technologies like Splunk, New Relic and PagerDuty Well-versed in system and technical design principles and performant coding practices, ensuring security requirements are not compromised for functionality and/or performance. Experience with laying out a go-live plan at the conceptual stage, analyzing the pros and cons between multiple options. Experience with Content Management, Digital Asset Management, and/or Personalization Systems is a major plus. SKILLS: – Knowledge of Oracle/Microsoft Dynamics – Programming skills (e.g., Java, C#, Python) – Experience with enterprise application integration (EAI) – Knowledge of business processes and workflows – Effective communication and teamwork skills Avenue Code reinforces its commitment to privacy and to all the principles guaranteed by the most accurate global data protection laws, such as GDPR, LGPD, CCPA and CPRA. The Candidate data shared with Avenue Code will be kept confidential and will not be transmitted to disinterested third parties, nor will it be used for purposes other than the application for open positions.
As a Consultancy company, Avenue Code may share your information with its clients and other Companies from the CompassUol Group to which Avenue Code’s consultants are allocated to perform its services.

Posted 2 days ago


3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Key Responsibilities Set up and maintain monitoring dashboards for ETL jobs using Datadog, including metrics, logs, and alerts. Monitor daily ETL workflows and proactively detect and resolve data pipeline failures or performance issues. Create Datadog Monitors for job status (success/failure), job duration, resource utilization, and error trends. Work closely with Data Engineering teams to onboard new pipelines and ensure observability best practices. Integrate Datadog with tools. Conduct root cause analysis of ETL failures and performance bottlenecks. Tune thresholds, baselines, and anomaly detection settings in Datadog to reduce false positives. Document incident handling procedures and contribute to improving overall ETL monitoring maturity. Participate in on-call rotations or scheduled support windows to manage ETL health. Required Skills & Qualifications 3+ years of experience in ETL/data pipeline monitoring, preferably in a cloud or hybrid environment. Proficiency in using Datadog for metrics, logging, alerting, and dashboards. Strong understanding of ETL concepts and tools (e.g., Airflow, Informatica, Talend, AWS Glue, or dbt). Familiarity with SQL and querying large datasets. Experience working with Python, Shell scripting, or Bash for automation and log parsing. Understanding of cloud platforms (AWS/GCP/Azure) and services like S3, Redshift, BigQuery, etc. Knowledge of CI/CD and DevOps principles related to data infrastructure monitoring. Preferred Qualifications Experience with distributed tracing and APM in Datadog. Prior experience monitoring Spark, Kafka, or streaming pipelines. Familiarity with ticketing tools (e.g., Jira, ServiceNow) and incident management workflows.
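A monitor for job failures like those described above is ultimately a small declarative definition. The metric name `etl.job.failures` and the `pipeline` tag below are hypothetical, but the query shape follows Datadog's metric-monitor syntax; a sketch of building such a definition programmatically:

```python
# Sketch of a Datadog metric-monitor definition for a failed-ETL-job alert.
# "etl.job.failures" and the pipeline tag are assumed names, not part of
# any real account; in practice this dict would be sent to the Monitors API.
def failure_monitor(pipeline: str, window: str = "last_15m") -> dict:
    return {
        "name": f"[ETL] {pipeline} job failures",
        "type": "metric alert",
        "query": f"sum({window}):sum:etl.job.failures{{pipeline:{pipeline}}} > 0",
        "message": "ETL job failed. Check the pipeline logs. @pagerduty",
        "options": {"thresholds": {"critical": 0}, "notify_no_data": True},
    }

monitor = failure_monitor("orders_daily")
print(monitor["query"])  # sum(last_15m):sum:etl.job.failures{pipeline:orders_daily} > 0
```

Generating monitors per pipeline from one template like this keeps thresholds consistent and makes onboarding a new pipeline a one-line change.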

Posted 3 days ago


6.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Position Overview Job Title: Technology Service Analyst, AS Location: Pune, India Corporate Title: AS Role Description At the heart of Deutsche Bank's client franchise is the Corporate Bank (CB), a market leader in Cash Management, Trade Finance & Lending, Securities Services and Trust & Agency services. Focusing on the Treasurers and Finance Departments of Corporate and Commercial clients and Financial Institutions across the globe, our universal expertise and global network allow us to offer truly integrated and effective solutions. You will be operating within Corporate Bank Production as a Production Support Engineer in the Payments domain. The Payments Production domain is part of Cash Management under the Deutsche Bank Corporate Banking division, which supports mission-critical payments processing and FX platforms for multiple business lines such as High Value, Low Value, Bulk, Instant, and Cheques payments. The team provides 24x7 support and follows a ‘follow the sun’ model to provide exceptional and timebound services to clients. Our objective at Corporate Bank Production is to consistently strive to make production better, which ensures a promising end-to-end experience for our corporate clients running their daily Cash Management business through various access channels. We also implement, encourage, and invest in building an engineering culture in our daily activities to achieve the wider objectives. Our strategy aims to reduce the number of issues, provide faster resolution of issues, and safeguard any changes made to our production environment, across all domains at Corporate Bank. You will be accountable for driving a culture of proactive continual improvement in the production environment through application and user request support, troubleshooting and resolving errors in production, automating manual work, improving monitoring, and maintaining platform hygiene, as well as supporting the resolution of issues and conflicts and preparing reports and meetings.
Candidate should have experience in all relevant tools used in the Service Operations environment and has specialist expertise in one or more technical domains and ensures that all associated Service Operations stakeholders are provided with an optimum level of service in line with Service Level Agreements (SLAs) / Operating Level Agreements (OLAs). What We’ll Offer You As part of our flexible scheme, here are just some of the benefits that you’ll enjoy Best in class leave policy Gender neutral parental leaves 100% reimbursement under childcare assistance benefit (gender neutral) Sponsorship for Industry relevant certifications and education Employee Assistance Program for you and your family members Comprehensive Hospitalization Insurance for you and your dependents Accident and Term life Insurance Complementary Health screening for 35 yrs. and above Your Key Responsibilities Acting as a Production Support Analyst for the CB production team providing second level of support for the applications under the tribe working with key stakeholders and team members across the globe in 365 days, 24/7 working model As an individual contributor and prime liaison for the application suite into the incident, problem, change, release, capacity, and continuous improvement. Escalation, Management, and communication of major production incidents Liaising with development teams on new application handover and 3rd line escalation of issues Application rollout activities (may include some weekend activities) Manage SLO for Faster Resolution and Fewer Incident for the Production Application Stability Develop a Continuous Service Improvement approach to resolve IT failings, drive efficiencies and remove repetition to streamline support activities, reduce risk, and improve system availability by understanding emerging trends and proactively addressing them. Carry out technical analysis of the Production platform to identify and remediate performance and resiliency issues. 
Update the RUN Book and KEDB as and when required. Your Skills And Experience Good experience in Production Application Support and ITIL Practices Very good hands-on knowledge of databases (Oracle/PLSQL etc.), including working experience of writing SQL scripts and queries. Very good hands-on experience with UNIX/Linux, Solaris, Java J2EE, Python, PowerShell scripts, tools for automation (RPA, Workload, Batch) Exposure to Kafka, Kubernetes, and microservices is an added advantage. Experience in application performance monitoring tools – Geneos, Splunk, Grafana & New Relic, Scheduling Tools (Control-M) Excellent team player; People Management experience is an advantage. Bachelor's degree. Master's degree a plus. Previous relevant experience in Banking Domain 6+ years’ experience in IT in large corporate environments, specifically in production support. Operating systems (e.g. UNIX, Windows) Understanding of environments Middleware (e.g. MQ, WebLogic, Tomcat, JBoss, Apache, Kafka, etc.) Database environments (e.g. Oracle, MS-SQL, Sybase, NoSQL) Experience in APM Tools like Splunk & Geneos; Control-M/Autosys; AppDynamics. Nice to have: Cloud services: GCP Exposure to Payments domain fundamentals & SWIFT message types Knowledge of uDeploy, Bitbucket Skills That Will Help You Excel Self-motivated with excellent interpersonal, presentation, and communication skills. Able to think strategically with strong analytical and problem-solving skills. Able to handle multiple demands and priorities simultaneously, work under pressure, in an organized manner and with teams across multiple locations and time-zones. Able to connect, manage & influence people from different backgrounds and cultures. A strong team player being part of a global team, communicating, managing, and cooperating closely on a global level while being able to take ownership and deliver independently.
How We’ll Support You Training and development to help you excel in your career Coaching and support from experts in your team A culture of continuous learning to aid progression A range of flexible benefits that you can tailor to suit your needs About Us And Our Teams Please visit our company website for further information: https://www.db.com/company/company.htm We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.

Posted 3 days ago


9.0 - 14.0 years

19 - 34 Lacs

Hyderabad

Work from Office


Job Description Summary The Software Engineer will lead technical development and delivery at Team and sometimes Lab level. They can be a Line Manager who will act as co-lead with the Team PO for overall delivery. In these cases, they will be responsible for overall tech delivery, line management & app ownership alongside their own software engineering output. Others will operate as Individual Contributors, who are specialists in particular technology areas and will be narrower and deeper in focus. Job Description Grows own capabilities by pursuing and investing in personal development opportunities and develops the capabilities of direct reports by working within existing development framework; provides specialised training or coaching in area of expertise to others throughout the organisation. Identifies shortcomings and suggests improvements to existing processes, systems and procedures, then delivers a plan for a small element of a change management programme with guidance from a project/programme manager. Highlights shortcomings and suggests improvements in current IT Security processes, systems and procedures within assigned unit and/or discipline. Delivers prescribed outcomes for area of responsibility by working within established knowledge management systems. Delivers outcomes by managing others and working within established systems. Sets short-term objectives and helps manage the performance of direct reports by working within performance management systems. Explores issues and/or needs to establish potential causes, related issues and barriers. Defines, delivers, and adapts specialized products/services to meet customer needs by selecting the best possible approaches available within established systems.
Delivers prescribed outcomes for a designated area, using risk management systems to ensure the organisation is not exposed to undue risks. Analyses specified problems and issues to find the best technical and/or professional solutions. Develops product specifications while designing testing procedures and standards. Delivers prescribed outcomes for area of responsibility by working within established strategic planning systems.

Posted 3 days ago


4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Razorpay was founded by Shashank Kumar and Harshil Mathur in 2014. Razorpay is building a new-age digital banking hub (Neobank) for businesses in India with the mission to enable frictionless banking and payments experiences for businesses of all shapes and sizes. What started as a B2B payments company is processing billions of dollars of payments for lakhs of businesses across India. We are a full-stack financial services organisation, committed to helping Indian businesses with comprehensive and innovative payment and business banking solutions built over robust technology to address the entire length and breadth of the payment and banking journey for any business. Over the past year, we've disbursed millions of dollars in loans to thousands of businesses. In parallel, Razorpay is reimagining how businesses manage money by simplifying business banking (via Razorpay X) and enabling capital availability for businesses (via Razorpay Capital). DataSync DataSync is Razorpay’s cutting-edge real-time data streaming product that enables businesses to securely and seamlessly access their Razorpay payments, settlements, and transaction data directly in their own data infrastructure, such as Snowflake, Redshift, or Kafka. With DataSync, enterprises can eliminate manual data handling, accelerate reconciliation, enable real-time analytics, and derive deeper insights for financial and operational decision-making. The Role Razorpay is looking for a highly motivated and skilled Enterprise Salesperson. This position is an integral part of our sales engine. This role within the DataSync team is responsible for identifying and creating new qualified opportunities within target Enterprise accounts. Candidates will need to be able to articulate the DataSync value proposition. He/She/They should be able to come up with new outbound ideas and techniques in the enterprise market.
Roles And Responsibilities Develop an in-depth knowledge of Razorpay core products via successful completion of the required sales training program, and utilize this knowledge to successfully lead the regional sales function. Developing strategy, tactics, sales plans and profit targets. Developing and maintaining a relationship with the clients. Identifying and reporting on business opportunities in target regional markets. Representing Razorpay at conferences and networking events. Work effectively with internal support departments (Sales, Marketing and Product Development) to develop effective sales strategies that promote sales to new and existing customers. Continuously research and remain knowledgeable of industry trends and competition. Mandatory Qualifications Experience of 4+ years in sales roles preferably within the B2B Enterprise level and worked as an individual contributor. Excellent written and verbal communication skills. Proven sales experience in specified territories and verticals. Able to drive customer centricity in the team. Quick learner, adaptable to changing business needs. The role involves extensive domestic traveling and hence candidates need to be prepared for the same. Razorpay believes in and follows an equal employment opportunity policy that doesn't discriminate on gender, religion, sexual orientation, colour, nationality, age, etc. We welcome interests and applications from all groups and communities across the globe. Follow us on LinkedIn & Twitter

Posted 3 days ago


5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Role Description Role Proficiency: Act creatively to develop applications and select appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions; account for others' developmental activities. Outcomes: Interpret the application/feature/component design to develop the same in accordance with specifications. Code, debug, test, document, and communicate product/component/feature development stages. Validate results with user representatives; integrate and commission the overall solution. Select appropriate technical options for development, such as reusing, improving, or reconfiguring existing components or creating own solutions. Optimise efficiency, cost, and quality. Influence and improve customer satisfaction. Set FAST goals for self/team; provide feedback on FAST goals of team members. Measures Of Outcomes: Adherence to engineering process and standards (coding standards). Adherence to project schedule/timelines. Number of technical issues uncovered during the execution of the project. Number of defects in the code. Number of defects post delivery. Number of non-compliance issues. On-time completion of mandatory compliance trainings. Code Outputs Expected: Code as per design. Follow coding standards, templates, and checklists. Review code for team and peers. Documentation: Create/review templates, checklists, guidelines, and standards for design/process/development. Create/review deliverable documents: design documentation and requirements, test cases/results. Configure: Define and govern the configuration management plan; ensure compliance from the team. Test: Review and create unit test cases, scenarios, and execution. Review the test plan created by the testing team and provide clarifications to the testing team. Domain Relevance: Advise Software Developers on design and development of features and components with a deep understanding of the business problem being addressed for the client.
Learn more about the customer domain, identifying opportunities to provide valuable additions for customers. Complete relevant domain certifications.

Manage Project: Manage delivery of modules and/or manage user stories.

Manage Defects: Perform defect RCA and mitigation. Identify defect trends and take proactive measures to improve quality.

Estimate: Create and provide input for effort estimation for projects.

Manage Knowledge: Consume and contribute to project-related documents, SharePoint libraries and client universities. Review the reusable documents created by the team.

Release: Execute and monitor the release process.

Design: Contribute to the creation of design (HLD, LLD, SAD)/architecture for applications/features/business components/data models.

Interface With Customer: Clarify requirements and provide guidance to the development team. Present design options to customers. Conduct product demos.

Manage Team: Set FAST goals and provide feedback. Understand team members' aspirations and provide guidance, opportunities, etc. Ensure the team is engaged in the project.

Certifications: Take relevant domain/technology certifications.

Skill Examples: Explain and communicate the design/development to the customer. Perform and evaluate test results against product specifications. Break down complex problems into logical components. Develop user interfaces and business software components. Use data models. Estimate the time and effort required for developing/debugging features/components. Perform and evaluate tests in the customer or target environment. Make quick decisions on technical/project-related challenges. Manage a team; mentor and handle people-related issues. Maintain high motivation levels and positive dynamics in the team. Interface with other teams, designers and other parallel practices. Set goals for self and team.
Provide feedback to team members. Create and articulate impactful technical presentations. Follow a high level of business etiquette in emails and other business communication. Drive conference calls with customers, addressing customer questions. Proactively ask for and offer help. Ability to work under pressure, determine dependencies and risks, and facilitate planning while handling multiple tasks. Build confidence with customers by meeting deliverables on time and with quality. Estimate the time, effort and resources required for developing/debugging features/components. Make appropriate utilization of software and hardware. Strong analytical and problem-solving abilities.

Knowledge Examples: Appropriate software programs/modules. Functional and technical design. Programming languages (proficient in multiple skill clusters). DBMS. Operating systems and software platforms. Software Development Life Cycle. Agile methods (Scrum or Kanban). Integrated development environments (IDE). Rapid application development (RAD). Modelling technologies and languages. Interface definition languages (IDL). Knowledge of the customer domain and deep understanding of the sub-domain where the problem is solved.

Additional Comments

Python Developer:
- 5+ years of work experience using Python and AWS for developing enterprise software applications
- Experience in Apache Kafka, including topic creation, message optimization, and efficient message processing
- Skilled in Docker and container orchestration tools such as Amazon EKS or ECS
- Proven experience designing and developing microservices and RESTful APIs using Spring Boot
- Strong experience managing AWS components, including Lambda (Java), API Gateway, RDS, EC2, CloudWatch
- Experience working in an automated DevOps environment, using tools like Jenkins, SonarQube, Nexus, and Terraform for deployments
- Hands-on experience with Java-based web services, RESTful approaches, ORM technologies, and SQL procedures in Java.
- Experience with Git for code versioning and commit management
- Experience working in Agile teams with a strong focus on collaboration and iterative development
- Ability to implement changes following standard turnover procedures, with a CI/CD focus
- Bachelor's or Master's degree in Computer Science, Information Systems or equivalent

Skills: Python, AWS, ECS
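The "efficient message processing" requirement above usually comes down to consuming records in batches rather than one at a time. Below is a minimal, broker-free sketch of that idea in plain Python, using the standard library's `queue.Queue` as a stand-in for a Kafka topic partition; the function and handler names are illustrative, not from any specific codebase:

```python
import queue

def process_batch(batch):
    # Placeholder handler: in a real consumer this would write to a
    # database, call an API, etc.
    return [msg.upper() for msg in batch]

def drain_in_batches(q, batch_size, timeout=0.05):
    """Pull messages off the queue and hand them to process_batch in groups.

    Batching amortises per-message overhead, which is the same reason Kafka
    consumers poll many records at once instead of one at a time.
    """
    results = []
    batch = []
    while True:
        try:
            msg = q.get(timeout=timeout)
        except queue.Empty:
            break  # no more messages within the timeout window
        batch.append(msg)
        if len(batch) >= batch_size:
            results.extend(process_batch(batch))
            batch = []
    if batch:  # flush the final partial batch
        results.extend(process_batch(batch))
    return results
```

With a real Kafka client the loop body would be replaced by `consumer.poll(...)`, but the batching and final-flush structure stays the same.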

Posted 3 days ago

Apply

6.0 - 9.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About The Role Grade Level (for internal use): 10 The Role: Senior Quality Engineer

The Team: The Quality Engineering team works in partnership with other functions in Technology and the business to deliver quality products by providing software testing services and quality assurance that continuously improve our customers' ability to succeed. The team is independent in driving all decisions and is responsible for the architecture, design and quick turnaround in the development of our products with high quality. The team is located globally.

The Impact: You will ensure the quality of our deliverables meets and exceeds the expectations of all stakeholders, and evangelize the established quality standards and processes. Your challenge will be reducing the time to market for products without compromising quality, by leveraging technology and innovation. These products are directly associated with revenue growth and operations enablement. You strive to achieve personal objectives and contribute to the achievement of team objectives by working on problems of varying scope, where the analysis of situations and/or data requires a review of a variety of factors.

What's in it for you: Do you love testing enterprise-scale applications that serve a large customer base with growing demand and usage? Be part of a successful team which delivers top-priority projects that directly contribute to the company's strategy. You will use a wide range of technologies and have the opportunity to interact with different teams internally. You will also get plenty of learning and skill-building opportunities through participation in innovation projects, training and knowledge sharing. You will have the opportunity to own and drive a project end to end, and to collaborate with developers, business analysts and product managers who are experts in their domains, which can help you build multiple skillsets.
Responsibilities:
- Understand application architecture and system environments (e.g., shared resources, components and services, CPU, memory, storage, network) to troubleshoot production performance issues; perform scalability and capacity planning.
- Work with multiple product teams to design, create, execute, and analyze performance tests, and recommend performance tuning.
- Support remediating performance bottlenecks in application front-end and database layers.
- Drive industry best practices in performance engineering methodologies, quality standards and the CI/CD process.
- Understand user behaviors and analytics models; experience using Kibana and Google Analytics.
- Ensure optimally performing production applications by establishing application and transaction SLAs, implementing proactive application monitoring, alarming and reporting, and ensuring adherence to and measurement against the defined SLAs.
- Analyze, design and develop performance specifications and scripts based on workflows.
- Interpret network/system diagrams and performance test results, and identify improvements.
- Leverage tools and frameworks to develop performance scripts with quality code that simplifies testing scenarios.
- Build efficient solutions for web, services/API, database and mobile performance testing requirements.
- Deliver projects in the performance testing space and ensure delivery efficiency.
- Define testing methodologies and implement tooling best practices for continuous improvement and efficiency.
- Understand business scenarios in depth to define workload modelling for different scenarios.
- Complement the architecture community by providing inputs and pursuing the implementations suggested for optimization.
- Manage testing for highly integrated systems with multiple dependencies and moving parts.
- Cooperate and collaborate actively with teams in various geographic locations.
- Provide prompt response and support in resolving critical issues (along with the development team); may require after-hours/weekend work for production implementations.

What we're looking for:
- Proficiency with the software development lifecycle (SDLC) and software testing techniques in an Agile/Scrum framework.
- Bachelor's/PG degree in Computer Science, Information Systems or equivalent.
- 6-9 years of experience in performance testing/engineering or development, with a good understanding of performance testing concepts.
- Experience with performance testing tools such as Micro Focus StormRunner/LoadRunner/Performance Center and JMeter. Protocols: Web (HTTP/HTML), TruClient (Ajax), Citrix, .NET.
- Programming languages: Java, C#, .NET, Python.
- Working experience with CI/CD for performance testing.
- Debugging tools: browser dev tools, network sniffers, Fiddler, etc.
- Experience with monitoring, profiling and tuning tools, e.g. CA Wily Introscope, AppDynamics, Dynatrace, Datadog, Splunk.
- Experience with databases/SQL (e.g. SQL Server, Cassandra, Elasticsearch, Postgres, MongoDB).
- Experience with message brokers (e.g. Kafka).
- Good knowledge of cloud computing platforms (AWS, Azure) and containers (Docker).
- Web/UI JavaScript frameworks (e.g. AngularJS, Node.js, React).
- Experience gathering Non-Functional Requirements (NFRs), defining strategies to achieve them, and developing test plans.
- Experience testing and optimizing high-volume web and batch-based transactional enterprise applications.
- Experience testing with containers, cloud, virtualization, and configuration management.
- Outstanding flexibility and leadership in communicating and explaining performance test results to both IT and business users.
- Strong communication skills and the ability to produce clear, concise and detailed documentation.
- Excellent problem-solving, analytical and technical troubleshooting skills.
- Experience refactoring performance test suites as necessary.
- Experience working with SOAP and REST services and an understanding of SOA architecture.

Preferred Qualifications: Bachelor's or higher degree in a technology-related field.

About S&P Global Market Intelligence: At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What's In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind.
Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global.

Our Benefits Include:
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.

For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring And Opportunity At S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent.
By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, “pre-employment training” or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf IFTECH202.1 - Middle Professional Tier I (EEO Job Group) Job ID: 316147 Posted On: 2025-06-16 Location: Gurgaon, Haryana, India
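The SLA and workload-analysis responsibilities in the listing above ultimately reduce to percentile arithmetic over response-time samples. Here is a small illustrative sketch using only Python's standard `statistics` module; the function name and the p95 SLO threshold are made up for the example, not part of any particular tool:

```python
import statistics

def latency_report(samples_ms, slo_p95_ms):
    """Summarise response-time samples and check them against a p95 SLO.

    statistics.quantiles with n=100 returns the 1st..99th percentile cut
    points, so index 49 is the median, 94 the 95th, and 98 the 99th.
    """
    pct = statistics.quantiles(samples_ms, n=100)
    p50, p95, p99 = pct[49], pct[94], pct[98]
    return {
        "p50": p50,
        "p95": p95,
        "p99": p99,
        "slo_met": p95 <= slo_p95_ms,  # pass/fail against the target
    }
```

In practice the samples would come from a load-test tool's raw results (JMeter/LoadRunner exports), but the percentile-then-compare step is the same.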

Posted 3 days ago

Apply

5.0 - 31.0 years

0 - 1 Lacs

Nagole, Hyderabad

Remote


🔹 Job Title: Python Backend & Middleware Developer with Database Expertise 📍 Location: Hyderabad 🕒 Experience: 3–6 Years 🧾 Employment Type: Full-time 🔧 Key Responsibilities: 🔸 Python Backend Development: - Design, build, and maintain scalable RESTful APIs using Python (FastAPI/Django/Flask). - Write clean, efficient, and testable code. - Implement backend logic, data processing, and third-party API integrations. - Use asynchronous programming paradigms where required (e.g., asyncio, aiohttp). 🔸 Middleware Development: - Develop and maintain middleware components to handle cross-cutting concerns like logging, authentication, and request/response handling. - Ensure smooth communication between different systems, services, and microservices. - Optimize inter-service communication using message brokers (RabbitMQ, Kafka, etc.). - Implement caching and rate-limiting mechanisms where applicable. 🔸 Database Development: - Design and manage relational (PostgreSQL, MySQL) and NoSQL (MongoDB, Redis) databases. - Write complex SQL queries, stored procedures, and views for efficient data retrieval. - Ensure database normalization, indexing, performance tuning, and optimization. - Implement data backup, recovery strategies, and migration scripts. 🧠 Required Skills: - Strong proficiency in Python 3.x and experience with frameworks like FastAPI, Django, or Flask. - Experience with middleware architecture, API Gateways, or microservice orchestration. - Expertise in SQL and hands-on experience with PostgreSQL / MySQL. - Familiarity with NoSQL databases like MongoDB or Redis. - Knowledge of RESTful APIs, OAuth2/JWT, and API security best practices. - Hands-on experience with Docker, Git, and CI/CD pipelines. - Familiarity with cloud platforms like AWS, GCP, or Azure is a plus. - Good understanding of software design patterns and architecture principles. ✅ Preferred Qualifications: - Bachelor's/Master's degree in Computer Science, Information Technology, or related fields. 
- Experience working in Agile/Scrum teams. - Exposure to Kafka, RabbitMQ, or similar messaging systems. - Experience with Unit Testing, Integration Testing, and Load Testing tools. 🧩 Soft Skills: - Strong problem-solving and analytical skills. - Excellent communication and teamwork abilities. - Ability to manage time effectively and deliver tasks within deadlines.
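The rate-limiting mechanism mentioned under middleware development is most often a token bucket. Below is a deterministic, framework-free sketch in plain Python; the class and parameter names are illustrative, and the clock is injected as an assumption so the behaviour can be exercised without sleeping. A real middleware (FastAPI/Django) would hold one bucket per client key, e.g. per IP:

```python
class TokenBucket:
    """Minimal token-bucket rate limiter.

    capacity    : maximum burst size (tokens the bucket can hold)
    refill_rate : tokens added per second
    clock       : zero-argument callable returning the current time in seconds
    """

    def __init__(self, capacity, refill_rate, clock):
        self.capacity = capacity
        self.refill_rate = refill_rate
        self.clock = clock
        self.tokens = float(capacity)  # start full, allowing an initial burst
        self.last = clock()

    def allow(self):
        """Return True and consume a token if the request may proceed."""
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Requests that get `False` would typically be answered with HTTP 429; the same shape works for per-consumer throttling in front of a message broker.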

Posted 3 days ago

Apply

6.0 - 9.0 years

10 - 18 Lacs

Bengaluru

Hybrid


Experience: 6 to 9 years. Location: Bangalore. Notice period: immediate or 15 days.

Senior Java Developer (Backend)
- Experience: 6+ years.
- Good knowledge of Java 8 and above, with hands-on experience.
- Expert-level programming skills in Java.
- Excellent experience with Java frameworks such as the Spring Framework (including Spring Boot and Spring MVC).
- Good understanding of build tools like Maven, and of Git.
- Hands-on experience debugging and troubleshooting code.
- Experienced with DevOps, CI/CD, Git and Agile methodologies.
- Good to have: experience with a message-driven distributed system (ActiveMQ or Kafka).

Posted 3 days ago

Apply


9.0 - 14.0 years

32 - 35 Lacs

Bengaluru

Work from Office


About The Role Job Title: SDE-2/3. Location: Mumbai/Bangalore/Hyderabad. Experience range: 3 to 12 years.

What we offer: Our mission is simple: building trust. Our customers' trust in us is not merely about the safety of their assets but also about how dependable our digital offerings are. That's why we at Kotak Group are dedicated to transforming banking by imbibing a technology-first approach in everything we do, with an aim to enhance customer experience by providing superior banking services. We welcome and invite the best technological minds in the country to come join us in our mission to make banking seamless and swift. Here, we promise you meaningful work that positively impacts the lives of many.

What we ask for:
- Design, develop and implement software solutions.
- Solve business problems through innovation and engineering practices.
- Be involved in all aspects of the Software Development Lifecycle (SDLC), including analyzing requirements, incorporating architectural standards into application design specifications, documenting application specifications, translating technical requirements into programmed application modules, and developing or enhancing software application modules.
- Identify and troubleshoot application code-related issues.
- Take an active role in code reviews to ensure solutions are aligned to predefined architectural specifications.
- Assist with design reviews by recommending ways to incorporate requirements into designs and information or data flows.
- Participate in project planning sessions with project managers, business analysts, and team members to analyze business requirements and outline proposed solutions.

Qualifications:
- Strong technical background in Java, J2EE or Python, and the Spring stack.
- Well versed in OOP concepts and design patterns.
- Good understanding of data structures and algorithms.
- Strong experience with database systems, both RDBMS (PostgreSQL, Oracle, etc.) and NoSQL (Dynamo, MongoDB, etc.).
Experience in building Microservices and knowledge of workflow orchestration with Camunda or Temporal Knowledge of docker and containerization. Should have good experience in using messaging platforms like Kafka, RabbitMQ, etc. Knowledge in CI/CD Pipeline and Dev Ops tools Knowledge in Cloud Services such as AWS or Azure Should be familiar with Domain Driven Design Passionate and having depth knowledge in agile, Kanban process Should be able to communicate effectively with stakeholders Manage scope, timelines, quality, goals and deliverables that supports business Good communications skills Prior work experience in the product engineering/development. Good to have prior experience in Indian Banking segment and/or Fintech. Education background: Bachelor"™s degree in Computer Science, Information Technology or related field of study Good to have Certifications Java Certified Developer AWS Developer or Solution Architect Experience range required 3-12 Years

Posted 3 days ago


2.0 - 4.0 years

10 - 14 Lacs

Pune

Hybrid


So, what’s the role all about?

We are seeking a skilled and experienced developer with expertise in .NET programming, along with knowledge of LLMs and AI, to join our dynamic team. As a Contact Center Developer, you will be responsible for developing and maintaining contact center applications, with a specific focus on AI functionality. Your role will involve designing and implementing robust and scalable AI solutions, ensuring an efficient agent experience. You will collaborate closely with cross-functional teams, including software developers, system architects, and managers, to deliver cutting-edge solutions that enhance our contact center experience.

How will you make an impact?

  • Develop, enhance, and maintain contact center applications with an emphasis on copilot functionality.
  • Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
  • Perform system analysis, troubleshooting, and debugging to identify and resolve issues.
  • Conduct regular performance monitoring and optimization of code to ensure optimal customer experiences.
  • Maintain documentation, including technical specifications, system designs, and user manuals.
  • Stay up to date with industry trends and emerging technologies in contact centers, AI, LLMs, and .NET development, and apply them to enhance our systems.
  • Participate in code reviews and provide constructive feedback to ensure high-quality code standards.
  • Deliver high-quality, sustainable, maintainable code.
  • Review design and code (pull requests) for other team members, again with a secure-code focus.
  • Work as a member of an agile team responsible for product development and delivery.
  • Adhere to agile development principles while following and improving all aspects of the scrum process.
  • Follow established department procedures, policies, and processes.
  • Adhere to the company Code of Ethics and CXone policies and procedures.
  • Excellent English and experience working in international teams are required.

Have you got what it takes?

  • BS or MS in Computer Science or a related degree
  • 2-4 years’ experience in software development
  • Strong knowledge of working with and developing microservices
  • Ability to design, develop, and maintain scalable .NET applications tailored for contact center copilot solutions using LLM technologies
  • Good understanding of .NET and design patterns, and experience implementing them
  • Experience in developing with REST APIs
  • Ability to integrate various components, including LLM tools, APIs, and third-party services, within the .NET framework to enhance functionality and performance
  • Ability to implement efficient database structures and queries (SQL/NoSQL) to support high-volume data processing and real-time decision-making
  • Experience using Redis for caching frequently accessed data and optimizing query performance, ensuring scalable and responsive application behavior
  • Ability to identify and resolve performance bottlenecks through code refactoring, query optimization, and system architecture improvements
  • Thorough unit testing and debugging of applications to ensure reliability, scalability, and compliance with specified requirements
  • Experience with Git or similar version control systems to manage source code and coordinate with team members on collaborative projects
  • Experience with Docker/Kubernetes is a must
  • Experience with a cloud service provider, Amazon Web Services (AWS)
  • Experience with AWS Cloud on any technology (preferred: Kafka, EKS, Kubernetes)
  • Experience with Continuous Integration workflows and tooling
  • Commitment to staying updated with industry trends, emerging technologies, and best practices in .NET development and LLM applications to drive innovation and efficiency within the team

You will have an advantage if you also have:

  • Strong communication skills
  • Experience with a cloud service provider such as Amazon Web Services (AWS), Google Cloud, Azure, or an equivalent
  • Experience with ReactJS

What’s in it for you?

Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NiCE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NiCEr!

Enjoy NiCE-FLEX!

At NiCE, we work according to the NiCE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7442
Reporting into: Sandip Bhattcharjee
Role Type: Individual Contributor

Posted 3 days ago


Exploring Kafka Jobs in India

Kafka, a popular distributed streaming platform, has gained significant traction in the tech industry in recent years. Job opportunities for Kafka professionals in India have been on the rise, with many companies looking to leverage Kafka for real-time data processing and analytics. If you are a job seeker interested in Kafka roles, here is a comprehensive guide to help you navigate the job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Gurgaon

These cities are known for their thriving tech industries and have a high demand for Kafka professionals.

Average Salary Range

The average salary range for Kafka professionals in India varies based on experience levels. Entry-level positions may start at around INR 6-8 lakhs per annum, while experienced professionals can earn between INR 12-20 lakhs per annum.

Career Path

Career progression in Kafka typically follows a path from Junior Developer to Senior Developer, and then to a Tech Lead role. As you gain more experience and expertise in Kafka, you may also explore roles such as Kafka Architect or Kafka Consultant.

Related Skills

In addition to Kafka expertise, employers often look for professionals with skills in:

  • Apache Spark
  • Apache Flink
  • Hadoop
  • Java/Scala programming
  • Data engineering and data architecture

Interview Questions

  • What is Apache Kafka and how does it differ from other messaging systems? (basic)
  • Explain the role of Zookeeper in Apache Kafka. (medium)
  • How does Kafka guarantee fault tolerance? (medium)
  • What are the key components of a Kafka cluster? (basic)
  • Describe the process of message publishing and consuming in Kafka. (medium)
  • How can you achieve exactly-once message processing in Kafka? (advanced)
  • What is the role of Kafka Connect in the Kafka ecosystem? (medium)
  • Explain the concept of partitions in Kafka. (basic)
  • How does Kafka handle consumer offsets? (medium)
  • What is the role of the Kafka Producer API? (basic)
  • How does Kafka ensure high availability and durability of data? (medium)
  • Explain the concept of consumer groups in Kafka. (basic)
  • How can you monitor Kafka performance and throughput? (medium)
  • What is the purpose of Kafka Streams API? (medium)
  • Describe the use cases where Kafka is not a suitable solution. (advanced)
  • How does Kafka handle data retention and cleanup policies? (medium)
  • Explain the Kafka message delivery semantics. (medium)
  • What are the different security features available in Kafka? (medium)
  • How can you optimize Kafka for high throughput and low latency? (advanced)
  • Describe the role of a Kafka Broker in a Kafka cluster. (basic)
  • How does Kafka handle data replication across brokers? (medium)
  • Explain the significance of serialization and deserialization in Kafka. (basic)
  • What are the common challenges faced while working with Kafka? (medium)
  • How can you scale Kafka to handle increased data loads? (advanced)
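Several of the questions above (partitions, consumer groups, scaling) come down to how a topic's partitions are divided among the consumers in a group. As a study aid, here is a minimal Python sketch of the idea behind Kafka's built-in "range" assignment strategy: sorted consumers each take a contiguous block of partitions, with the first few taking one extra when the counts don't divide evenly. The function name and structure here are illustrative only, not the actual Kafka client API.

```python
def range_assign(num_partitions: int, consumers: list[str]) -> dict[str, list[int]]:
    """Sketch of range-style partition assignment within one consumer group.

    Consumers are sorted so the assignment is deterministic; each gets a
    contiguous range of partition ids, and the remainder partitions go to
    the first consumers in sorted order.
    """
    members = sorted(consumers)
    base, extra = divmod(num_partitions, len(members))
    assignment: dict[str, list[int]] = {}
    start = 0
    for i, member in enumerate(members):
        count = base + (1 if i < extra else 0)  # first `extra` members get one more
        assignment[member] = list(range(start, start + count))
        start += count
    return assignment

# A topic with 5 partitions shared by 2 consumers in one group:
print(range_assign(5, ["consumer-b", "consumer-a"]))
# {'consumer-a': [0, 1, 2], 'consumer-b': [3, 4]}
```

Note how adding a third consumer would redistribute the partitions, which is the mechanism behind scaling consumption horizontally, and why running more consumers than partitions leaves some consumers idle.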

Closing Remark

As you explore Kafka job opportunities in India, remember to showcase your expertise in Kafka and related skills during interviews. Prepare thoroughly, demonstrate your knowledge confidently, and stay updated with the latest trends in Kafka to excel in your career as a Kafka professional. Good luck with your job search!
